TY - GEN
T1 - Discounted expert weighting for concept drift
AU - Ditzler, Gregory
AU - Rosen, Gail
AU - Polikar, Robi
PY - 2013
Y1 - 2013
N2 - Multiple expert systems (MES) have been widely used in machine learning because of their inherent ability to decrease variance and improve generalization performance by receiving advice from more than one expert. However, a typical MES explicitly assumes that training and testing data are independent and identically distributed (iid), which, unfortunately, is often violated in practice when the probability distribution generating the data changes with time. One of the key aspects of any MES algorithm deployed in such environments is the decision rule used to combine the decisions of the experts. Many MES algorithms choose adaptive weighting schemes that adjust the weights of a classifier based on its loss in recent time, or use an average of the experts' probabilities. However, in a stochastic setting where the loss of an expert is uncertain at a future point in time, which combiner method is the most reliable? In this work, we show that non-uniform weighting of experts can provide a stable upper bound on loss compared to techniques such as a follow-the-leader or uniform methodology. Several well-studied MES approaches are tested on a variety of real-world data sets to support and demonstrate the theory.
KW - concept drift
KW - multiple expert systems
KW - nonstationary environments
UR - http://www.scopus.com/inward/record.url?scp=84885030861&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84885030861&partnerID=8YFLogxK
U2 - 10.1109/CIDUE.2013.6595773
DO - 10.1109/CIDUE.2013.6595773
M3 - Conference contribution
AN - SCOPUS:84885030861
SN - 9781467358491
T3 - Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments, CIDUE 2013 - 2013 IEEE Symposium Series on Computational Intelligence, SSCI 2013
SP - 61
EP - 67
BT - Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments, CIDUE 2013 - 2013 IEEE Symposium Series on Computational Intelligence, SSCI 2013
T2 - 2013 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments, CIDUE 2013 - 2013 IEEE Symposium Series on Computational Intelligence, SSCI 2013
Y2 - 16 April 2013 through 19 April 2013
ER -