Multiple expert systems (MES) have been widely used in machine learning because of their inherent ability to decrease variance and improve generalization performance by receiving advice from more than one expert. However, a typical MES explicitly assumes that the training and testing data are independent and identically distributed (iid), an assumption that is often violated in practice when the probability distribution generating the data changes with time. A key aspect of any MES algorithm deployed in such environments is the decision rule used to combine the decisions of the experts. Many MES algorithms choose adaptive weighting schemes that adjust a classifier's weight based on its recent loss, or simply average the experts' probabilities. However, in a stochastic setting where the loss of an expert at a future point in time is uncertain, which combiner method is the most reliable? In this work, we show that non-uniform weighting of experts can provide a stable upper bound on loss compared to techniques such as follow-the-leader or uniform weighting. Several well-studied MES approaches are tested on a variety of real-world data sets to support and demonstrate the theory.
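To make the contrast between combiner rules concrete, the following is a minimal sketch (function names and the learning-rate parameter `eta` are illustrative, not taken from the paper) of the three strategies mentioned above: uniform averaging of expert probabilities, follow-the-leader, and non-uniform exponential weighting by cumulative loss.

```python
import math

def uniform_combine(preds):
    """Uniform combiner: average the experts' predicted probabilities."""
    return sum(preds) / len(preds)

def follow_the_leader(preds, cum_losses):
    """Follow-the-leader: output the prediction of the expert with the
    smallest cumulative loss observed so far."""
    leader = min(range(len(preds)), key=lambda i: cum_losses[i])
    return preds[leader]

def exp_weighted(preds, cum_losses, eta=1.0):
    """Non-uniform combiner: weight each expert by exp(-eta * cumulative
    loss), so poorly performing experts are smoothly down-weighted rather
    than discarded outright."""
    weights = [math.exp(-eta * loss) for loss in cum_losses]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total
```

With equal cumulative losses the exponential rule reduces to the uniform average, while follow-the-leader always commits fully to a single expert; under a drifting distribution, that hard commitment is exactly what can make its loss unstable relative to the smoothed, non-uniform weighting.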