Consider a situation in which an infinite data stream is revealed to us one data point at a time, and in which we are certain that this stream has been drawn from a given countable family of possible data streams (hypotheses). Before each new data point is revealed, we are asked to predict what it will be. This is a situation in which long-run learning is, in a sense, easy. One strategy that always works is learning by enumeration: enumerate the hypotheses in play in order of (strictly decreasing) a priori plausibility; at each stage of inquiry, read off your prediction of the next data point from the most plausible hypothesis consistent with the data seen so far. No matter which hypothesis is true, from some point onward you will always rely on the true hypothesis in making your predictions. Another strategy that always works is Bayesian learning: divide your prior credence among the hypotheses in play and update by conditionalization on the data seen. So long as your prior assigns positive credence to each hypothesis in play, in the limit of large data sets you will become arbitrarily confident in the true hypothesis. We will see that the performance of these strategies is less impressive once we impose the constraint that learning strategies must be computable.
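
As a rough illustration (not part of the text itself), here is a minimal Python sketch of both strategies in a toy family of my own choosing: deterministic binary streams that begin with k ones and are zero thereafter, enumerated in order of increasing k. The function names (`enumerative_predict`, `bayes_posterior`) and the geometric prior are illustrative assumptions, not anything specified in the abstract.

```python
# Toy setting: hypothesis k is the stream 1, 1, ..., 1 (k times), 0, 0, 0, ...
# Each hypothesis is deterministic, so its likelihood on observed data is
# 1 if it reproduces the data seen so far and 0 otherwise.

def hypothesis(k):
    """Return the k-th hypothesis as a function from position n to data point."""
    return lambda n: 1 if n < k else 0

def consistent(h, data):
    """A deterministic hypothesis is consistent iff it reproduces the data seen."""
    return all(h(n) == x for n, x in enumerate(data))

def enumerative_predict(data, max_k=100):
    """Learning by enumeration: predict from the first (most plausible)
    hypothesis consistent with the data. (max_k truncates the enumeration
    for simulation purposes only.)"""
    for k in range(max_k):
        h = hypothesis(k)
        if consistent(h, data):
            return h(len(data))
    raise ValueError("no hypothesis in the truncated enumeration fits the data")

def bayes_posterior(data, max_k=100):
    """Bayesian learning: condition an (assumed) geometric prior 2^-(k+1)
    on the data; inconsistent hypotheses get posterior zero."""
    weights = {k: 2.0 ** -(k + 1)
               for k in range(max_k) if consistent(hypothesis(k), data)}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Suppose the true stream is hypothesis 3: 1, 1, 1, 0, 0, 0, ...
true_h = hypothesis(3)
data = []
for n in range(6):
    guess = enumerative_predict(data)
    post = bayes_posterior(data)
    print(f"stage {n}: enumerative prediction = {guess}, "
          f"posterior on true hypothesis = {post.get(3, 0.0):.3f}")
    data.append(true_h(n))
```

In this toy run the enumerative learner predicts from the true hypothesis from stage 3 onward, and the Bayesian learner's credence in the true hypothesis climbs to 1 once the data rule out every rival; both behaviors mirror the limiting claims above, though of course only for this hand-picked finite family.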
