IEEE Transactions on Automatic Control, Vol.45, No.2, 217-234, 2000
Some contributions to fixed-distribution learning theory
In this paper, we consider some problems in learning with respect to a fixed distribution. We introduce two new notions of learnability: probably uniformly approximately correct (PUAC) learnability, which is a stronger requirement than the widely studied PAC learnability, and minimal empirical risk (MER) learnability, which is a stronger requirement than the previously defined notions of "solid" or "potential" learnability. It is shown that, although the motivations for defining these two notions of learnability are entirely different, they are in fact equivalent to each other and, in turn, equivalent to a property introduced here, referred to as the shrinking width property. It is further shown that if the function class to be learned has the property that empirical means converge uniformly to their true values, then all of these learnability properties hold. In the course of proving conditions for these forms of learnability, we also obtain a new estimate for the VC-dimension of a collection of sets obtained by performing Boolean operations on a given collection; this result is of independent interest. We consider both the case in which there is an underlying target function and the case of "model-free" (or agnostic) learning. Finally, we consider the issue of representing a collection of sets by its subcollection of equivalence classes. It is shown by example that, by suitably choosing representatives of each equivalence class, it is possible to affect the property of uniform convergence of empirical probabilities.
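To fix ideas, the following sketch states the two accuracy requirements in standard fixed-distribution notation; this is a paraphrase of the usual definitions, not a quotation from the paper. Here $P$ is the fixed distribution, $\mathcal{C}$ the concept class, $d_P(A,B) = P(A \bigtriangleup B)$ the induced pseudometric, and $h_n(T)$ the hypothesis produced from $n$ i.i.d. samples labeled by the target $T \in \mathcal{C}$:

\[
\text{PAC: } \sup_{T \in \mathcal{C}} \Pr\bigl\{ d_P(h_n(T),T) > \epsilon \bigr\} \to 0,
\qquad
\text{PUAC: } \Pr\Bigl\{ \sup_{T \in \mathcal{C}} d_P(h_n(T),T) > \epsilon \Bigr\} \to 0,
\]

for every $\epsilon > 0$ as $n \to \infty$. Since the probability of a supremum dominates the supremum of the probabilities, PUAC learnability implies PAC learnability, which is the sense in which it is the stronger requirement.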
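The paper's improved estimate is not reproduced here, but the classical bound of this type (due to Blumer, Ehrenfeucht, Haussler, and Warmuth) gives the flavor: if $\mathcal{C}$ has VC-dimension $d$, then the collection of all $k$-fold unions (or intersections) of sets drawn from $\mathcal{C}$ has VC-dimension at most

\[
2dk \log_2(3k).
\]

Bounds of this kind matter because they show that Boolean operations enlarge the combinatorial complexity of a collection only polynomially, so uniform convergence arguments survive the operations.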
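As a concrete, hedged illustration of the MER idea, the sketch below implements a minimal-empirical-risk learner over a toy finite class of threshold concepts under a fixed, hand-picked distribution; the domain, distribution, concept class, and sample sizes are illustrative assumptions, not the paper's construction.

```python
import random

# Illustrative setup (not from the paper): domain {0,...,9} with a fixed
# non-uniform distribution P, and the concept class of thresholds
# c_t = {x : x >= t}. The target is one such threshold.
DOMAIN = list(range(10))
P = [0.19, 0.17, 0.15, 0.13, 0.11, 0.09, 0.07, 0.05, 0.03, 0.01]
CONCEPTS = [set(x for x in DOMAIN if x >= t) for t in range(11)]
TARGET = CONCEPTS[4]

def d_P(a, b):
    """Pseudometric d_P(A, B) = P(A symmetric-difference B)."""
    return sum(p for x, p in zip(DOMAIN, P) if (x in a) != (x in b))

def sample(n, rng):
    """Draw n examples i.i.d. from P, labeled by membership in TARGET."""
    xs = rng.choices(DOMAIN, weights=P, k=n)
    return [(x, x in TARGET) for x in xs]

def mer_hypothesis(data):
    """Minimal-empirical-risk learner: return any concept that minimizes
    the number of disagreements with the labeled sample."""
    def emp_risk(c):
        return sum((x in c) != y for x, y in data)
    return min(CONCEPTS, key=emp_risk)

rng = random.Random(0)
for n in (10, 100, 1000):
    h = mer_hypothesis(sample(n, rng))
    print(n, round(d_P(h, TARGET), 4))  # true error of the empirical minimizer
```

On this toy problem the true error $d_P(h, T)$ of the empirical minimizer typically decreases as $n$ grows; MER learnability asks for a guarantee of this kind that holds for every MER algorithm and every target in the class.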