Abstract
The problem of valid induction can be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant to the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has raised another, more fundamental question: there is not just one given rule to be tested but a large number of possible rules, many of which are in some way confirmed by the data. How are we to restrict the space of inductive hypotheses and effectively choose rules that are likely to perform well on future examples? We analyze whether and how this problem is addressed in standard accounts of induction and point out the difficulties that arise. Finally, we suggest that explanation-based learning and related methods of knowledge-intensive induction could be, if not a solution, at least a tool for addressing some of these problems.