Abstract
There has been a lively debate in the philosophy of science over _predictivism_: the thesis that successfully predicting a given body of data provides stronger evidence for a theory than merely accommodating the same body of data. I argue for a very strong version of this thesis using statistical results on the so-called “model selection” problem: the problem of finding the optimal model (family of hypotheses) given a body of data. The key idea that I will borrow from the statistical literature is that the level of support a hypothesis _H_ receives from a body of data _D_ is inversely related to the number of adjustable parameters of the model from which _H_ was constructed. I will argue that when _D_ is not essential to the design of _H_ (i.e., when it is predicted), the model to which _H_ belongs has fewer adjustable parameters than when _D_ is essential to the design of _H_ (i.e., when it is accommodated). This, I argue, yields a very strong version of predictivism.
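The parameter-count penalty invoked here can be illustrated with one standard formalization from the model-selection literature, the Akaike Information Criterion (AIC); the abstract does not name AIC specifically, so this is only an assumed, minimal sketch of how adding adjustable parameters lowers a model's estimated support for fixed goodness of fit. All function names and data below are hypothetical illustrations.

```python
import numpy as np

def aic(rss, n, k):
    """AIC (up to an additive constant) for a Gaussian-error model.

    rss: residual sum of squares of the fitted model
    n:   number of data points
    k:   number of adjustable parameters
    Lower AIC indicates higher estimated support; each extra
    adjustable parameter costs a fixed penalty of 2.
    """
    return n * np.log(rss / n) + 2 * k

# Hypothetical data from a truly linear process with noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

# Fit polynomial models of increasing complexity and score each.
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # adjustable coefficients of the polynomial family
    print(f"degree {degree}: k = {k}, AIC = {aic(rss, x.size, k):.2f}")
```

The richer families always fit at least as well (lower `rss`), but the `2 * k` term penalizes them, capturing the idea that a hypothesis drawn from a family with more adjustable parameters receives less support from the same data.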