Abstract
Knowledge of causal laws is expensive and hard to come by. But we work hard to get it because we believe it will reduce contingency in planning policies and in building new technologies: knowledge of causal laws allows us to predict reliably what the outcomes will be when we manipulate the factors cited as causes in those laws. Or does it? This paper argues that causal laws have no special role here. As economists from J.S. Mill to Robert Lucas and David Hendry have stressed, along more recently with philosophers like James Woodward and Sandra Mitchell, causal laws can do the job only if they are invariant under the manipulations proposed. But then, I shall argue, anything that is invariant under the proposed manipulations will do the job equally well. There seems to be nothing special about causal-law knowledge in and of itself that makes it particularly valuable for policy and technology prediction. What seems to matter is invariance alone, not causality. But what guarantees invariance, and how do we know when it will obtain? Here certain kinds of causal laws do have a special place – those underwritten either by what I have called ‘nomological machines’ or by what I have called ‘capacities’. Capacities and nomological machines have a double virtue that makes them invaluable for policy planning. First, the causal laws they give rise to will be invariant so long as they obtain; second, they typically have visible markers we can come to recognize that tell us when they obtain. The markers for nomological machines are shakier than those for capacities, though, since capacities are often tied to their markers by well-established empirical laws. Capacities have their own drawback, however, which is the final topic of this paper: the causal laws that a capacity guarantees connect the obtaining (or triggering) of that capacity with its exercise. But Hume argued (mistakenly, I suggest) that no distinction can be made between the obtaining of a power and its exercise.