Abstract
Most cognitive scientists believe that cognitive processing (e.g., thought, speech, perception, and sensorimotor processing) is the hallmark of intelligent systems. Aside from modeling such processes, cognitive science is in the business of mechanistically explaining how minds and other intelligent systems work. As one might expect, mechanistic explanations appeal to the causal‐functional interactions among a system's component structures. Good explanations are those that get the causal story right. But getting the causal story right requires positing structures that are really in the system. After all, within the context of a mechanistic explanation, to posit X is to claim not only that X exists but also that X is doing some causal labor. Because most cognitive scientists believe that cognitive processing requires the use, manipulation, and storage of internal representations, they characteristically posit internal representations to explain how intelligent systems work.