Abstract
Perhaps we can all agree that in order for intelligent activity to be produced by embodied nervous systems, those nervous systems have to have things in them that are about other things in the following minimal sense: there is information about these other things not just present but usable by the nervous system in its modulation of behavior. (There is information about the climatic history of a tree in its growth rings--the information is present, but not usable by the tree.) The disagreements set in when we start trying to characterize what these things-about-things are--are they "just" competences or dispositions embodied somehow (e.g., in connectionist networks) in the brain, or are they more properly mental representations, such as sentences in a language of thought, images, icons, maps, or other data structures? And if they are "symbols", how are they "grounded"? What, more specifically, is the analysis of the aboutness that these things must have? Is it genuine intentionality or mere as if intentionality? These oft-debated questions are, I think, the wrong questions to be concentrating on at this time, even if, "in the end", they make sense and deserve answers. These questions have thrived in the distorting context provided by two ubiquitous idealizing assumptions that we should try setting aside: an assumption about how to capture content and an assumption about how to isolate the vehicles of content from the "outside" world.