Machine Understanding and the Chinese Room

Philosophical Psychology 1 (2): 207-215 (1988)

Abstract

John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols it manipulates, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal one and an intentional one. The central question, then, becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument.

Other Versions

Reprint: Newton, Natika (1989). "Machine Understanding and the Chinese Room". Philosophical Psychology 2 (2): 207-215.

Author's Profile

Natika Newton
Nassau Community College
