How the Intrinsic Representation of Artificial Intelligence is Possible

Journal of Dialectics of Nature (No. 247): 41-47 (2019)

Abstract

Searle and others hold that a symbol's meaning is derived from its user's assignment and interpretation rather than being intrinsic to the symbol itself, and that no object is a symbol merely by virtue of its physics. This criticism has given rise to the "symbol grounding problem" in the philosophy of artificial intelligence. Computationalists reply that the fact that explicit symbols have derived semantic content does not entail that all other kinds of symbols are likewise extrinsic. How, then, are non-derivative, intrinsic representations possible? By analyzing Peirce's and others' meta-theories of representation, we find that representation is essentially a multifunctional teleological system. We analyze the mechanisms and forms of intrinsic and extrinsic representation in terms of the structure of such a system, from which it can be concluded that intrinsic representations depend on self-organization, whereas extrinsic representations depend on external purposes. For AI to have intrinsic representation, it is necessary to construct a self-organizing computing system with intrinsic purpose.

Author's Profile

Guanghui Li
Huazhong University of Science & Technology
