Large‐Scale Modeling of Wordform Learning and Representation

Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.

Abstract

The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based codes. Then, two large‐scale simulations are reported that learned to represent lexicons of either phonological or orthographic wordforms. In doing so, the models learned the statistics of their lexicons, as shown by better processing of well‐formed pseudowords as opposed to ill‐formed (scrambled) pseudowords, and by accounting for variance in well‐formedness ratings. It is discussed how the sequence encoder may be integrated into broader models of lexical processing.
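To make the architecture concrete, below is a minimal sketch of a sequence encoder in the paper's spirit: an encoder network compresses a variable-length string into a single fixed-width code, and a decoder reproduces the string from that code alone, so no slot-based alignment of letters to positions is required. The paper's models were built from simple recurrent networks trained on stress-marked phonemes or letters; this sketch substitutes GRUs, a toy letter lexicon, and illustrative sizes, so every name, parameter, and word list here is an assumption for illustration, not the authors' implementation.

import torch
import torch.nn as nn

LETTERS = "abcdefghijklmnopqrstuvwxyz"
PAD, EOS = 0, 1                        # special symbols
VOCAB = len(LETTERS) + 2               # letters plus PAD and EOS

def encode_word(word):
    # Map a wordform to a sequence of symbol indices, terminated by EOS.
    return torch.tensor([LETTERS.index(c) + 2 for c in word] + [EOS])

class SequenceEncoder(nn.Module):
    # Autoencoder: compress a letter string into one hidden vector,
    # then reproduce the string from that vector alone.
    def __init__(self, hidden=128, emb=32):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, emb, padding_idx=PAD)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, seq):
        # The final encoder state is the fixed-width wordform representation.
        _, h = self.encoder(self.embed(seq))
        # Teacher-forced decoding: predict each symbol from the code
        # plus the preceding symbols.
        shifted = torch.cat(
            [torch.full((seq.size(0), 1), PAD, dtype=torch.long), seq[:, :-1]],
            dim=1)
        dec_out, _ = self.decoder(self.embed(shifted), h)
        return self.out(dec_out)

def wellformedness(model, word):
    # Score a (pseudo)word by reconstruction accuracy: strings that fit
    # the trained lexicon's statistics should reconstruct with lower error.
    seq = encode_word(word).unsqueeze(0)
    with torch.no_grad():
        logits = model(seq)
    return -nn.functional.cross_entropy(logits.squeeze(0), seq.squeeze(0)).item()

# Toy training loop over an illustrative six-word lexicon.
lexicon = ["cat", "cab", "can", "bat", "ban", "nab"]
model = SequenceEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)
for epoch in range(300):
    for word in lexicon:
        seq = encode_word(word).unsqueeze(0)
        loss = loss_fn(model(seq).squeeze(0), seq.squeeze(0))
        opt.zero_grad(); loss.backward(); opt.step()

print(wellformedness(model, "tab"))   # well-formed pseudoword: higher score
print(wellformedness(model, "btn"))   # scrambled pseudoword: lower score

Because the decoder must regenerate each string from the single code, the learned representation absorbs the sequential statistics of the training lexicon, which is why reconstruction-based scores can track pseudoword well-formedness in the way the abstract describes.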

Similar books and articles

Connectionist Sentence Processing in Perspective. Mark Steedman - 1999 - Cognitive Science 23 (4):615-634.
Computational lexical semantics. Patrick Saint-Dizier & Evelyn Viegas (eds.) - 1995 - New York: Cambridge University Press.
