Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes

Cognitive Science 36 (7):1204-1223 (2012)

Abstract

Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what he or she will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, both within and between visual scenes, and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict the associated sentences via similarity-based retrieval.
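To make the retrieval idea concrete, below is a minimal illustrative sketch, not the paper's actual implementation. It assumes scan patterns are represented as ordered sequences of fixated-object labels, measures similarity with a normalized longest-common-subsequence score, and predicts the sentence paired with the nearest stored scan pattern; all function names and the toy data are hypothetical.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two label sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def scan_similarity(a, b):
    """Normalized similarity between two scan patterns (sequences of fixated objects)."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))


def predict_sentence(query_scan, corpus):
    """Return the sentence paired with the most similar stored scan pattern.

    `corpus` is a list of (scan_pattern, sentence) pairs from previously
    recorded trials; `query_scan` is the scan pattern of a new trial.
    """
    _, best_sentence = max(corpus, key=lambda pair: scan_similarity(query_scan, pair[0]))
    return best_sentence


# Hypothetical toy data: scan patterns as ordered lists of fixated objects.
corpus = [
    (["man", "bench", "newspaper"], "The man on the bench is reading a newspaper."),
    (["woman", "dog", "park"], "The woman is walking her dog in the park."),
]
print(predict_sentence(["man", "newspaper", "bench"], corpus))
```

In this nearest-neighbor formulation, prediction accuracy depends entirely on how well the chosen scan-pattern similarity tracks sentence similarity, which is the correlation the paper reports for the identification, planning, and speaking phases.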
