Results for 'probabilistic model of language'

969 found
  1. Probabilistic models of language processing and acquisition. Nick Chater & Christopher D. Manning - 2006 - Trends in Cognitive Sciences 10 (7):335–344.
    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. (...)
    49 citations
  2. A Probabilistic Model of Semantic Plausibility in Sentence Processing. Ulrike Padó, Matthew W. Crocker & Frank Keller - 2009 - Cognitive Science 33 (5):794-838.
    Experimental research shows that human sentence processing uses information from different levels of linguistic analysis, for example, lexical and syntactic preferences as well as semantic plausibility. Existing computational models of human sentence processing, however, have focused primarily on lexico‐syntactic factors. Those models that do account for semantic plausibility effects lack a general model of human plausibility intuitions at the sentence level. Within a probabilistic framework, we propose a wide‐coverage model that both assigns thematic roles to verb–argument pairs (...)
    8 citations
  3. A Model of Language Processing as Hierarchic Sequential Prediction. Marten van Schijndel, Andy Exley & William Schuler - 2013 - Topics in Cognitive Science 5 (3):522-540.
    Computational models of memory are often expressed as hierarchic sequence models, but the hierarchies in these models are typically fairly shallow, reflecting the tendency for memories of superordinate sequence states to become increasingly conflated. This article describes a broad-coverage probabilistic sentence processing model that uses a variant of a left-corner parsing strategy to flatten sentence processing operations in parsing into a similarly shallow hierarchy of learned sequences. The main result of this article is that a broad-coverage model (...)
    4 citations
  4. A Probabilistic Model of Lexical and Syntactic Access and Disambiguation. Daniel Jurafsky - 1996 - Cognitive Science 20 (2):137-194.
    The problems of access—retrieving linguistic structure from some mental grammar—and disambiguation—choosing among these structures to correctly parse ambiguous linguistic input—are fundamental to language understanding. The literature abounds with psychological results on lexical access, the access of idioms, syntactic rule access, parsing preferences, syntactic disambiguation, and the processing of garden‐path sentences. Unfortunately, it has been difficult to combine models which account for these results to build a general, uniform model of access and disambiguation at the lexical, idiomatic, and (...)
    65 citations
  5. Radical Uncertainty: Beyond Probabilistic Models of Belief. Jan-Willem Romeijn & Olivier Roy - 2014 - Erkenntnis 79 (6):1221-1223.
    Over the past few decades the probabilistic model of rational belief has enjoyed increasing interest from researchers in epistemology and the philosophy of science. Of course, such probabilistic models were used for much longer in economics, in game theory, and in other disciplines concerned with decision making. Moreover, Carnap and co-workers used probability theory to explicate philosophical notions of confirmation and induction, thereby targeting epistemic rather than decision-theoretic aspects of rationality. However, following Carnap’s early applications, philosophy (...)
    2 citations
  6. (1 other version) Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar (...)
    6 citations
  7. Qualitative and Probabilistic Models of Full Belief. Horacio Arlo-Costa - unknown
    Let L be a language containing the modal operator B for full belief. An information model is a set E of stable L-theories. A sentence is valid if it is accepted in all theories of every model.
    14 citations
  8. From Exemplar to Grammar: A Probabilistic Analogy‐Based Model of Language Learning. Rens Bod - 2009 - Cognitive Science 33 (5):752-793.
    While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase‐structure trees should be assigned to initial sentences, (...)
    18 citations
  9. A Probabilistic Computational Model of Cross-Situational Word Learning. Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2010 - Cognitive Science 34 (6):1017-1063.
    Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major (...)
    27 citations
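    The mechanism at the heart of cross-situational models like this one can be sketched in a few lines: evidence for word-meaning pairings accumulates across individually ambiguous scenes, weighted by the learner's current beliefs. The Python sketch below is a much-simplified illustration of that general idea, not the authors' implementation; all names and the smoothing constant are invented.

      from collections import defaultdict

      # assoc[w][m] accumulates evidence that word w maps to meaning m.
      assoc = defaultdict(lambda: defaultdict(float))

      def meaning_prob(word, meaning, n_meanings, smooth=1e-4):
          # Current smoothed estimate of P(meaning | word).
          total = sum(assoc[word].values())
          return (assoc[word][meaning] + smooth) / (total + smooth * n_meanings)

      def observe(words, scene_meanings, n_meanings):
          # One utterance paired with the meanings visible in the scene: each
          # meaning distributes a unit of evidence over the words, in proportion
          # to how strongly each word already predicts it.
          for m in scene_meanings:
              weights = {w: meaning_prob(w, m, n_meanings) for w in words}
              z = sum(weights.values())
              for w in words:
                  assoc[w][m] += weights[w] / z

    Repeated exposure to "ball" across scenes that share the BALL meaning drives P(BALL | "ball") up, even though no single scene disambiguates it.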
  10. The Hidden Markov Topic Model: A Probabilistic Model of Semantic Representation. Mark Andrews & Gabriella Vigliocco - 2010 - Topics in Cognitive Science 2 (1):101-113.
    In this paper, we describe a model that learns semantic representations from the distributional statistics of language. This model, however, goes beyond the common bag‐of‐words paradigm, and infers semantic representations by taking into account the inherent sequential nature of linguistic data. The model we describe, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag‐of‐words models, that is, the Topics model (...)
    3 citations
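    The generative story behind a hidden Markov topic model is concrete enough to write down: topics form a Markov chain over word positions, and each word is emitted from its topic. A toy sketch with invented dimensions and randomly initialized parameters, showing only the structure, not the paper's inference procedure:

      import numpy as np

      rng = np.random.default_rng(0)
      K, V, N = 3, 10, 8                         # topics, vocabulary size, words to emit
      pi = np.full(K, 1.0 / K)                   # initial topic distribution
      A = rng.dirichlet(np.ones(K), size=K)      # topic-to-topic transition matrix
      phi = rng.dirichlet(np.ones(V), size=K)    # per-topic distribution over words

      z = rng.choice(K, p=pi)                    # draw the first topic
      words = []
      for _ in range(N):
          words.append(rng.choice(V, p=phi[z]))  # emit a word from the current topic
          z = rng.choice(K, p=A[z])              # let the topic evolve sequentially

    A bag-of-words Topics model is the special case where every row of A is the same distribution, so word order carries no information.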
  11. Language polygenesis: A probabilistic model. David A. Freedman & William Wang - unknown
    Monogenesis of language is widely accepted, but the conventional argument seems to be mistaken; a simple probabilistic model shows that polygenesis is likely. Other prehistoric inventions are discussed, as are problems in tracing linguistic lineages. Language is a system of representations; within such a system, words can evoke complex and systematic responses. Along with its social functions, language is important to humans as a mental instrument. Indeed, the invention of language, that is the accumulation of (...)
    1 citation
  12. Probabilistic Modeling of Discourse‐Aware Sentence Processing. Amit Dubey, Frank Keller & Patrick Sturt - 2013 - Topics in Cognitive Science 5 (3):425-451.
    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits (...)
    2 citations
  13. The Logical Problem of Language Acquisition: A Probabilistic Perspective. Anne S. Hsu & Nick Chater - 2010 - Cognitive Science 34 (6):972-1016.
    Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic (...)
    13 citations
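    The MDL principle invoked here has a compact arithmetic form: a learner prefers the grammar that minimizes the bits needed to state the grammar plus the bits needed to encode the observed sentences under it. A minimal sketch, assuming a hypothetical encoding scheme supplies the two costs:

      import math

      def description_length(grammar_bits, corpus, sentence_prob):
          # Total code length in bits: the grammar itself, plus the corpus
          # encoded with a Shannon code under that grammar (-log2 p per sentence).
          data_bits = sum(-math.log2(sentence_prob(s)) for s in corpus)
          return grammar_bits + data_bits

    On this view a linguistic restriction is acquired once the extra bits spent stating it are repaid by shorter codes for the sentences actually observed, which is why the learning is asymptotic.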
  14. Completeness theorem for propositional probabilistic models whose measures have only finite ranges. Radosav Đorđević, Miodrag Rašković & Zoran Ognjanović - 2004 - Archive for Mathematical Logic 43 (4):557-563.
    A propositional logic is defined which in addition to propositional language contains a list of probabilistic operators of the form P≥s (with the intended meaning "the probability is at least s"). The axioms and rules syntactically determine that ranges of probabilities in the corresponding models are always finite. The completeness theorem is proved. It is shown that completeness cannot be generalized to arbitrary theories.
    4 citations
  15. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge. Jey Han Lau, Alexander Clark & Shalom Lappin - 2017 - Cognitive Science 41 (5):1202-1241.
    The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgements are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to (...)
    12 citations
  16. Statistical models of syntax learning and use. Mark Johnson & Stefan Riezler - 2002 - Cognitive Science 26 (3):239-253.
    This paper shows how to define probability distributions over linguistically realistic syntactic structures in a way that permits us to define language learning and language comprehension as statistical problems. We demonstrate our approach using lexical‐functional grammar (LFG), but our approach generalizes to virtually any linguistic theory. Our probabilistic models are maximum entropy models. In this paper we concentrate on statistical inference procedures for learning the parameters that define these probability distributions. We point out some of the practical (...)
    3 citations
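    The maximum entropy models mentioned here have a standard log-linear form. A self-contained sketch, in which the feature function and weight vector are assumptions to be supplied (for example, features of the candidate LFG parses of one sentence):

      import math

      def maxent_probs(candidates, features, weights):
          # P(x) = exp(w . f(x)) / Z, normalized over a finite candidate set.
          def score(x):
              return math.exp(sum(weights.get(k, 0.0) * v
                                  for k, v in features(x).items()))
          z = sum(score(c) for c in candidates)
          return {c: score(c) / z for c in candidates}

    Learning, the focus of the paper, amounts to estimating the weights; the sketch only shows how fixed weights induce a distribution.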
  17. A Computational Model of Early Argument Structure Acquisition. Afra Alishahi & Suzanne Stevenson - 2008 - Cognitive Science 32 (5):789-834.
    How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning mechanisms that can grasp generalizations from examples of specific usages, and that exhibit patterns of behavior over the course of learning similar to those in children. Early (...)
    13 citations
  18. Evaluating models of robust word recognition with serial reproduction. Stephan C. Meylan, Sathvik Nair & Thomas L. Griffiths - 2021 - Cognition 210 (C):104553.
    Spoken communication occurs in a “noisy channel” characterized by high levels of environmental noise, variability within and between speakers, and lexical and syntactic ambiguity. Given these properties of the received linguistic input, robust spoken word recognition—and language processing more generally—relies heavily on listeners' prior knowledge to evaluate whether candidate interpretations of that input are more or less likely. Here we compare several broad-coverage probabilistic generative language models in their ability to capture human linguistic expectations. Serial reproduction, an (...)
    1 citation
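    The noisy-channel account in this abstract reduces to one application of Bayes' rule: candidate interpretations are scored by prior expectation times fit to the acoustic input. A schematic sketch, where the prior and likelihood functions are assumptions the reader supplies:

      def recognize(percept, lexicon, prior, likelihood):
          # Return the candidate word maximizing P(word) * P(percept | word),
          # i.e., the posterior P(word | percept) up to a constant factor.
          return max(lexicon, key=lambda w: prior(w) * likelihood(percept, w))

    Serial reproduction then iterates this step: each listener's output becomes the next listener's noisy input, so behavior over many iterations is increasingly shaped by the prior.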
  19. A unified account of abstract structure and conceptual change: Probabilistic models and early learning mechanisms. Alison Gopnik - 2011 - Behavioral and Brain Sciences 34 (3):129-130.
    We need not propose, as Carey does, a radical discontinuity between core cognition, which is responsible for abstract structure, and language and “Quinian bootstrapping,” which are responsible for learning and conceptual change. From a probabilistic models view, conceptual structure and learning reflect the same principles, and they are both in place from the beginning.
    2 citations
  20. Modeling Reference Production as the Probabilistic Combination of Multiple Perspectives. Mindaugas Mozuraitis, Suzanne Stevenson & Daphna Heller - 2018 - Cognitive Science 42 (S4):974-1008.
    While speakers have been shown to adapt to the knowledge state of their addressee in choosing referring expressions, they often also show some egocentric tendencies. The current paper aims to provide an explanation for this “mixed” behavior by presenting a model that derives such patterns from the probabilistic combination of both the speaker's and the addressee's perspectives. To test our model, we conducted a language production experiment, in which participants had to refer to objects in a (...)
    4 citations
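    Read probabilistically, the "mixed" egocentric/adaptive behavior described here falls out of a weighted mixture of the two perspectives. A one-function sketch; the mixing weight lam is a free parameter chosen for illustration, not a value from the paper:

      def production_prob(expr, p_speaker, p_addressee, lam=0.7):
          # Probability of producing a referring expression as a convex
          # combination of speaker-centric and addressee-centric probabilities.
          return lam * p_speaker(expr) + (1.0 - lam) * p_addressee(expr)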
  21. Two Models of Minimalist, Incremental Syntactic Analysis. Edward P. Stabler - 2013 - Topics in Cognitive Science 5 (3):611-633.
    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more (...)
    2 citations
  22. Towards a Statistical Model of Grammaticality. Gianluca Giorgolo, Shalom Lappin & Alexander Clark - unknown
    The question of whether it is possible to characterise grammatical knowledge in probabilistic terms is central to determining the relationship of linguistic representation to other cognitive domains. We present a statistical model of grammaticality which maps the probabilities of a statistical model for sentences in parts of the British National Corpus (BNC) into grammaticality scores, using various functions of the parameters of the model. We test this approach with a classifier on test sets containing different levels (...)
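    One family of "functions of the parameters of the model" used in this line of work normalizes a sentence's log probability for length and lexical frequency, e.g. the SLOR score; treating that as the illustrative choice here is an assumption, since the paper's exact functions are not shown above:

      def slor(logprob_sentence, unigram_logprobs):
          # (log P(s) - sum of per-word unigram log probs) / sentence length:
          # removes the penalty raw log probability assigns to long sentences
          # and rare words, leaving a grammaticality-like score.
          return (logprob_sentence - sum(unigram_logprobs)) / len(unigram_logprobs)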
  23. A Probabilistic Constraints Approach to Language Acquisition and Processing. Mark S. Seidenberg & Maryellen C. MacDonald - 1999 - Cognitive Science 23 (4):569-588.
    This article provides an overview of a probabilistic constraints framework for thinking about language acquisition and processing. The generative approach attempts to characterize knowledge of language (i.e., competence grammar) and then asks how this knowledge is acquired and used. Our approach is performance oriented: the goal is to explain how people comprehend and produce utterances and how children acquire this skill. Use of language involves exploiting multiple probabilistic constraints over various types of linguistic and nonlinguistic (...)
    32 citations
  24. Toward a Connectionist Model of Recursion in Human Linguistic Performance. Morten H. Christiansen & Nick Chater - 1999 - Cognitive Science 23 (2):157-205.
    Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative performance profile of the model matches human behavior, both on the relative difficulty of center‐embedding and cross‐dependency, and between the processing of these (...)
    74 citations
  25. Probabilistic coherence, logical consistency, and Bayesian learning: Neural language models as epistemic agents. Gregor Betz & Kyle Richardson - 2023 - PLoS ONE 18 (2).
    It is argued that suitably trained neural language models exhibit key properties of epistemic agency: they hold probabilistically coherent and logically consistent degrees of belief, which they can rationally revise in the face of novel evidence. To this purpose, we conduct computational experiments with rankers: T5 models [Raffel et al. 2020] that are pretrained on carefully designed synthetic corpora. Moreover, we introduce a procedure for eliciting a model’s degrees of belief, and define numerical metrics that measure the extent (...)
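    A minimal instance of the kind of property being tested: probabilistically coherent degrees of belief in a sentence and its negation must sum to one. A toy check, with an arbitrary numerical tolerance:

      def negation_coherent(p_sentence, p_negation, tol=0.05):
          # Coherence requires P(A) + P(not-A) = 1; allow numerical slack.
          return abs(p_sentence + p_negation - 1.0) <= tol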
  26. Probabilistic Canonical Models for Partial Logics. François Lepage & Charles Morgan - 2003 - Notre Dame Journal of Formal Logic 44 (3):125-138.
    The aim of the paper is to develop the notion of partial probability distributions as being more realistic models of belief systems than the standard accounts. We formulate the theory of partial probability functions independently of any classical semantic notions. We use the partial probability distributions to develop a formal semantics for partial propositional calculi, with extensions to predicate logic and higher order languages. We give a proof theory for the partial logics and obtain soundness and completeness results.
    4 citations
  27. (1 other version) Natural Language Grammar Induction using a Constituent-Context Model. Dan Klein & Christopher D. Manning - unknown
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
    5 citations
  28. A Probabilistic Model of Melody Perception. David Temperley - 2008 - Cognitive Science 32 (2):418-444.
    This study presents a probabilistic model of melody perception, which infers the key of a melody and also judges the probability of the melody itself. The model uses Bayesian reasoning: For any “surface” pattern and underlying “structure,” we can infer the structure maximizing P(structure|surface) based on knowledge of P(surface, structure). The probability of the surface can then be calculated as ∑ P(surface, structure), summed over all structures. In this case, the surface is a pattern of notes; the (...)
    8 citations
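    The two equations in this abstract translate directly into code. A sketch with a placeholder joint distribution; in the model itself the surface is a note pattern, the structures are keys, and the joint probabilities are learned, none of which is shown here:

      def best_structure(surface, structures, p_joint):
          # argmax over structures of P(structure | surface), which is
          # proportional to the joint P(surface, structure).
          return max(structures, key=lambda s: p_joint(surface, s))

      def surface_prob(surface, structures, p_joint):
          # P(surface) = sum over all structures of P(surface, structure).
          return sum(p_joint(surface, s) for s in structures)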
  29. Everyday Language Exposure Shapes Prediction of Specific Words in Listening Comprehension: A Visual World Eye-Tracking Study. Aine Ito & Hiromu Sakai - 2021 - Frontiers in Psychology 12.
    We investigated the effects of everyday language exposure on the prediction of orthographic and phonological forms of a highly predictable word during listening comprehension. Native Japanese speakers in Tokyo (Experiment 1) and Berlin (Experiment 2) listened to sentences that contained a predictable word and viewed four objects. The critical object represented the target word (e.g., /sakana/ 'fish'), an orthographic competitor (e.g., /tuno/ 'horn'), a phonological competitor (e.g., /sakura/ 'cherry blossom'), or an unrelated word (e.g., /hon/ 'book'). The three other objects were distractors. The (...)
  30. Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their (...)
    5 citations
  31. A Logic for Inductive Probabilistic Reasoning. Manfred Jaeger - 2005 - Synthese 144 (2):181-248.
    Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an (...)
    2 citations
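    The abstract's own example of direct inference is easy to make concrete. A toy sketch, assuming the individual's reference class is known and unique (Jeffrey's rule and cross-entropy minimization generalize exactly this step):

      statistics = {("A", "B"): 0.7}    # background knowledge: 70% of As are Bs

      def direct_inference(reference_class, target, stats):
          # From "p% of <reference_class> are <target>" and "a is a
          # <reference_class>", adopt p as the subjective P(target holds of a).
          return stats[(reference_class, target)]

      p = direct_inference("A", "B", statistics)    # 0.7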
  32. A Probabilistic Model of Meter Perception: Simulating Enculturation. Bastiaan van der Weij, Marcus T. Pearce & Henkjan Honing - 2017 - Frontiers in Psychology 8:238583.
    Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that meter perception is a strategy for increasing the predictability of rhythmic patterns and that the way in which it is shaped by the cultural environment can be understood in terms of probabilistic predictive coding. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties (...)
    3 citations
  33. Formal Modelling and Verification of Probabilistic Resource Bounded Agents. Hoang Nga Nguyen & Abdur Rakib - 2023 - Journal of Logic, Language and Information 32 (5):829-859.
    Many problems in Multi-Agent Systems (MASs) research are formulated in terms of the abilities of a coalition of agents. Existing approaches to reasoning about coalitional ability are usually focused on games or transition systems, which are described in terms of states and actions. Such approaches however often neglect a key feature of multi-agent systems, namely that the actions of the agents require resources. In this paper, we describe a logic for reasoning about coalitional ability under resource constraints in the (...) setting. We extend Resource-bounded Alternating-time Temporal Logic (RB-ATL) with probabilistic reasoning and provide a standard algorithm for the model-checking problem of the resulting logic Probabilistic resource-bounded ATL (pRB-ATL). We implement model-checking algorithms and present experimental results using simple multi-agent model-checking problems of increasing complexity.
  34. Bootstrapping the lexicon: a computational model of infant speech segmentation. Eleanor Olds Batchelder - 2002 - Cognition 83 (2):167-206.
    Prelinguistic infants must find a way to isolate meaningful chunks from the continuous streams of speech that they hear. BootLex, a new model which uses distributional cues to build a lexicon, demonstrates how much can be accomplished using this single source of information. This conceptually simple probabilistic algorithm achieves significant segmentation results on various kinds of language corpora - English, Japanese, and Spanish; child- and adult-directed speech, and written texts; and several variations in coding structure - and (...)
    12 citations
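    A miniature version of distributional segmentation makes the idea tangible: given probabilities for candidate chunks, choose the split of an unspaced utterance that maximizes their product. This is an illustrative dynamic program, not BootLex itself, which must also build its lexicon incrementally from the stream:

      import math
      from functools import lru_cache

      def segment(utterance, lexicon_probs):
          # Best-scoring segmentation into known chunks, computed in log space.
          @lru_cache(maxsize=None)
          def best(i):
              if i == len(utterance):
                  return 0.0, ()
              options = []
              for j in range(i + 1, len(utterance) + 1):
                  chunk = utterance[i:j]
                  if chunk in lexicon_probs:
                      score, rest = best(j)
                      options.append((score + math.log(lexicon_probs[chunk]),
                                      (chunk,) + rest))
              return max(options) if options else (float("-inf"), ())
          return list(best(0)[1])

      # segment("thedog", {"the": 0.5, "dog": 0.3, "thedog": 0.01})
      # -> ['the', 'dog'], since 0.5 * 0.3 > 0.01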
  35. The extimate core of understanding: absolute metaphors, psychosis and large language models. Marc Heimann & Anne-Friederike Hübener - forthcoming - AI and Society:1-12.
    This paper delves into the striking parallels between the linguistic patterns of Large Language Models (LLMs) and the concepts of psychosis in Lacanian psychoanalysis. Lacanian theory, with its focus on the formal and logical underpinnings of psychosis, provides a compelling lens to juxtapose human cognition and AI mechanisms. LLMs, such as GPT-4, appear to replicate the intricate metaphorical and metonymical frameworks inherent in human language. Although grounded in mathematical logic and probabilistic analysis, the outputs of LLMs echo (...)
  36. Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy. Thomas Icard - 2020 - Journal of Mathematical Psychology 95.
    A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case where the "semi-linear" languages all collapse into the regular languages, (...)
    1 citation
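    The lowest levels of the hierarchy studied here are probabilistic regular and context-free grammars, each of which induces a distribution over strings. A toy sampler for an invented one-rule grammar shows the correspondence between grammars and distributions:

      import random

      # S -> a (prob 1/2)  |  a S (prob 1/2); right-linear, hence regular.
      PCFG = {"S": [(0.5, ("a",)), (0.5, ("a", "S"))]}

      def sample(symbol="S"):
          # Expand nonterminals recursively; terminal symbols pass through.
          if symbol not in PCFG:
              return [symbol]
          r, acc = random.random(), 0.0
          for p, rhs in PCFG[symbol]:
              acc += p
              if r < acc:
                  return [tok for s in rhs for tok in sample(s)]
          return []   # unreachable when the rule probabilities sum to 1

      # The string "a" * n is emitted with probability 2**(-n): a geometric
      # distribution on (unary notations for) the positive integers.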
  37. A probabilistic model of visual working memory: Incorporating higher order regularities into working memory capacity estimates. Timothy F. Brady & Joshua B. Tenenbaum - 2013 - Psychological Review 120 (1):85-109.
  38. Probabilistic modelling of microtiming perception. Thomas Kaplan, Lorenzo Jamone & Marcus Pearce - 2023 - Cognition 239 (C):105532.
  39. Representing credal imprecision: from sets of measures to hierarchical Bayesian models. Daniel Lassiter - 2020 - Philosophical Studies 177 (6):1463-1485.
    The basic Bayesian model of credence states, where each individual’s belief state is represented by a single probability measure, has been criticized as psychologically implausible, unable to represent the intuitive distinction between precise and imprecise probabilities, and normatively unjustifiable due to a need to adopt arbitrary, unmotivated priors. These arguments are often used to motivate a model on which imprecise credal states are represented by sets of probability measures. I connect this debate with recent work in Bayesian cognitive (...)
    2 citations
  40. Limited Evidence of an Association Between Language, Literacy, and Procedural Learning in Typical and Atypical Development: A Meta‐Analysis. Cátia M. Oliveira, Lisa M. Henderson & Marianna E. Hayiou-Thomas - 2023 - Cognitive Science 47 (7):e13310.
    The ability to extract patterns from sensory input across time and space is thought to underlie the development and acquisition of language and literacy skills, particularly the subdomains marked by the learning of probabilistic knowledge. Thus, impairments in procedural learning are hypothesized to underlie neurodevelopmental disorders, such as dyslexia and developmental language disorder. In the present meta‐analysis, comprising 2396 participants from 39 independent studies, the continuous relationship between language, literacy, and procedural learning on the Serial Reaction (...)
  41. Probabilistic models of cognition: Conceptual foundations. Nick Chater & Alan Yuille - 2006 - Trends in Cognitive Sciences 10 (7):287-291.
    Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the (...)
    89 citations
  42. Connectionist Models of Language Production: Lexical Access and Grammatical Encoding. Gary S. Dell, Franklin Chang & Zenzi M. Griffin - 1999 - Cognitive Science 23 (4):517-542.
    Theories of language production have long been expressed as connectionist models. We outline the issues and challenges that must be addressed by connectionist models of lexical access and grammatical encoding, and review three recent models. The models illustrate the value of an interactive activation approach to lexical access in production, the need for sequential output in both phonological and grammatical encoding, and the potential for accounting for structural effects on errors and structural priming from learning.
    19 citations
  43. A Probabilistic Model of Spin and Spin Measurements. Arend Niehaus - 2016 - Foundations of Physics 46 (1):3-13.
    Several theoretical publications on the Dirac equation published during the last decades have shown that, an interpretation is possible, which ascribes the origin of electron spin and magnetic moment to an autonomous circular motion of the point-like charged particle around a fixed centre. In more recent publications an extension of the original so called “Zitterbewegung Interpretation” of quantum mechanics was suggested, in which the spin results from an average of instantaneous spin vectors over a Zitterbewegung period. We argue that, the (...)
    2 citations
  44. A probabilistic model of theory formation. Charles Kemp, Joshua B. Tenenbaum, Sourabh Niyogi & Thomas L. Griffiths - 2010 - Cognition 114 (2):165-196.
  45. Probabilistic models of cognition: where next? Nick Chater, Joshua B. Tenenbaum & Alan Yuille - 2006 - Trends in Cognitive Sciences 10 (7):292-293.
  46. Formal models of language learning. Steven Pinker - 1979 - Cognition 7 (3):217-283.
  47. Incremental Bayesian Category Learning From Natural Language. Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based (...)
    3 citations
  48. Vagueness and Rationality in Language Use and Cognition. Richard Dietz (ed.) - 2019 - Springer Verlag.
    This volume presents new conceptual and experimental studies which investigate the connection between vagueness and rationality from various systematic directions, such as philosophy, linguistics, cognitive psychology, computing science, and economics. Vagueness in language use and cognition has traditionally been interpreted in epistemic or semantic terms. The standard view of vagueness specifically suggests that considerations of agency or rationality, broadly conceived, can be left out of the equation. Most recently, new literature on vagueness has been released which suggests that the (...)
  49. A probabilistic model of cross-categorization. Patrick Shafto, Charles Kemp, Vikash Mansinghka & Joshua B. Tenenbaum - 2011 - Cognition 120 (1):1-25.
  50. Probabilistic modelling of general noisy multi-manifold data sets. M. Canducci, P. Tiño & M. Mastropietro - 2022 - Artificial Intelligence 302 (C):103579.
Results 1–50 of 969