Results for 'Unsupervised parsing'

660 found
  1. Unsupervised and few-shot parsing from pretrained language models. Zhiyuan Zeng & Deyi Xiong - 2022 - Artificial Intelligence 305 (C):103665.
    1 citation
  2. A Generative Constituent-Context Model for Improved Grammar Induction. Dan Klein & Christopher D. Manning - unknown
    We present a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. Parameter search with EM produces higher quality analyses than previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher F1 of 71% on nontrivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine (...)
    4 citations
  3. Making Sense of Raw Input. Richard Evans, Matko Bošnjak, Lars Buesing, Kevin Ellis, David Pfau, Pushmeet Kohli & Marek Sergot - 2021 - Artificial Intelligence 299 (C):103521.
    How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach to this problem is to cast it as an apperception task [1]. Here, the task is to construct an explicit interpretable theory that both explains the sensory sequence and also satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure. However, the original formulation of the apperception task had one fundamental limitation: it (...)
    4 citations
  4. From Exemplar to Grammar: A Probabilistic Analogy‐Based Model of Language Learning. Rens Bod - 2009 - Cognitive Science 33 (5):752-793.
    While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase‐structure trees should be assigned to initial sentences, s/he allows (...)
    18 citations
  5. (1 other version) Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in (...)
    6 citations
  6. Morphological Tagging and Lemmatization in the Albanian Language. Elissa Mollakuqe, Mentor Hamiti & Diellza Nagavci Mati - 2021 - Seeu Review 16 (2):3-16.
    An important element of Natural Language Processing is parts of speech tagging. With fine-grained word-class annotations, the word forms in a text can be enhanced and can also be used in downstream processes, such as dependency parsing. The improved search options that tagged data offers also greatly benefit linguists and lexicographers. Natural language processing research is becoming increasingly popular and important as unsupervised learning methods are developed. There are some aspects of the Albanian language that make the creation (...)
  7. Honestidad y Otras Características Deseables Para El Desarrollo Tecnológico. Martín Parselis - 2018 - SCIO Revista de Filosofía 15:177-212.
    The literature on the relations between people and technology draws on analyses inherited from economics, sociology, politics, and so on. These perspectives take in technology as a whole, which makes it difficult to study some of the processes that occur between each individual and technology. With this in mind, we seek to make these relations explicit through the products of technology understood as social mediators. On this basis, the criteria for endearing technologies ('tecnologías entrañables') make sense, so that this mediation (...)
  8. ¿Puede el arte ser indefinido?: controversias sobre la definición del arte en la estética contemporánea y la propuesta de Arthur C. Danto. Verónica Parselis - 2008 - Sapientia 63 (223):143-158.
  9. Mercenaries in Hellenistic Times G. T. Griffith: The Mercenaries of the Hellenistic World. Pp. x + 340. Cambridge: University Press, 1935. Cloth, 16s. [REVIEW] H. W. Parse - 1935 - The Classical Review 49 (04):136-.
  10. Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code. Andrea Tagarelli & Andrea Simeri - 2022 - Artificial Intelligence and Law 30 (3):417-473.
    Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in law intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes, and specifically trained on the Italian civil code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has (...)
    10 citations
  11. Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation. Geoffrey Hinton, Simon Osindero, Max Welling & Yee-Whye Teh - 2006 - Cognitive Science 30 (4):725-731.
    We describe a way of modeling high‐dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron‐like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of each unit depends on the activities in earlier layers are learned by minimizing the energy assigned to data vectors that are actually observed and maximizing the (...)
  12. Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media. Geoffrey C. Bowker & Anja Bechmann - 2019 - Big Data and Society 6 (1).
    Artificial Intelligence in the form of different machine learning models is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that ensuing predictions work well—with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what point human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding such seemingly (...)
    15 citations
  13. Unsupervised context sensitive language acquisition from a large corpus. Shimon Edelman - unknown
    We describe a pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of linguistic structures from a plain natural-language corpus. This paper addresses the issues of learning structured knowledge from a large-scale natural language data set, and of generalization to unseen text. The implemented algorithm represents sentences as paths on a graph whose vertices are words. Significant patterns, determined by recursive context-sensitive statistical inference, form new vertices. Linguistic constructions are represented by trees composed of significant patterns (...)
    4 citations
  14. On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique (...)
    5 citations
  15. Parsing pictures: on analyzing the content of images in science. Letitia Meynell - 2013 - The Knowledge Engineering Review 28 (3):327-345.
    In this paper I tackle the question of what basic form an analytical method for articulating and ultimately assessing visual representations should take. I start from the assumption that scientific images, being less prone to interpretive complication than artworks, are ideal objects from which to engage this question. I then assess a recent application of Nelson Goodman's aesthetics to the project of parsing scientific images, Laura Perini's ‘The truth in pictures’. I argue that, although her project is an important (...)
     
  16. Unsupervised statistical learning in vision: computational principles, biological evidence. Shimon Edelman - unknown
    Unsupervised statistical learning is the standard setting for the development of the only advanced visual system that is both highly sophisticated and versatile, and extensively studied: that of monkeys and humans. In this extended abstract, we invoke philosophical observations, computational arguments, behavioral data and neurobiological findings to explain why computer vision researchers should care about (1) unsupervised learning, (2) statistical inference, and (3) the visual brain. We then outline a neuromorphic approach to structural primitive learning motivated by these (...)
  17. Comparing decoding mechanisms for parsing argumentative structures. Stergos Afantenos, Andreas Peldszus & Manfred Stede - 2018 - Argument and Computation 9 (3):177-192.
    Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic struc...
    2 citations
  18. Unsupervised Efficient Learning and Representation of Language Structure. Shimon Edelman - unknown
    We describe a linguistic pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of corpus data. This is achieved by compactly coding recursively structured constituent patterns, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. The resulting representations constitute an efficient encoding of linguistic knowledge and support systematic generalization to unseen sentences.
    4 citations
  19. Unsupervised learning of visual structure. Shimon Edelman - unknown
    To learn a visual code in an unsupervised manner, one may attempt to capture those features of the stimulus set that would contribute significantly to a statistically efficient representation. Paradoxically, all the candidate features in this approach need to be known before statistics over them can be computed. This paradox may be circumvented by confining the repertoire of candidate features to actual scene fragments, which resemble the “what+where” receptive fields found in the ventral visual stream in primates. We describe (...)
     
    4 citations
  20. Unsupervised network traffic anomaly detection with deep autoencoders. Vibekananda Dutta, Marek Pawlicki, Rafał Kozik & Michał Choraś - 2022 - Logic Journal of the IGPL 30 (6):912-925.
    Contemporary Artificial Intelligence methods, especially their subset, deep learning, are finding their way to successful implementations in the detection and classification of intrusions at the network level. This paper presents an intrusion detection mechanism that leverages Deep AutoEncoder and several Deep Decoders for unsupervised classification. This work incorporates multiple network topology setups for comparative studies. The efficiency of the proposed topologies is validated on two established benchmark datasets: UNSW-NB15 and NetML-2020. The results of their analysis are discussed in terms of (...)
    2 citations
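    To make the approach in the entry above concrete: a common unsupervised recipe is to train an autoencoder on (presumed) normal traffic and flag test samples whose reconstruction error is unusually high. The following is a minimal illustrative sketch in PyTorch, not the architecture or datasets (UNSW-NB15, NetML-2020) used in the paper; the layer sizes, feature dimensionality, synthetic data and 99th-percentile threshold are assumptions made for the example.

    # Illustrative autoencoder-based anomaly detection (assumed setup, not the paper's):
    # train only on "normal" samples, flag samples whose reconstruction error is high.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_features = 20                                    # assumed feature dimensionality
    normal = torch.randn(1000, n_features)             # synthetic "normal" traffic
    anomalies = torch.randn(50, n_features) * 3 + 5    # synthetic anomalous traffic

    model = nn.Sequential(                             # small symmetric autoencoder
        nn.Linear(n_features, 8), nn.ReLU(),
        nn.Linear(8, 3), nn.ReLU(),                    # bottleneck
        nn.Linear(3, 8), nn.ReLU(),
        nn.Linear(8, n_features),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(200):                               # learn to reconstruct normal data
        optimizer.zero_grad()
        loss = loss_fn(model(normal), normal)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        train_err = ((model(normal) - normal) ** 2).mean(dim=1)
        threshold = torch.quantile(train_err, 0.99)    # assumed decision rule
        test = torch.cat([normal[:10], anomalies])
        test_err = ((model(test) - test) ** 2).mean(dim=1)
        flagged = test_err > threshold                 # True = predicted anomaly
        print(f"flagged {int(flagged.sum())} of {len(test)} test samples")

    In practice the threshold would be tuned on validation data rather than fixed at a quantile.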
  21. Deductive parsing in Haskell. Jan van Eijck - unknown
    This paper contains the full code of an implementation in Haskell [2], in ‘literate programming’ style [3], of an approach to deductive parsing based on [4]. We focus on the case of the Earley [1] parsing algorithm for CF languages.
     
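    The entry above presents a literate Haskell implementation of deductive (Earley-style) parsing; that code is not reproduced here. Purely as an illustration of the underlying Earley algorithm, here is a minimal recognizer sketch in Python with an invented toy grammar; it is not van Eijck's implementation.

    # Minimal Earley recognizer (illustrative sketch only, with an invented toy grammar).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["det", "noun"], ["noun"]],
        "VP": [["verb", "NP"], ["verb"]],
    }

    def earley_recognize(tags, start="S"):
        """Return True iff the tag sequence can be derived from `start`."""
        # A state is (lhs, rhs, dot, origin); chart[i] holds states ending at position i.
        chart = [set() for _ in range(len(tags) + 1)]
        for rhs in GRAMMAR[start]:
            chart[0].add((start, tuple(rhs), 0, 0))
        for i in range(len(tags) + 1):
            agenda = list(chart[i])
            while agenda:
                lhs, rhs, dot, origin = agenda.pop()
                if dot < len(rhs):
                    nxt = rhs[dot]
                    if nxt in GRAMMAR:                              # predict
                        for prod in GRAMMAR[nxt]:
                            new = (nxt, tuple(prod), 0, i)
                            if new not in chart[i]:
                                chart[i].add(new)
                                agenda.append(new)
                    elif i < len(tags) and tags[i] == nxt:          # scan
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:                                               # complete
                    for plhs, prhs, pdot, porigin in list(chart[origin]):
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            new = (plhs, prhs, pdot + 1, porigin)
                            if new not in chart[i]:
                                chart[i].add(new)
                                agenda.append(new)
        return any((start, tuple(rhs), len(rhs), 0) in chart[len(tags)]
                   for rhs in GRAMMAR[start])

    print(earley_recognize(["det", "noun", "verb"]))   # True, e.g. for "the dog barks"

    Returning parse trees rather than a yes/no answer requires storing back-pointers alongside each completed state.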
  22. Deductive parsing with sequentially indexed grammars. Jan van Eijck - unknown
    This paper extends the Earley parsing algorithm for context free languages [3] to the case of sequentially indexed languages. Sequentially indexed languages are related to indexed languages [1, 2]. The difference is that parallel processing of index stacks is replaced by sequential processing [4].
     
  23. Parsing the amplitudes. Rom Harré - 1988 - In Harvey R. Brown & Rom Harré (eds.), Philosophical foundations of quantum field theory. New York: Oxford University Press. pp. 59--71.
    9 citations
  24. Parsing and Presupposition in the Calculation of Local Contexts. Matthew Mandelkern & Jacopo Romoli - forthcoming - Semantics and Pragmatics.
    In this paper, we use antecedent-final conditionals to formulate two problems for parsing-based theories of presupposition projection and triviality of the kind given in Schlenker 2009. We show that, when it comes to antecedent-final conditionals, parsing-based theories predict filtering of presuppositions where there is in fact projection, and triviality judgments for sentences which are in fact felicitous. More concretely, these theories predict that presuppositions triggered in the antecedent of antecedent-final conditionals will be filtered (i.e. will not project) if (...)
    13 citations
  25. Unsupervised and supervised text similarity systems for automated identification of national implementing measures of European directives. Rohan Nanda, Giovanni Siragusa, Luigi Di Caro, Guido Boella, Lorenzo Grossio, Marco Gerbaudo & Francesco Costamagna - 2019 - Artificial Intelligence and Law 27 (2):199-225.
    The automated identification of national implementations of European directives by text similarity techniques has shown promising preliminary results. Previous works have proposed and utilized unsupervised lexical and semantic similarity techniques based on vector space models, latent semantic analysis and topic models. However, these techniques were evaluated on a small multilingual corpus of directives and NIMs. In this paper, we utilize word and paragraph embedding models learned by shallow neural networks from a multilingual legal corpus of European directives and national (...)
    2 citations
  26. Parsing as a Cue-Based Retrieval Model. Jakub Dotlačil - 2021 - Cognitive Science 45 (8):e13020.
    This paper develops a novel psycholinguistic parser and tests it against experimental and corpus reading data. The parser builds on the recent research into memory structures, which argues that memory retrieval is content‐addressable and cue‐based. It is shown that the theory of cue‐based memory systems can be combined with transition‐based parsing to produce a parser that, when combined with the cognitive architecture ACT‐R, can model reading and predict online behavioral measures (reading times and regressions). The parser's modeling capacities are (...)
    1 citation
  27. A parsing method for Montague grammars. Joyce Friedman & David S. Warren - 1978 - Linguistics and Philosophy 2 (3):347-372.
    The main result in this paper is a method for obtaining derivation trees from sentences of certain formal grammars. No parsing algorithm was previously known to exist for these grammars.Applied to Montague's PTQ the method produces all parses that could correspond to different meanings. The technique directly addresses scope and reference and provides a framework for examining these phenomena. The solution for PTQ is implemented in an efficient and useful computer program.
    4 citations
  28. Parse Selection on the Redwoods Corpus: 3rd Growth Results. Christopher D. Manning & Kristina Toutanova - unknown
    This report details experimental results of using stochastic disambiguation models for parsing sentences from the Redwoods treebank (Oepen et al., 2002). The goals of this paper are two-fold: (i) to report accuracy results on the more highly ambiguous latest version of the treebank, as compared to already published results achieved by the same stochastic models on a previous version of the corpus, and (ii) to present some newly developed models using features from the HPSG signs, as well as the (...)
  29. Parsing with Treebank Grammars: Empirical Bounds, Theoretical Models, and the Structure of the Penn Treebank. Dan Klein & Christopher D. Manning - unknown
    This paper presents empirical studies and closely corresponding theoretical models of the performance of a chart parser exhaustively parsing the Penn Treebank with the Treebank’s own CFG grammar. We show how performance is dramatically affected by rule representation and tree transformations, but little by top-down vs. bottom-up strategies. We discuss grammatical saturation, including analysis of the strongly connected components of the phrasal nonterminals in the Treebank, and model how, as sentence length increases, the effective grammar rule size increases as (...)
    3 citations
  30. Unsupervised Decoding of Long-Term, Naturalistic Human Neural Recordings with Automated Video and Audio Annotations. Nancy X. R. Wang, Jared D. Olson, Jeffrey G. Ojemann, Rajesh P. N. Rao & Bingni W. Brunton - 2016 - Frontiers in Human Neuroscience 10.
  31. A simplicity principle in unsupervised human categorization. Emmanuel M. Pothos & Nick Chater - 2002 - Cognitive Science 26 (3):303-343.
    We address the problem of predicting how people will spontaneously divide into groups a set of novel items. This is a process akin to perceptual organization. We therefore employ the simplicity principle from perceptual organization to propose a simplicity model of unconstrained spontaneous grouping. The simplicity model predicts that people would prefer the categories for a set of novel items that provide the simplest encoding of these items. Classification predictions are derived from the model without information either about the number (...)
    32 citations
  32. A∗ parsing: Fast exact Viterbi parse selection. Dan Klein & Christopher D. Manning - unknown
    A* PCFG parsing can dramatically reduce the time required to find the exact Viterbi parse by conservatively estimating outside Viterbi probabilities. We discuss various estimates and give efficient algorithms for computing them. On Penn treebank sentences, our most detailed estimate reduces the total number of edges processed to less than 3% of that required by exhaustive parsing, and even a simpler estimate which can be pre-computed in under a minute still reduces the work by a factor of 5. (...)
    1 citation
  33. Unsupervised discovery of a statistical verb lexicon. Christopher Manning - manuscript
    (...)tic structure. Determining the semantic roles of a verb’s dependents is an important step in natural (...)
  34. Supervised, Unsupervised and Reinforcement Learning-Face Recognition Using Null Space-Based Local Discriminant Embedding. Yanmin Niu & Xuchu Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4114--245.
     
  35. Parsing: overview. Florian Wolf & Edward Gibson - 2003 - In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group.
     
  36. (1 other version) Parsing and Hypergraphs. Dan Klein & Christopher D. Manning - unknown
    While symbolic parsers can be viewed as deduction systems, this view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n³) time bound for arbitrary PCFGs, while preserving as much of the flexibility of symbolic chart parsers as allowed by (...)
    1 citation
  37. Interpreting Rhythm as Parsing: Syntactic‐Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Gabriele Cecchetti, Cédric A. Tomasini, Steffen A. Herff & Martin A. Rohrmeier - 2023 - Cognitive Science 47 (12):e13389.
    Music can be interpreted by attributing syntactic relationships to sequential musical events, and, computationally, such musical interpretation represents an analogous combinatorial task to syntactic processing in language. While this perspective has been primarily addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment‐by‐moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music‐theoretically motivated (...)
  38. (1 other version) Accurate Unlexicalized Parsing. Dan Klein & Christopher D. Manning - unknown
    We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is (...)
    25 citations
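    For readers unfamiliar with the metric quoted above: bracketing F1 is the harmonic mean of labeled precision (LP) and labeled recall (LR) over the predicted constituents. A tiny sketch with invented numbers (not the paper's LP/LR values):

    # Bracketing F1 as the harmonic mean of labeled precision and recall
    # (the numbers below are invented for illustration).
    def bracketing_f1(labeled_precision: float, labeled_recall: float) -> float:
        return 2 * labeled_precision * labeled_recall / (labeled_precision + labeled_recall)

    print(round(bracketing_f1(0.87, 0.858), 3))   # -> 0.864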
  39. Hybrid Unsupervised Exploratory Plots: A Case Study of Analysing Foreign Direct Investment. Álvaro Herrero, Alfredo Jiménez & Secil Bayraktar - 2019 - Complexity 2019:1-14.
    1 citation
  40. Unsupervised approaches for measuring textual similarity between legal court case reports. Arpan Mandal, Kripabandhu Ghosh, Saptarshi Ghosh & Sekhar Mandal - 2021 - Artificial Intelligence and Law 29 (3):417-451.
    In the domain of legal information retrieval, an important challenge is to compute similarity between two legal documents. Precedents play an important role in The Common Law system, where lawyers need to frequently refer to relevant prior cases. Measuring document similarity is one of the most crucial aspects of any document retrieval system which decides the speed, scalability and accuracy of the system. Text-based and network-based methods for computing similarity among case reports have already been proposed in prior works but (...)
    4 citations
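    As a deliberately simple illustration of unsupervised text similarity of the kind surveyed above, the sketch below compares TF-IDF vectors with cosine similarity using scikit-learn. The toy "case report" snippets are invented, and this baseline is not the specific method evaluated in the paper.

    # Minimal unsupervised text-similarity baseline (illustration only, not the
    # paper's method): TF-IDF vectors compared with cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [  # invented toy "case report" snippets
        "The appellant challenged the contract termination before the civil court.",
        "The civil court examined whether the contract was terminated lawfully.",
        "Network intrusion detection relies on traffic features and classifiers.",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    sims = cosine_similarity(vectors)   # pairwise similarity matrix in [0, 1]
    print(sims.round(2))                # docs 0 and 1 should score higher than 0 and 2

    Embedding- or network-based methods of the kind compared in the paper typically keep this pairwise-similarity framing but swap the TF-IDF vectors for richer document representations.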
  41. Unsupervised learning and grammar induction. Alex Clark & Shalom Lappin - unknown
    In this chapter we consider unsupervised learning from two perspectives. First, we briefly look at its advantages and disadvantages as an engineering technique applied to large corpora in natural language processing. While supervised learning generally achieves greater accuracy with less data, unsupervised learning offers significant savings in the intensive labour required for annotating text. Second, we discuss the possible relevance of unsupervised learning to debates on the cognitive basis of human language acquisition. In this context we explore (...)
     
  42. Penn parsed corpora of historical English. Anthony Kroch - unknown
    The University of Pennsylvania Linguistics Department is home of a long-running project to create syntactically annotated (parsed) corpora of historical English. The project is directed by Anthony Kroch, Professor of Linguistics, and the research associate in charge of corpus annotation is Dr. Beatrice Santorini. The Middle English corpus was constructed by Dr. Ann Taylor, now research associate in charge of corpus annotation at the University of York, England.
     
  43. When unsupervised training benefits category learning. Franziska Bröker, Bradley C. Love & Peter Dayan - 2022 - Cognition 221 (C):104984.
    2 citations
  44. Direct parsing of ID/LP grammars. Stuart M. Shieber - 1984 - Linguistics and Philosophy 7 (2):135-154.
  45. Parsing ‘if’-sentences. V. H. Dudman - 1984 - Analysis 44 (4):145-153.
    28 citations
  46. Is Unsupervised Clustering Somehow Truer? Anders Søgaard - 2024 - Minds and Machines 34 (4).
    Scientists increasingly approach the world through machine learning techniques, but philosophers of science often question their epistemic status. Some philosophers have argued that the use of unsupervised clustering algorithms is more justified than the use of supervised classification, because supervised classification is more biased, and because (parametric) simplicity plays a different and more interesting role in unsupervised clustering. I call these arguments the No-Bias Argument and the Simplicity-Truth Argument. I show how both arguments are fallacious and how, on (...)
  47. Parsing and comprehending with word experts (a theory and its realization). Steven Small & Chuck Rieger - 1982 - In Wendy G. Lehnert & Martin Ringle (eds.), Strategies for Natural Language Processing. Lawrence Erlbaum. pp. 89--147.
    2 citations
  48. An Unsupervised Natural Clustering with Optimal Conceptual Affinity. G. Barker - 2010 - Journal of Intelligent Systems 19 (3):289-300.
  49. Some Tests of an Unsupervised Model of Language Acquisition. Shimon Edelman - unknown
    We outline an unsupervised language acquisition algorithm and offer some psycholinguistic support for a model based on it. Our approach resembles the Construction Grammar in its general philosophy, and the Tree Adjoining Grammar in its computational characteristics. The model is trained on a corpus of transcribed child-directed speech (CHILDES). The model’s ability to process novel inputs makes it capable of taking various standard tests of English that rely on forced-choice judgment and on magnitude estimation of linguistic acceptability. We report (...)
  50. Unsupervised clustering of context data and learning user requirements for a mobile device. John A. Flanagan - 2001 - In P. Bouquet V. Akman (ed.), Modeling and Using Context. Springer. pp. 155--168.
1 — 50 / 660