Results for 'Automatic speech recognition'

966 found
  1. Modelling asynchrony in automatic speech recognition using loosely coupled hidden Markov models. H. J. Nock & S. J. Young - 2002 - Cognitive Science 26 (3):283-301.
    Hidden Markov models (HMMs) have been successful for modelling the dynamics of carefully dictated speech, but their performance degrades severely when used to model conversational speech. Since speech is produced by a system of loosely coupled articulators, stochastic models explicitly representing this parallelism may have advantages for automatic speech recognition (ASR), particularly when trying to model the phonological effects inherent in casual spontaneous speech. This paper presents a preliminary feasibility study of one such (...)
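    The loosely coupled HMM idea above factors speech into parallel articulator streams whose state chains evolve separately but are softly tied together. Below is a minimal numpy sketch of a forward pass under simple assumptions (one transition matrix per stream plus a multiplicative coupling matrix over joint states); it illustrates the general technique, not the authors' exact parameterization.

    import numpy as np

    def coupled_forward(obs1, obs2, A1, A2, C, pi1, pi2):
        """obs1[t, i], obs2[t, j]: per-stream emission likelihoods;
        A1, A2: per-stream transition matrices; C[i, j]: coupling weights
        (all illustrative assumptions); pi1, pi2: initial distributions.
        Returns the log-likelihood of the two observation streams."""
        T = obs1.shape[0]
        # Joint initial occupancy, softly tied by the coupling matrix C.
        alpha = np.outer(pi1 * obs1[0], pi2 * obs2[0]) * C
        log_norm = 0.0
        for t in range(1, T):
            # Each stream transitions with its own dynamics; C re-ties them.
            alpha = (A1.T @ alpha @ A2) * C * np.outer(obs1[t], obs2[t])
            scale = alpha.sum()          # rescale to avoid underflow
            alpha /= scale
            log_norm += np.log(scale)
        return log_norm + np.log(alpha.sum())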
  2. Automatic Speech Recognition: A Comprehensive Survey. Arbana Kadriu & Amarildo Rista - 2020 - SEEU Review 15 (2):86-112.
    Speech recognition is an interdisciplinary subfield of natural language processing (NLP) that facilitates the recognition and translation of spoken language into text by machine. Speech recognition plays an important role in digital transformation. It is widely used in different areas such as education, industry, and healthcare and has recently been used in many Internet of Things and Machine Learning applications. The process of speech recognition is one of the most difficult processes in computer (...)
  3. A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition. Rajesh Kumar Aggarwal & Vishal Passricha - 2019 - Journal of Intelligent Systems 29 (1):1261-1274.
    Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are the advanced version of DNNs that achieve a 4–12% relative gain in word error rate (WER) over DNNs. The existence of spectral variations and local correlations in the speech signal makes CNNs more capable of speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) produces higher recognition rates in acoustic modeling because it is adequate to reinforce (...)
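    A minimal PyTorch sketch of a CNN-plus-BLSTM acoustic model of the general kind this abstract describes; the layer sizes, pooling, and output inventory are illustrative assumptions, not the authors' configuration.

    import torch
    import torch.nn as nn

    class CNNBLSTMAcousticModel(nn.Module):
        def __init__(self, n_mels=40, n_targets=46):   # sizes are assumptions
            super().__init__()
            # CNN front end: exploits local time-frequency correlations.
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 2)),        # pool along frequency
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Bidirectional LSTM: models long-range temporal context.
            self.blstm = nn.LSTM(32 * (n_mels // 2), 256, num_layers=2,
                                 bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * 256, n_targets)     # per-frame scores

        def forward(self, x):                  # x: (batch, time, n_mels)
            x = self.conv(x.unsqueeze(1))      # (batch, 32, time, n_mels/2)
            x = x.permute(0, 2, 1, 3).flatten(2)
            x, _ = self.blstm(x)
            return self.out(x)                 # (batch, time, n_targets)

    logits = CNNBLSTMAcousticModel()(torch.randn(4, 100, 40))  # smoke test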
  4. Discriminatively trained continuous Hindi speech recognition using integrated acoustic features and recurrent neural network language modeling. R. K. Aggarwal & A. Kumar - 2020 - Journal of Intelligent Systems 30 (1):165-179.
    This paper implements the continuous Hindi Automatic Speech Recognition (ASR) system using the proposed integrated features vector with Recurrent Neural Network (RNN) based Language Modeling (LM). The proposed system also implements speaker adaptation using Maximum Likelihood Linear Regression (MLLR) and Constrained Maximum Likelihood Linear Regression (C-MLLR). This system is discriminatively trained by Maximum Mutual Information (MMI) and Minimum Phone Error (MPE) techniques, with 256 Gaussian mixtures per Hidden Markov Model (HMM) state. The training of the baseline system has (...)
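    The MLLR adaptation mentioned above maps every Gaussian mean through one shared affine transform, mu' = A mu + b, estimated from a speaker's adaptation data. A minimal numpy sketch; the least-squares fit here is an illustrative stand-in for the EM-based MLLR estimator.

    import numpy as np

    def apply_mllr(means, A, b):
        """means: (n_gaussians, dim). Returns adapted means A @ mu + b."""
        return means @ A.T + b

    def estimate_affine(means, targets):
        """Fit A, b so that A @ mu + b approximates each target mean
        (ordinary least squares on the extended vector [mu; 1]; an
        illustrative stand-in, not the MLLR EM update)."""
        ext = np.hstack([means, np.ones((len(means), 1))])  # (n, dim+1)
        W, *_ = np.linalg.lstsq(ext, targets, rcond=None)   # (dim+1, dim)
        return W[:-1].T, W[-1]                              # A, b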
  5. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make (...)
    4 citations
  6. How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6):867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and (...)
    5 citations
  7. How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6):867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and (...)
    8 citations
  8. Multitask Learning with Local Attention for Tibetan Speech Recognition. Hui Wang, Fei Gao, Yue Zhao, Li Yang, Jianjian Yue & Huilin Ma - 2020 - Complexity 2020:1-10.
    In this paper, we propose to incorporate local attention into WaveNet-CTC to improve the performance of Tibetan speech recognition in multitask learning. As the number of tasks increases, such as simultaneous Tibetan speech content recognition, dialect identification, and speaker recognition, the accuracy of a single WaveNet-CTC decreases on speech recognition. Inspired by the attention mechanism, we introduce local attention to automatically tune the weights of feature frames in a window and (...)
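    The local attention described above re-weights the feature frames inside a fixed window around each frame before they are pooled. A minimal PyTorch sketch with an assumed window size and dot-product scoring; the paper's WaveNet-CTC integration is not reproduced here.

    import torch
    import torch.nn.functional as F

    def local_attention(frames, window=5):       # window size is an assumption
        """frames: (time, dim). For each frame, attend over its surrounding
        window and return the attention-weighted context vectors."""
        T, D = frames.shape
        pad = window // 2
        padded = F.pad(frames.T, (pad, pad)).T   # zero-pad along time
        contexts = []
        for t in range(T):
            local = padded[t:t + window]         # (window, dim)
            scores = local @ frames[t]           # dot-product scoring
            weights = torch.softmax(scores / D ** 0.5, dim=0)
            contexts.append(weights @ local)     # weighted sum over the window
        return torch.stack(contexts)             # (time, dim)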
  9. Automatic phonetic segmentation of Hindi speech using hidden Markov model. Archana Balyan, S. S. Agrawal & Amita Dev - 2012 - AI and Society 27 (4):543-549.
    In this paper, we study the performance of a baseline hidden Markov model (HMM) for segmentation of speech signals. It is applied to a single-speaker segmentation task, using a Hindi speech database. The automatic phoneme segmentation framework developed here imitates the human phoneme segmentation process. A set of 44 Hindi phonemes was chosen for the segmentation experiment, wherein we used continuous density hidden Markov models (CDHMMs) with a mixture of Gaussian distributions. The left-to-right topology with no skip states has been selected (...)
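    Segmentation of this kind is typically realized as Viterbi forced alignment: a left-to-right, no-skip chain of phone states is aligned to frame-level scores, and the points where the best path changes state give the phoneme boundaries. A minimal numpy sketch assuming the emission log-likelihoods are already computed (e.g., by per-phone CDHMM GMMs); the transition penalties are illustrative.

    import numpy as np

    def forced_align(loglik, self_loop=-0.1, advance=-2.3):
        """loglik[t, s]: log-likelihood of frame t under phone-state s,
        states ordered left to right. Returns the best state per frame."""
        T, S = loglik.shape
        delta = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        delta[0, 0] = loglik[0, 0]            # must start in the first state
        for t in range(1, T):
            for s in range(S):
                stay = delta[t - 1, s] + self_loop
                move = delta[t - 1, s - 1] + advance if s > 0 else -np.inf
                back[t, s] = s if stay >= move else s - 1
                delta[t, s] = max(stay, move) + loglik[t, s]
        path = [S - 1]                        # must end in the last state
        for t in range(T - 1, 0, -1):
            path.append(back[t, path[-1]])
        return np.array(path[::-1])           # boundaries: where values change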
  10. Computational Validation of the Motor Contribution to Speech Perception. Leonardo Badino, Alessandro D'Ausilio, Luciano Fadiga & Giorgio Metta - 2014 - Topics in Cognitive Science 6 (3):461-475.
    Action perception and recognition are core abilities fundamental for human social interaction. A parieto-frontal network (the mirror neuron system) matches visually presented biological motion information onto observers' motor representations. This process of matching the actions of others onto our own sensorimotor repertoire is thought to be important for action recognition, providing a non-mediated “motor perception” based on a bidirectional flow of information along the mirror parieto-frontal circuits. State-of-the-art machine learning strategies for hand action identification have shown better performances (...)
    1 citation
  11. A procedure for adaptive control of the interaction between acoustic classification and linguistic decoding in automatic recognition of continuous speech. C. C. Tappert & N. R. Dixon - 1974 - Artificial Intelligence 5 (2):95-113.
  12. Improving the Assessment of Mild Cognitive Impairment in Advanced Age With a Novel Multi-Feature Automated Speech and Language Analysis of Verbal Fluency. Liu Chen, Meysam Asgari, Robert Gale, Katherine Wild, Hiroko Dodge & Jeffrey Kaye - 2020 - Frontiers in Psychology 11:494917.
    Introduction: Clinically relevant information can go uncaptured in the conventional scoring of a verbal fluency test. We hypothesize that characterizing the temporal aspects of the response through a set of time-related measures will be useful in distinguishing those with MCI from cognitively intact controls. Methods: Audio recordings of an animal fluency test administered to 70 demographically matched older adults (mean age 90.4 years), 28 with mild cognitive impairment (MCI) and 42 cognitively intact (CI), were professionally transcribed and fed into (...)
    1 citation
  13. Speech transformations solutions. Dimitri Kanevsky, Sara Basson, Alexander Faisman, Leonid Rachevsky, Alex Zlatsin & Sarah Conrod - 2006 - Pragmatics and Cognition 14 (2):411-442.
    This paper outlines the background development of “intelligent” technologies such as speech recognition. Despite significant progress in the development of these technologies, they still fall short in many areas, and advances in areas such as dictation have actually stalled. In this paper we have proposed semi-automatic solutions — smart integration of human and intelligent efforts. One such technique involves improvements to the speech recognition editing interface, thereby reducing the perception of errors to the viewer. (...)
  14. The Recognition of Phonologically Assimilated Words Does Not Depend on Specific Language Experience. Holger Mitterer, Valéria Csépe, Ferenc Honbolygo & Leo Blomert - 2006 - Cognitive Science 30 (3):451-479.
    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/ → [leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for (...)
    3 citations
  15. Computational Models of Miscommunication Phenomena. Matthew Purver, Julian Hough & Christine Howes - 2018 - Topics in Cognitive Science 10 (2):425-451.
    Miscommunication phenomena such as repair in dialogue are important indicators of the quality of communication. Automatic detection is therefore a key step toward tools that can characterize communication quality and thus help in applications from call center management to mental health monitoring. However, most existing computational linguistic approaches to these phenomena are unsuitable for general use in this way, and particularly for analyzing human–human dialogue: Although models of other-repair are common in human-computer dialogue systems, they tend to focus on (...)
    2 citations
  16. Merging information in speech recognition: Feedback is never necessary. Dennis Norris, James M. McQueen & Anne Cutler - 2000 - Behavioral and Brain Sciences 23 (3):299-325.
    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. (...)
    52 citations
  17. Neutrosophic speech recognition algorithm for speech under stress by machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought on (...)
  18. Speech recognition technology. F. Beaufays, H. Bourlard, Horacio Franco & Nelson Morgan - 2002 - In Michael A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press.
  19. Restricted Speech Recognition in Noise and Quality of Life of Hearing-Impaired Children and Adolescents With Cochlear Implants – Need for Studies Addressing This Topic With Valid Pediatric Quality of Life Instruments. Maria Huber & Clara Havas - 2019 - Frontiers in Psychology 10.
    Cochlear implants (CI) support the development of oral language in hearing-impaired children. However, even with CI, speech recognition in noise (SRiN) is limited. This raises the question of whether these restrictions are related to the quality of life (QoL) of children and adolescents with CI, and how SRiN and QoL are related to each other. A systematic literature search found only three studies, indicating positive moderating effects between SRiN and QoL of young CI users. (...)
  20. DLD: An Optimized Chinese Speech Recognition Model Based on Deep Learning. Hong Lei, Yue Xiao, Yanchun Liang, Dalin Li & Heow Pueh Lee - 2022 - Complexity 2022:1-8.
    Speech recognition technology has played an indispensable role in realizing human-computer intelligent interaction. However, most current Chinese speech recognition systems are provided either online or as offline models with low accuracy and poor performance. To improve the performance of offline Chinese speech recognition, we propose a hybrid acoustic model combining a deep convolutional neural network (DCNN), long short-term memory (LSTM), and a deep neural network (DNN). This model utilizes the DCNN to reduce frequency variation and adds a batch normalization layer (...)
  21. Policing based on automatic facial recognition. Zhilong Guo & Lewis Kennedy - 2023 - Artificial Intelligence and Law 31 (2):397-443.
    Advances in technology have transformed and expanded the ways in which policing is run. One new manifestation is the mass acquisition and processing of private facial images via automatic facial recognition by the police: what we conceptualise as AFR-based policing. However, there is still a lack of clarity on the manner and extent to which this largely-unregulated technology is used by law enforcement agencies and on its impact on fundamental rights. Social understanding and involvement are still insufficient in (...)
    1 citation
  22. Masked Speech Recognition in School-Age Children. Lori J. Leibold & Emily Buss - 2019 - Frontiers in Psychology 10.
  23. Longitudinal Speech Recognition in Noise in Children: Effects of Hearing Status and Vocabulary. Elizabeth A. Walker, Caitlin Sapp, Jacob J. Oleson & Ryan W. McCreery - 2019 - Frontiers in Psychology 10.
  24. Effects of Semantic Context and Fundamental Frequency Contours on Mandarin Speech Recognition by Second Language Learners. Linjun Zhang, Yu Li, Han Wu, Xin Li, Hua Shu, Yang Zhang & Ping Li - 2016 - Frontiers in Psychology 7:189783.
    Speech recognition by second language (L2) learners in optimal and suboptimal conditions has been examined extensively with English as the target language in most previous studies. This study extended existing experimental protocols (Wang et al., 2013) to investigate Mandarin speech recognition by Japanese learners of Mandarin at two different levels (elementary vs. intermediate) of proficiency. The overall results showed that in addition to L2 proficiency, semantic context, F0 contours, and listening condition all affected the (...)
    3 citations
  25. Merging information versus speech recognition. Irene Appelbaum - 2000 - Behavioral and Brain Sciences 23 (3):325-326.
    Norris, McQueen & Cutler claim that all known speech recognition data can be accounted for with their autonomous model, “Merge.” But this claim is doubly misleading. (1) Although speech recognition is autonomous in their view, the Merge model is not. (2) The body of data which the Merge model accounts for is not, in their view, speech recognition data.
  26. Audio-visual speech recognition. G. Potamianos & J. Luettin - 2005 - In Keith Brown (ed.), Encyclopedia of Language and Linguistics. Elsevier.
  27. Speech recognition: Statistical methods. L. R. Rabiner & B. H. Juang - 2005 - In Keith Brown (ed.), Encyclopedia of Language and Linguistics. Elsevier. pp. 1-18.
  28. Techno-Telepathy & Silent Subvocal Speech-Recognition Robotics. Virgil W. Brower - 2021 - HORIZON. Studies in Phenomenology 10 (1):232-257.
    The primary focus of this project is the silent and subvocal speech-recognition interface unveiled in 2018 as an ambulatory device wearable on the neck that detects a myoelectrical signature by electrodes worn on the surface of the face, throat, and neck. These emerge from an alleged “intending to speak” by the wearer silently-saying-something-to-oneself. This inner voice is believed to occur while one reads in silence or mentally talks to oneself. The artifice does not require spoken sounds, opening the (...)
  29. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Rebecca Carroll, Anna Warzybok, Birger Kollmeier & Esther Ruigendijk - 2016 - Frontiers in Psychology 7:170619.
    Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition (...)
    3 citations
  30. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise. Göran B. W. Söderlund & Elisabeth Nilsson Jobs - 2016 - Frontiers in Psychology 7.
    1 citation
  31. Perceptual units in speech recognition. Dominic W. Massaro - 1974 - Journal of Experimental Psychology 102 (2):199.
  32. Single-Channel Speech Enhancement Techniques for Distant Speech Recognition. Ramaswamy Kumaraswamy & Jaya Kumar Ashwini - 2013 - Journal of Intelligent Systems 22 (2):81-93.
    This article presents an overview of the single-channel dereverberation methods suitable for distant speech recognition application. The dereverberation methods are mainly classified based on the domain of enhancement of speech signal captured by a distant microphone. Many single-channel speech enhancement methods focus on either denoising or dereverberating the distorted speech signal. There are very few methods that consider both noise and reverberation effects. Such methods are discussed under a multistage approach in this article. The article (...)
  33. On Dynamic Pitch Benefit for Speech Recognition in Speech Masker. Jing Shen & Pamela E. Souza - 2018 - Frontiers in Psychology 9.
    Previous work demonstrated that dynamic pitch (i.e., pitch variation in speech) aids speech recognition in various types of noises. While this finding suggests dynamic pitch enhancement in target speech can benefit speech recognition in noise, it is of importance to know what noise characteristics affect dynamic pitch benefit and who will benefit from enhanced dynamic pitch cues. Following our recent finding that temporal modulation in noise influences dynamic pitch benefit, we examined the effect of (...)
  34. English Phrase Speech Recognition Based on Continuous Speech Recognition Algorithm and Word Tree Constraints. Haifan Du & Haiwen Duan - 2021 - Complexity 2021:1-11.
    This paper combines domestic and international research results to analyze and study the difference between the attribute features of English phrase speech and noise to enhance the short-time energy, which is used to improve the threshold judgment sensitivity; noise addition to the discrepancy data set is used to enhance the recognition robustness. The backpropagation algorithm is improved to constrain the range of weight variation, avoid oscillation phenomenon, and shorten the training time. In the real English phrase sound (...) system, there are problems such as massive training data and low training efficiency caused by the super large-scale model parameters of the convolutional neural network. To address these problems, the NWBP algorithm is based on the oscillation phenomenon that tends to occur when searching for the minimum error value in the late training period of the network parameters, using the K-MEANS algorithm to obtain the seed nodes that approach the minimal error value, and using the boundary value rule to reduce the range of weight change to reduce the oscillation phenomenon so that the network error converges as soon as possible and improve the training efficiency. Through simulation experiments, the NWBP algorithm improves the degree of fitting and convergence speed in the training of complex convolutional neural networks compared with other algorithms, reduces the redundant computation, and shortens the training time to a certain extent, and the algorithm has the advantage of accelerating the convergence of the network compared with simple networks. The word tree constraint and its efficient storage structure are introduced, which improves the storage efficiency of the word tree constraint and the retrieval efficiency in the English phrase recognition search.
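    The word-tree constraint mentioned above is essentially a lexicon prefix tree: during the search, a hypothesis may only be extended with symbols that keep it inside the tree, which prunes out-of-vocabulary paths early. A minimal Python sketch using an ordinary nested dict, not the paper's optimized storage structure.

    class WordTree:
        def __init__(self, words):
            self.root = {}
            for w in words:
                node = self.root
                for ch in w:
                    node = node.setdefault(ch, {})
                node["#"] = True                 # end-of-word marker

        def allowed(self, prefix):
            """Symbols that may legally extend `prefix` during decoding."""
            node = self.root
            for ch in prefix:
                node = node.get(ch)
                if node is None:
                    return set()                 # left the tree: dead hypothesis
            return {ch for ch in node if ch != "#"}

    tree = WordTree(["speech", "speed", "spell"])    # toy vocabulary
    assert tree.allowed("spe") == {"e", "l"}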
  35. A salience-driven approach to speech recognition for human-robot interaction. Pierre Lison - 2010 - In T. Icard & R. Muskens (eds.), Interfaces: Explorations in Logic, Language and Computation. Springer Berlin. pp. 102-113.
  36. Mandarin-Speaking Children’s Speech Recognition: Developmental Changes in the Influences of Semantic Context and F0 Contours. Zhou Hong, Li Yu, Liang Meng, Guan Connie Qun, Zhang Linjun, Shu Hua & Zhang Yang - 2017 - Frontiers in Psychology 8.
    1 citation
  37. VLSI architecture design for BNN speech recognition. Jia-Ching Wang, Jhing-Fa Wang & Fan-Min Li - 2003 - Signal Processing, Pattern Recognition, and Applications.
  38. Shortlist: a connectionist model of continuous speech recognition. Dennis Norris - 1994 - Cognition 52 (3):189-234.
  39. Temporal cortex activation during speech recognition: an optical topography study. Hiroki Sato, Tatsuya Takeuchi & Kuniyoshi L. Sakai - 1999 - Cognition 73 (3):B55-B66.
  40. Why might there be lexical-prelexical feedback in speech recognition? Dennis Norris & James M. McQueen - 2025 - Cognition 255 (C):106025.
  41. Social, Cognitive, and Neural Constraints on Subjectivity and Agency: Implications for Dissociative Identity Disorder. Peter Q. Deeley - 2003 - Philosophy, Psychiatry, and Psychology 10 (2):161-167.
    In lieu of an abstract, here is a brief excerpt of the content: In this commentary, I consider Matthew's argument after making some general observations about dissociative identity disorder (DID). In contrast to Matthew's statement that "cases of DID, although not science fiction, are extraordinary" (p. 148), I believe that there are natural analogs of (...)
    7 citations
  42. Automatic recognition method of installation errors of metallurgical machinery parts based on neural network. Bo Zhan & Hailong Cui - 2022 - Journal of Intelligent Systems 31 (1):321-331.
    The installation error of metallurgical machinery parts is one of the common sources of errors in mechanical equipment. Because the installation error of different parts influences different mechanical equipment differently, a simple linear formula cannot be used to identify the installation error. In the past, manual recognition and touch recognition methods lacked error information analysis, which led to inaccurate recognition results. To address this problem, an automatic recognition method based (...)
  43. Body gesture and facial expression analysis for automatic affect recognition. G. Castellano, G. Caridakis, A. Camurri, K. Karpouzis, G. Volpe & S. Kollias - 2010 - In Klaus R. Scherer, Tanja Bänziger & Etienne Roesch (eds.), A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford University Press.
  44. Shortlist B: A Bayesian model of continuous speech recognition. Dennis Norris & James M. McQueen - 2008 - Psychological Review 115 (2):357-395.
    75 citations
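    The Bayesian core of Shortlist B combines prior word probabilities with perceptual evidence: P(word | input) is proportional to P(input | word) * P(word), normalized over the candidate set. A toy sketch with made-up numbers, not the model's phoneme-confusion likelihoods.

    def posterior(priors, likelihoods):
        """priors, likelihoods: dicts mapping word -> probability."""
        joint = {w: priors[w] * likelihoods[w] for w in priors}
        z = sum(joint.values())                 # normalizing constant
        return {w: p / z for w, p in joint.items()}

    print(posterior({"speech": 0.6, "speed": 0.4},
                    {"speech": 0.2, "speed": 0.1}))
    # {'speech': 0.75, 'speed': 0.25}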
  45. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Dan Jurafsky & James H. Martin - 2000 - Prentice-Hall.
    The first of its kind to thoroughly cover language technology at all levels and with all modern technologies, this book takes an empirical approach to the ...
    33 citations
  46. (1 other version) A recognition-sensitive phenomenology of hate speech. Suzanne Whitten - 2018 - Critical Review of International Social and Political Philosophy 23 (7):1-21.
    One particularly prominent strand of hate speech theory conceptualizes the harm in hate speech by considering the immediate illocutionary force of a hate speech ‘act’. What appears to be missing from such a conception, however, is how recognition relations and normative expectations present in a speech situation influence the harm such speech causes to its victims. Utilizing a particular real-world example, this paper illustrates how these defining background conditions and intersubjective relations influence the harm (...)
    2 citations
  47. Automatic Facial Expression Recognition in Standardized and Non-standardized Emotional Expressions. Theresa Küntzler, T. Tim A. Höfling & Georg W. Alpers - 2021 - Frontiers in Psychology 12:627561.
    Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, applicability to more naturalistic facial expressions still remains uncertain. Hence, we test and compare performance of three different FER systems (Azure Face API, (...)
  48. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers. Hartmut Meister, Stefan Schreitmüller, Magdalene Ortmann, Sebastian Rählmann & Martin Walger - 2016 - Frontiers in Psychology 7.
    2 citations
  49. Autonomy, free speech and automatic behaviour. Andrés Moles - 2006 - Res Publica 13 (1):53-75.
    One of the strongest defences of free speech holds that autonomy requires the protection of speech. In this paper I examine five conditions that autonomy must satisfy. I survey recent research in social psychology regarding automatic behaviour, and articulate a challenge to autonomy. I argue that a plausible strategy for neutralising some of the autonomy-threatening automatic responses consists in avoiding exposure to the environmental features that trigger them. If this is so, we can good (...)
    8 citations
  50. Recognition of continuous speech requires top-down processing. Kenneth N. Stevens - 2000 - Behavioral and Brain Sciences 23 (3):348-348.
    The proposition that feedback is never necessary in speech recognition is examined for utterances consisting of sequences of words. In running speech the features near word boundaries are often modified according to language-dependent rules. Application of these rules during word recognition requires top-down processing. Because isolated words are not usually modified by rules, their recognition could be achieved by bottom-up processing only.
1 — 50 / 966