Results for 'Multimodal expression'

976 found
  1. Chapter 18. Multimodal expressions of the HUMAN VICTIM IS ANIMAL metaphor in horror films.Eduardo Urios-Aparisi & Charles J. Forceville - 2009 - In Eduardo Urios-Aparisi & Charles J. Forceville (eds.), Multimodal Metaphor. Mouton de Gruyter.
  2. The multimodal construction of political personae through the strategic management of semiotic resources of emotion expression.Gheorghe-Ilie Farte & Nicolae-Sorin Dragan - 2022 - Social Semiotics 32 (3):1-25.
    This paper presents an analytical framework for analyzing how multimodal resources of emotion expression are semiotically materialized in discursive interactions specific to political discourse. Interested in how political personae are emotionally constructed through multimodal meaning-making practices, our analysis model assumes an interdisciplinary perspective, which integrates facial expression analysis – using FaceReader™ software –, the theory of emotional arcs and bodily actions (hand gestures) analysis that express emotions, in the analytical framework of multimodality. The results show how (...)
  3. Method Development for Multimodal Data Corpus Analysis of Expressive Instrumental Music Performance.Federico Ghelli Visi, Stefan Östersjö, Robert Ek & Ulrik Röijezon - 2020 - Frontiers in Psychology 11.
    Musical performance is a multimodal experience, for performers and listeners alike. This paper reports on a pilot study which constitutes the first step toward a comprehensive approach to the experience of music as performed. We aim at bridging the gap between qualitative and quantitative approaches, by combining methods for data collection. The purpose is to build a data corpus containing multimodal measures linked to high-level subjective observations. This will allow for a systematic inclusion of the knowledge of music (...)
  4. The multimodal interactional organization of tasting: Practices of tasting cheese in gourmet shops.Lorenza Mondada - 2018 - Discourse Studies 20 (6):743-769.
    Taste is a central sense for humans and animals, and it has been largely studied either from physiological and neurological approaches or from socio-cultural ones. This paper adopts another view, focused on the activity of tasting rather than on the sense of taste, approached within the perspective of ethnomethodology and multimodal conversation analysis. This view addresses the activity of tasting as it is interactionally organized in specific social settings, observed in a naturalistic way, on the basis of video recordings. (...)
  5. Solving Multimodal Paradoxes.Federico Pailos & Lucas Rosenblatt - 2014 - Theoria 81 (3):192-210.
    Recently, it has been observed that the usual type-theoretic restrictions are not enough to block certain paradoxes involving two or more predicates. In particular, when we have a self-referential language containing modal predicates, new paradoxes might appear even if there are type restrictions for the principles governing those predicates. In this article we consider two type-theoretic solutions to multimodal paradoxes. The first one adds types for each of the modal predicates. We argue that there are a number of problems (...)
  6. Realization of Self-Adaptive Higher Teaching Management Based Upon Expression and Speech Multimodal Emotion Recognition.Huihui Zhou & Zheng Liu - 2022 - Frontiers in Psychology 13.
    In the process of communication between people, everyone will have emotions, and different emotions will have different effects on communication. With the help of external performance information accompanied by emotional expression, such as emotional speech signals or facial expressions, people can easily communicate with each other and understand each other. Emotion recognition is an important network of affective computers and research centers for signal processing, pattern detection, artificial intelligence, and human-computer interaction. Emotions convey important information in human communication and (...)
  7. Interactive Multimodal Television Media Adaptive Visual Communication Based on Clustering Algorithm.Huayuan Yang & Xin Zhang - 2020 - Complexity 2020:1-9.
    This article starts with the environmental changes in human cognition, analyzes the virtual as the main feature of visual perception under digital technology, and explores the transition from passive to active human cognitive activities. With the diversified understanding of visual information, human contradiction of memory also began to become prominent. Aiming at the problem that the existing multimodal TV media recognition methods have low recognition rate of unknown application layer protocols, an adaptive clustering method for identifying unknown application layer (...)
  8. Probative Norms for Multimodal Visual Arguments.J. Anthony Blair - 2015 - Argumentation 29 (2):217-233.
    The question, “What norms are appropriate for the evaluation of the probative merits of visual arguments?” underlies the investigation of this paper. The notions of argument and of multimodal visual argument employed in the study are explained. Then four multimodal visual arguments are analyzed and their probative merits assessed. It turns out to be possible to judge these qualities using the same criteria that apply to verbally expressed arguments. Since the sample is small and not claimed to be (...)
  9. Multimodal enactment of characters in conference presentations.Noelia Ruiz-Madrid & Julia Valeiras-Jurado - 2019 - Discourse Studies 21 (5):561-583.
    In academic oral genres such as conference presentations, speakers resort to more than words to convey meaning. Research also suggests that persuasion, an important element of the communicative purpose of conference presentations, is frequently achieved through a combination of semiotic modes. Therefore, a skilful orchestration of these modes can be considered key to achieving effective communication in this genre. However, our understanding of persuasion has often focused on specific elements of the message considered in isolation and mainly from the linguistic (...)
  10. Multimodal Abduction.Lorenzo Magnani - 2008 - Proceedings of the XXII World Congress of Philosophy 34:21-24.
    In this paper I contend that abduction is essentially multimodal, in that both data and hypotheses can have a full range of verbal and sensory representations, involving words, sights, images, smells, etc. but also kinesthetic experiences and other feelings such as pain, and thus all sensory modalities. The kinesthetic aspects simply explain that abductive reasoning is basically manipulative; both linguistic and non-linguistic signs have an internal semiotic life, as particular configurations of neural networks and chemical distributions (and in terms (...)
  11. Reconstructing Multimodal Arguments in Advertisements: Combining Pragmatics and Argumentation Theory.Fabrizio Macagno & Rosalice Botelho Wakim Souza Pinto - 2021 - Argumentation 35 (1):141-176.
    The analysis of multimodal argumentation in advertising is a crucial and problematic area of research. While its importance is growing in a time characterized by images and pictorial messages, the methods used for interpreting and reconstructing the structure of arguments expressed through verbal and visual means capture only isolated dimensions of this complex phenomenon. This paper intends to propose and illustrate a methodology for the reconstruction and analysis of “double-mode” arguments in advertisements, combining the instruments developed in social semiotics, (...)
  12. Agreeing/Disagreeing in a Dialogue: Multimodal Patterns of Its Expression.Laszlo Hunyadi - 2019 - Frontiers in Psychology 10.
  13. Online Construction of Multimodal Metaphors In Murnau’s Movie Faust.José Manuel Ureña Gómez-Moreno - 2017 - Metaphor and Symbol 32 (3):192-210.
    This study explores multimodal metaphors and metonymies in Faust, a German Expressionist silent fiction movie by Murnau. The article combines principles of psychocinematics, an interdisciplinary scientific field of enquiry, with the multimodal metaphor and expressive movement model, which looks into the temporal dynamics of metaphoric meaning-making by movie watchers. It is shown that interrelating both film-analytic approaches provides a deeper and more comprehensive insight into how figurative thought influences psycho-cognitive processes in the moviegoer’s mind as they dynamically unfold (...)
  14. Multimodal Evidence of Atypical Processing of Eye Gaze and Facial Emotion in Children With Autistic Traits.Shadi Bagherzadeh-Azbari, Gilbert Ka Bo Lau, Guang Ouyang, Changsong Zhou, Andrea Hildebrandt, Werner Sommer & Ming Lui - 2022 - Frontiers in Human Neuroscience 16.
    According to the shared signal hypothesis the impact of facial expressions on emotion processing partially depends on whether the gaze is directed toward or away from the observer. In autism spectrum disorder several aspects of face processing have been found to be atypical, including attention to eye gaze and the identification of emotional expressions. However, there is little research on how gaze direction affects emotional expression processing in typically developing individuals and in those with ASD. This question is investigated (...)
  15. Multimodal higher education: digital, social and environmental relationalities.Nataša Lacković - 2024 - New York, NY: Routledge. Edited by Alin Olteanu.
    This book envisions a relational and multimodal conceptualisation of higher education, which advocates for knowledge to be analyzed under three relational dimensions: social, digital and environmental. The volume draws on interdisciplinary approaches grounded in Peirceian semiotics, in exploring an integration of these dimensions in higher education theory and practice. The book situates learning in an awareness of the environment, grounded in principles of interconnectedness and flattened hierarchies. The volume features practical case studies through dialogues with higher education teachers, presenting (...)
  16. Primary Metaphors and Multimodal Metaphors of Food: Examples from an Intercultural Food Design Event.Ming-Yu Tseng - 2017 - Metaphor and Symbol 32 (3):211-229.
    The conceptual metaphor “THOUGHT IS FOOD” is exemplified in many verbal expressions. Nevertheless, how food metaphors are realized through the actual dining experience remains unexplored. Based on a food design event called EATAIPEI that took place in the London Design Festival in 2015, one aimed at promoting Taipei as World Design Capital 2016, this article analyzes how the multimodal metaphors of food were creatively represented and elaborated within it. This study proposes an analytical framework that combines insights from cognitive (...)
  17. Beijing Olympics and Beijing opera: A multimodal metaphor in a CCTV Olympics commercial.Ning Yu - 2011 - Cognitive Linguistics 22 (3):595-628.
    This paper is a cognitive semantic analysis of a CCTV educational commercial, which is one of a series designed and produced in preparation for, and in celebration of, the Beijing 2008 Olympic Games. Called the “Beijing Opera Episode”, this TV commercial converges on the theme: “To mount the stage of the world, and to put on a show of China”. That is, China sees her hosting of the 2008 Olympics by Beijing as a great opportunity for her to step onto (...)
  18. Intersemiotic Complementarity in Legal Cartoons: An Ideational Multimodal Analysis.Terry D. Royce - 2015 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 28 (4):719-744.
    The analysis of legal communication has almost exclusively been the domain of discourse analysts focusing on the ways that the linguistic system is used to realise legal meanings. Multimodal discourse analysis, where visual forms in combination with traditional linguistic expressions co-occur, is now also an area of expanding interest. Taking a Systemic Functional Linguistics “social semiotic” perspective, this paper applies and critiques an analytical framework that has been used for examining intersemiotic complementarity in various types of page-based multimodal (...)
  19. How to Induce and Recognize Facial Expression of Emotions by Using Past Emotional Memories: A Multimodal Neuroscientific Algorithm.Michela Balconi & Giulia Fronda - 2021 - Frontiers in Psychology 12.
  20. Using Facial Micro-Expressions in Combination With EEG and Physiological Signals for Emotion Recognition.Nastaran Saffaryazdi, Syed Talal Wasim, Kuldeep Dileep, Alireza Farrokhi Nia, Suranga Nanayakkara, Elizabeth Broadbent & Mark Billinghurst - 2022 - Frontiers in Psychology 13:864047.
    Emotions are multimodal processes that play a crucial role in our everyday lives. Recognizing emotions is becoming more critical in a wide range of application domains such as healthcare, education, human-computer interaction, Virtual Reality, intelligent agents, entertainment, and more. Facial macro-expressions or intense facial expressions are the most common modalities in recognizing emotional states. However, since facial expressions can be voluntarily controlled, they may not accurately represent emotional states. Earlier studies have shown that facial micro-expressions are more reliable than (...)
  21. Effects of Scale on Multimodal Deixis: Evidence From Quiahije Chatino.Kate Mesh, Emiliana Cruz, Joost van de Weijer, Niclas Burenhult & Marianne Gullberg - 2021 - Frontiers in Psychology 11.
    As humans interact in the world, they often orient one another's attention to objects through the use of spoken demonstrative expressions and head and/or hand movements to point to the objects. Although indicating behaviors have frequently been studied in lab settings, we know surprisingly little about how demonstratives and pointing are used to coordinate attention in large-scale space and in natural contexts. This study investigates how speakers of Quiahije Chatino, an indigenous language of Mexico, use demonstratives and pointing to give (...)
  22. Facial Expression in Nonhuman Animals.Bridget M. Waller & Jérôme Micheletta - 2013 - Emotion Review 5 (1):54-59.
    Many nonhuman animals produce facial expressions which sometimes bear clear resemblance to the facial expressions seen in humans. An understanding of this evolutionary continuity between species, and how this relates to social and ecological variables, can help elucidate the meaning, function, and evolution of facial expression. This aim, however, requires researchers to overcome the theoretical and methodological differences in how human and nonhuman facial expressions are approached. Here, we review the literature relating to nonhuman facial expressions and suggest future (...)
  23. Toward a multimodal and continuous approach of infant-adult interactions.Marianne Jover & Maya Gratier - 2023 - Interaction Studies 24 (1):5-47.
    Adult-infant early dyadic interactions have been extensively explored by developmental psychologists. Around the age of 2 months, infants already demonstrate complex, delicate and very sensitive behaviors that seem to express their ability to interact and share emotions with their caregivers. This paper presents 3 pilot studies of parent-infant dyadic interaction in various set-ups. The first two present longitudinal data collected on two infants aged between 1 and 6 months and their mothers. We analyzed the development of coordination between them, at (...)
  24. The Revival of Multimodal Aesthetics.Fay Zika - 2018 - Proceedings of the XXIII World Congress of Philosophy 1:359-364.
    One of the recent areas of discussion in aesthetics and the visual arts is the tension between the so-called “ocularcentric” tradition, on the one hand, and the tendency to move in a multisensory, multimodal direction, on the other. My aim in this paper is to bring out this tension by tracing it in a number of moments; firstly, in the late 19th-early 20th century discussion, concerning the “total art work” and the contribution of synaesthesia; secondly, the reaction to what (...)
  25. A first step towards modeling semistructured data in hybrid multimodal logic.Nicole Bidoit, Serenella Cerrito & Virginie Thion - 2004 - Journal of Applied Non-Classical Logics 14 (4):447-475.
    XML documents and, more generally, semistructured data, can be seen as labelled graphs. In this paper we set a correspondence between such graphs and the models of a language of hybrid multimodal logic. This allows us to characterize a schema for semistructured data as a formula of hybrid multimodal logic, and instances of the schema as models of this formula. We also investigate how to express in such a logic integrity constraints on semistructured data, in particular some classes (...)
  26. “Noah’s Family Was on Lockdown”: Multimodal Metaphors in Religious Coronavirus-Related Internet Memes in the Nigerian WhatsApp Space.Oluwabunmi O. Oyebode & Foluke O. Unuabonah - 2022 - Metaphor and Symbol 37 (4):287-302.
    This paper examines the forms and functions of religious Internet memes that relate to Covid-19, with a view to identifying the conceptual metaphors that underlie the creation of the memes. The data, which consist of thirty religious Internet memes shared in the Nigerian WhatsApp space, are analyzed qualitatively using the categorization of religious Internet memes, and the concept of multimodal metaphors. The memes contain (non-)linguistic metaphors such as the picture of Biblical Noah’s ark and expressions such as Noah’s family (...)
  27. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People.Ezgi Mamus, Laura J. Speed, Lilia Rissman, Asifa Majid & Aslı Özyürek - 2023 - Cognitive Science 47 (1):e13228.
    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects (...)
  28. Distant time, distant gesture: speech and gesture correlate to express temporal distance.Javier Valenzuela & Daniel Alcaraz Carrión - 2021 - Semiotica 2021 (241):159-183.
    This study investigates whether there is a relation between the semantics of linguistic expressions that indicate temporal distance and the spatial properties of their co-speech gestures. To this date, research on time gestures has focused on features such as gesture axis, direction, and shape. Here we focus on a gesture property that has been overlooked so far: the distance of the gesture in relation to the body. To achieve this, we investigate two types of temporal linguistic expressions: proximal (...)
  29. Audiovisual Scene Analysis: A Gestalt Paradigm in Full Development for the Study of the Multimodality of Language.Émilie Troille - 2011 - Iris 32:179-196.
    In this contribution we will approach language and images in different modalities: speech and face, anticipated and imagined movements, illusions on the sound by the image. It will be the opportunity for us to revisit the Gestalt concepts which were considered obsolete since structuralism in Humanities. As instantiated by Gilbert Durand in The Anthropological Structures of the Imaginary (1999, French 1st ed. 1960), we shall recall that Gestalt is not—even implicitly—an exclusively static approach to cognition. On the contrary we will (...)
  30. Enterprise Strategic Management From the Perspective of Business Ecosystem Construction Based on Multimodal Emotion Recognition.Wei Bi, Yongzhen Xie, Zheng Dong & Hongshen Li - 2022 - Frontiers in Psychology 13.
    Emotion recognition is an important part of building an intelligent human-computer interaction system and plays an important role in human-computer interaction. Often, people express their feelings through a variety of symbols, such as words and facial expressions. A business ecosystem is an economic community based on interacting organizations and individuals. Over time, they develop their capabilities and roles together and tend to develop themselves in the direction of one or more central enterprises. This paper aims to study a multimodal (...)
  31. Weaponized iconoclasm in Internet memes featuring the expression ‘Fake News’.Christopher A. Smith - 2019 - Discourse and Communication 13 (3):303-319.
    The expression ‘Fake News’ inside Internet memes engenders significant online virulence, possibly heralding an iconoclastic emergence of weaponized propaganda for assaulting agencies reared on public trust. Internet memes are multimodal artifacts featuring ideological singularities designed for ‘flash’ consumption, often composed by numerous voices echoing popular, online culture. This study proposes that ‘Fake News’ Internet memes are weaponized iconoclastic multimodal propaganda discourse and attempts to delineate them as such by asking: What power relations and ideologies do Internet memes (...)
  32. What can cognitive linguistics tell us about language-image relations? A multidimensional approach to intersemiotic convergence in multimodal texts.Javier Marmol Queralto & Christopher Hart - 2021 - Cognitive Linguistics 32 (4):529-562.
    In contrast to symbol-manipulation approaches, Cognitive Linguistics offers a modal rather than an amodal account of meaning in language. From this perspective, the meanings attached to linguistic expressions, in the form of conceptualisations, have various properties in common with visual forms of representation. This makes Cognitive Linguistics a potentially useful framework for identifying and analysing language-image relations in multimodal texts. In this paper, we investigate language-image relations with a specific focus on intersemiotic convergence. Analogous with research on gesture, we (...)
  33. Wundt and Bühler on Gestural Expression: From Psycho-Physical Mirroring to the Diacrisis.Basil Vassilicos - 2021 - In Arnaud Dewalque, Charlotte Gauvry & Sébastien Richard (eds.), Philosophy of Language in the Brentano School: Reassessing the Brentanian Legacy. Palgrave-Macmillan. pp. 279-297.
    This paper explores how Wundt’s and Bühler’s respective conceptions of gestural expression have implications for how each conceives of what, in broad terms, may be understood as a ‘grammar of gestures’: that is, the rules for the formation and performance of gestures with and without speech. Unlike previous scholarship that has looked at the relationship of Wundt and Bühler, the aim here will be to give particular attention to the relevance of their respective accounts for current philosophical and linguistic (...)
  34. Automatic facial expression interpretation: where human computer interaction, artificial intelligence and cognitive science intersect.Christine L. Lisetti & Diane J. Schiano - 2000 - Pragmatics and Cognition 8 (1):185-236.
    We discuss here one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some of the relevant findings on facial expressions from cognitive science and psychology that can be understood by and be useful to researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, we survey some of the latest progresses in this area reached by various approaches (...)
  35. MindLink-Eumpy: An Open-Source Python Toolbox for Multimodal Emotion Recognition.Ruixin Li, Yan Liang, Xiaojian Liu, Bingbing Wang, Wenxin Huang, Zhaoxin Cai, Yaoguang Ye, Lina Qiu & Jiahui Pan - 2021 - Frontiers in Human Neuroscience 15.
    Emotion recognition plays an important role in intelligent human–computer interaction, but the related research still faces the problems of low accuracy and subject dependence. In this paper, an open-source software toolbox called MindLink-Eumpy is developed to recognize emotions by integrating electroencephalogram and facial expression information. MindLink-Eumpy first applies a series of tools to automatically obtain physiological data from subjects and then analyzes the obtained facial expression data and EEG data, respectively, and finally fuses the two different signals at (...)
  36. ‘It all begins with a teacher’: A multimodal critical discourse analysis of Singapore’s teacher recruitment videos.Peter Teo - 2021 - Discourse and Communication 15 (3):330-348.
    This study focuses on a series of videos aimed at teacher recruitment in Singapore and how they are used as an ideological tool for persuasion. By adopting a multimodal critical discourse analysis approach to focus on affect, it examines how these videos create and promulgate the ideology of an ideal teacher as one who is caring, encouraging and supportive of students. The analysis shows how affect is not only embodied in and performed by the primary protagonists in the video (...)
  37. Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech.Maëva Michon, José Zamorano-Abramson & Francisco Aboitiz - 2022 - Frontiers in Psychology 13.
    While influential works since the 1970s have widely assumed that imitation is an innate skill in both human and non-human primate neonates, recent empirical studies and meta-analyses have challenged this view, indicating other forms of reward-based learning as relevant factors in the development of social behavior. The visual input translation into matching motor output that underlies imitation abilities instead seems to develop along with social interactions and sensorimotor experience during infancy and childhood. Recently, a new visual stream has been identified (...)
  38. The quality enhancement of action research on primary school English instruction in Chinese rural areas: An analysis based on multimodality.Haiyan Zhang, Cunxin Han, Hongyan Ma & Liusheng Wang - 2022 - Frontiers in Psychology 13.
    This study investigates the influences of action research on primary school English instruction from five dimensions in the classroom, viz., types of questions, language errors, gestures, facial expressions, and interpersonal distance. Four English teachers’ 9 real classroom teaching videos before and after action research are collected and annotated by using ELAN software. The results show that primary school English teachers in Chinese rural areas prefer closed questions to open questions; They make some language errors; Deictic gestures are the most common (...)
  39. Performing Orders: Speech Acts, Facial Expressions and Gender Bias.Filippo Domaneschi, Marcello Passarelli & Luca Andrighetto - 2018 - Journal of Cognition and Culture 18 (3-4):343-357.
    The business of a sentence is not only to describe some state of affairs but also to perform other kinds of speech acts like ordering, suggesting, asking, etc. Understanding the kind of action performed by a speaker who utters a sentence is a multimodal process which involves the computing of verbal and non-verbal information. This work aims at investigating if the understanding of a speech act is affected by the gender of the actor that produces the utterance in combination (...)
  40. The Particle Jako (“Like”) in Spoken Czech: From Expressing Comparison to Mobilizing Affiliative Responses.Florence Oloff - 2022 - Frontiers in Psychology 12.
    This contribution investigates the use of the Czech particle jako in naturally occurring conversations. Inspired by interactional research on unfinished or suspended utterances and on turn-final conjunctions and particles, the analysis aims to trace the possible development of jako from conjunction to a tag-like particle that can be exploited for mobilizing affiliative responses. Traditionally, jako has been described as conjunction used for comparing two elements or for providing a specification of a first element [“X like Y”]. In spoken Czech, however, (...)
  41. Cultural Reflections in Pakistani Rap Music: A Study of ‘The Sibbi Song’ and ‘Karachi Chal’.Maliha Ameen - 2024 - Journal of Social Sciences and Humanities 63 (1):23-58.
    This study probes into a comparative multimodal discourse analysis (MDA) of two influential Pakistani rap music videos ‘The Sibbi Song’ by Abid Brohi and ‘Karachi Chal’ performed by Young Stunners, a rap duo consisting of Talha Anjum and Talhah Yunus to explore their portrayal of societal hierarchies and individual identity. Drawing upon the theoretical framework of MDA the analysis has discovered the linguistic, visual, aural, spatial, and gestural elements of each video. The first video portrays the protagonist’s journey from (...)
  42. The Brand Imaginarium, or on the iconic constitution of brand image.George Rossolatos - 2015 - In Handbook of Brand Semiotics. Kassel: Kassel University Press. pp. 390-457.
    Brand image constitutes one of the most salient, over-defined, heavily explored and multifariously operationalized conceptual constructs in marketing theory and practice. In this Chapter, definitions of brand image that have been offered by marketing scholars will be critically addressed in the context of a culturally oriented discussion, informed by the semiotic notion of iconicity. This cultural bend, in conjunction with the concept’s semiotic contextualization, are expected both to dispel terminological confusions in the either inter-changeable or fuzzily differentiated employment of such (...)
  43. Effects of Ambiguous Gestures and Language on the Time Course of Reference Resolution.Max M. Louwerse & Adrian Bangerter - 2010 - Cognitive Science 34 (8):1517-1529.
    Two eye-tracking experiments investigated how and when pointing gestures and location descriptions affect target identification. The experiments investigated the effect of gestures and referring expressions on the time course of fixations to the target, using videos of human gestures and human voice, and animated gestures and synthesized speech. Ambiguous, yet informative pointing gestures elicited attention and facilitated target identification, akin to verbal location descriptions. Moreover, target identification was superior when both pointing gestures and verbal location descriptions were used. These findings (...)
  44. Relational Processes and Social Projections in Facebook Selfie-Quotation Juxtaposition: An Exploratory Study.Yuan Xiong & Leonardo O. Munalim - forthcoming - Human Affairs.
    The juxtaposition between selfie and quotation is an emerging user-generated Facebook content. This exploratory study is the first to show how Facebook users self-represent themselves through Relational Processes, based on the intertextuality between these verbal and visual modes. Relational Processes refer to the process of characterizing, identifying or describing a person or entity. In this study, 132 Facebook quotations were categorized into three types of Relational Processes based on the users’ selfies. The clauses were restated into I-expressions to center the (...)
  45. More than words: evidence for a Stroop effect of prosody in emotion word processing.Piera Filippi, Sebastian Ocklenburg, Daniel L. Bowling, Larissa Heege, Onur Güntürkün, Albert Newen & Bart de Boer - 2017 - Cognition and Emotion 31 (5):879-891.
    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an (...)
  46. How is emotional resonance achieved in storytellings of sadness/distress?Christoph Rühlemann - 2022 - Frontiers in Psychology 13:952119.
    Storytelling pivots around stance seen as a window unto emotion: storytellers project a stance expressing their emotion toward the events and recipients preferably mirror that stance by affiliating with the storyteller’s stance. Whether the recipient’s affiliative stance is at the same time expressive of his/her emotional resonance with the storyteller and of emotional contagion is a question that has recently attracted intriguing research in Physiological Interaction Research. Connecting to this line of inquiry, this paper concerns itself with storytellings of sadness/distress. (...)
  47. Assessing the subtitling of emotive reactions: a social semiotic approach.Muhammad A. A. Taghian & Ahmad M. Ali - 2023 - Semiotica 2023 (252):51-96.
    This article attempts to evaluate emotive meanings across languages and cultures expressed and elicited semiotically from viewers. It investigates the challenges of subtitling emotive feelings in the American film Homeless to Harvard (2003) into Arabic. It adopts Paul Thibault’s (2000. The multimodal transcription of a television advertisement: Theory and practice. In Anthony Baldry (ed.), Multimodality and multimediality in the distance learning age, 311–385. Campobasso: Palladino Editore) method of multimodal transcription and Feng and O’Halloran’s (2013. The multimodal representation of emotion in (...)
  48. Deriving polarity effects.Raffaella Bernardi - unknown
    Polarity Items are linguistic expressions known for being a ‘lexically controlled’ phenomenon. In this paper we show how their behavior can be implemented in a deductive system. Furthermore, we point out some possible directions to recast the deductive solution into a Tree Adjoining Grammar system. In particular, we suggest to compare the proof system developed for Multimodal Categorial Grammar (Moot & Puite, 1999) with the Partial Proof Trees proposed in (Joshi & Kulick, 1997).
     
  49. Generalizing Deontic Action Logic.Alessandro Giordani & Matteo Pascucci - 2022 - Studia Logica 110 (4):989-1033.
    We introduce a multimodal framework of deontic action logic which encodes the interaction between two fundamental procedures in normative reasoning: conceptual classification and deontic classification. The expressive power of the framework is noteworthy, since it combines insights from agency logic and dynamic logic, allowing for a representation of many kinds of normative conflicts. We provide a semantic characterization for three axiomatic systems of increasing strength, showing how our approach can be modularly extended in order to get different levels of (...)
  50. Towards a stratified metafunctional model of animation.Yufei He - 2021 - Semiotica 2021 (239):1-35.
    Animation is widely acknowledged for dynamically visualizing information and has been increasingly used in educational context. However, the growing presence of educational animation has not been accompanied by well-informed studies that focus on the semiotic features of animation. An emerging perspective influenced by Social Semiotics and Systemic Functional Linguistics greatly complements the current trend of animation studies in the field of science education. Studies taking that perspective model animation as stratified systems (consisting of an expression plane and a content (...)
Results 1–50 of 976