Results for 'Large Language Models'

964 found
  1. Large language models and linguistic intentionality. Jumbly Grindrod - 2024 - Synthese 204 (2):1-24.
    Do large language models like Chat-GPT or Claude meaningfully use the words they produce? Or are they merely clever prediction machines, simulating language use by producing statistically plausible text? There have already been some initial attempts to answer this question by showing that these models meet the criteria for entering meaningful states according to metasemantic theories of mental content. In this paper, I will argue for a different approach—that we should instead consider whether language (...)
    2 citations
  2. Can Large Language Models Counter the Recent Decline in Literacy Levels? An Important Role for Cognitive Science. Falk Huettig & Morten H. Christiansen - 2024 - Cognitive Science 48 (8):e13487.
    Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by (...)
  3. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers? Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making (...)
    1 citation
  4. Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training (...)
    4 citations
  5. Can large language models help solve the cost problem for the right to explanation? Lauritz Munch & Jens Christian Bjerring - forthcoming - Journal of Medical Ethics.
    By now a consensus has emerged that people, when subjected to high-stakes decisions through automated decision systems, have a moral right to have these decisions explained to them. However, furnishing such explanations can be costly. So the right to an explanation creates what we call the cost problem: providing subjects of automated decisions with appropriate explanations of the grounds of these decisions can be costly for the companies and organisations that use these automated decision systems. In this paper, we explore (...)
  6. Large Language Models Demonstrate the Potential of Statistical Learning in Language. Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally (...)
    5 citations
  7. Large Language Models and Inclusivity in Bioethics Scholarship. Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
    1 citation
  8. Do Large Language Models Know What Humans Know? Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version (...)
    3 citations
  9. Large language models and their big bullshit potential. Sarah A. Fisher - 2024 - Ethics and Information Technology 26 (4):1-8.
    Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false (...)
  10. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  12. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    26 citations
  13. Can large language models apply the law? Henrique Marcos - forthcoming - AI and Society:1-10.
    This paper asks whether large language models (LLMs) can apply the law. It does not question whether LLMs should apply the law. Instead, it distinguishes between two interpretations of the ‘can’ question. One, can LLMs apply the law like ordinary individuals? Two, can LLMs apply the law in the same manner as judges? The study examines D’Almeida’s theory of law application, divided into inferential and pragmatic law application. It argues that his account of pragmatic law application can (...)
  14. Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors. Nicholas Ichien, Dušan Stamenković & Keith J. Holyoak - 2024 - Metaphor and Symbol 39 (4):296-309.
    Despite the exceptional performance of large language models (LLMs) on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the interpretation of novel metaphors. Here we assessed the ability of GPT-4, a state-of-the-art large language model, to provide natural-language interpretations of metaphors from a recent AI benchmark (the Fig-QA dataset) and of novel literary metaphors drawn from (...)
  15. Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, (...)
    3 citations
  16. Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - 2024 - Journal of Medical Ethics 50 (9):653-654.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making (...)
    2 citations
  17. (1 other version) Large language models and their role in modern scientific discoveries. В. Ю Филимонов - 2024 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 1:42-57.
    Today, large language models are very powerful informational and analytical tools that significantly accelerate most existing methods for processing information. Scientific information is of particular importance in this capacity, and it increasingly draws on the power of large language models. This interaction between science and qualitatively new opportunities for working with information leads to new and unique scientific discoveries in great quantitative diversity. There is an acceleration of scientific research, a reduction (...)
  18. Could a large language model be conscious? David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
    29 citations
  19. (1 other version) Creating a large language model of a philosopher. Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to (...)
    16 citations
  20. Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with (...)
    2 citations
  21. Do Large Language Models Understand? 천현득 - 2023 - CHUL HAK SA SANG - Journal of Philosophical Ideas 90 (90):75-105.
    This article examines whether generative language models such as ChatGPT possess understanding. After briefly introducing the workings of the Transformer architecture that forms the backbone of ChatGPT, I distinguish properly linguistic understanding from cognitive understanding, and further show that cognitive understanding can be divided into epistemic and semantic understanding. On the basis of these distinctions, I argue that large language models can have linguistic understanding but lack good cognitive understanding. In particular, I criticize the argument of Coelho Mollo and Millière (2023), who claim on teleosemantic grounds that large language models can have semantic understanding.
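    For reference, the Transformer operation this abstract introduces is scaled dot-product attention. In the standard formulation (general background, not drawn from this paper), with query, key, and value matrices Q, K, V and key dimension d_k:

        \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V

    Each output position is a weighted average of the value vectors, with weights given by the softmax-normalized query-key similarities.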
  22. Large Language Models: A Historical and Sociocultural Perspective. Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that (...)
  23. Applicability of large language models and generative models for legal case judgement summarization. Aniket Deroy, Kripabandhu Ghosh & Saptarshi Ghosh - forthcoming - Artificial Intelligence and Law:1-44.
    Automatic summarization of legal case judgements, which are known to be long and complex, has traditionally been tried via extractive summarization models. In recent years, generative models including abstractive summarization models and Large language models (LLMs) have gained huge popularity. In this paper, we explore the applicability of such models for legal case judgement summarization. We applied various domain-specific abstractive summarization models and general-domain LLMs as well as extractive summarization models over (...)
    1 citation
  24. Large language models have divergent effects on self-perceptions of mind and the attributes considered uniquely human. Oliver L. Jacobs, Farid Pazhoohi & Alan Kingstone - 2024 - Consciousness and Cognition 124 (C):103733.
  25. Imitation and Large Language Models. Éloïse Boisseau - 2024 - Minds and Machines 34 (4):1-24.
    The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and (...)
  26. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko & Alessandro Lenci - 2023 - Cognitive Science 47 (11):e13386.
    Word co‐occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs’ semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient (...)
    1 citation
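    The likelihood comparison described above can be illustrated with a minimal sketch (not the authors' code; it assumes the Hugging Face transformers library and uses the small GPT-2 checkpoint and invented example sentences as stand-ins for the models and materials actually tested):

        # Compare how a causal LM scores a plausible vs. an implausible
        # agent-patient sentence via total log-probability.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        def sentence_log_prob(text: str) -> float:
            # Total log P(sentence) = sum over positions of log P(token | prefix).
            ids = tokenizer(text, return_tensors="pt").input_ids
            with torch.no_grad():
                out = model(ids, labels=ids)  # loss = mean cross-entropy per predicted token
            return -out.loss.item() * (ids.shape[1] - 1)

        plausible = "The teacher bought the laptop."
        implausible = "The laptop bought the teacher."
        # A model with generalized event knowledge should prefer the plausible description.
        print(sentence_log_prob(plausible) > sentence_log_prob(implausible))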
  27. Machine Advisors: Integrating Large Language Models into Democratic Assemblies. Petr Špecián - forthcoming - Social Epistemology.
    Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs’ benefits and drawbacks against (...)
  28. Ontologies, arguments, and Large-Language Models. John Beverley, Francesco Franda, Hedi Karray, Dan Maxwell, Carter Benson & Barry Smith - 2024 - In Ítalo Oliveira (ed.), Joint Ontologies Workshops (JOWO). Twente, Netherlands: CEUR. pp. 1-9.
    The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and so forth. Accordingly, there is a growing interest in the use of ontologies and knowledge graphs to make LLMs more trustworthy. This rests on the long history of ontologies and knowledge graphs in constructing human-comprehensible justification for model outputs as well as traceability concerning the (...)
  29. Easy-read and large language models: on the ethical dimensions of LLM-based text simplification. Nils Freyer, Hendrik Kempt & Lars Klöser - 2024 - Ethics and Information Technology 26 (3):1-10.
    The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation or entertainment and restricts their ability to live life (...)
  30. Large Language Models: Assessment for Singularity. R. Ishizaki & Mahito Sugiyama - forthcoming - AI and Society.
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a (...)
  31. The rise of large language models: challenges for Critical Discourse Studies. Mathew Gillings, Tobias Kohn & Gerlinde Mautner - forthcoming - Critical Discourse Studies.
    Large language models (LLMs) such as ChatGPT are opening up new areas of research and teaching potential across a variety of domains. The purpose of the present conceptual paper is to map this new terrain from the point of view of Critical Discourse Studies (CDS). We demonstrate that the usage of LLMs raises concerns that definitely fall within the remit of CDS; among them, power and inequality. After an initial explanation of LLMs, we focus on three key (...)
  32. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and (...)
  33. Large language models and the patterns of human language use. Christoph Durt & Thomas Fuchs - 2024 - In Marco Cavallaro & Nicolas de Warren (eds.), Phenomenologies of the digital age: the virtual, the fictional, the magical. New York, NY: Routledge.
  34. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI. Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  35. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution. Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- (...)
  36. Personhood and AI: Why large language models don’t understand us. Jacob Browning - 2023 - AI and Society 39 (5):2499-2506.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. (...)
    3 citations
  37. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed (...)
    1 citation
  38. Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy. Niina Zuber & Jan Gogoll - 2024 - Philosophies 9 (1):13.
    In the era of generative AI and specifically large language models (LLMs), exemplified by ChatGPT, the intersection of artificial intelligence and human reasoning has become a focal point of global attention. Unlike conventional search engines, LLMs go beyond mere information retrieval, entering into the realm of discourse culture. Their outputs mimic well-considered, independent opinions or statements of facts, presenting a pretense of wisdom. This paper explores the potential transformative impact of LLMs on democratic societies. It delves into (...)
  39. The use of large language models as scaffolds for proleptic reasoning. Olya Kudina, Brian Ballsun-Stanton & Mark Alfano - forthcoming - Asian Journal of Philosophy.
    This paper examines the potential educational uses of chat-based Large Language Models (LLMs), moving past initial hype and skepticism. Although LLM outputs often evoke fascination and resemble human writing, they are unpredictable and must be used with discernment. Several metaphors—like calculators, cars, and drunk tutors—highlight distinct models for student interactions with LLMs, which we explore in the paper. We suggest that LLMs hold potential for students’ learning by fostering proleptic reasoning through scaffolding, i.e., presenting (...)
  40. Introspective Capabilities in Large Language Models. Robert Long - 2023 - Journal of Consciousness Studies 30 (9):143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' (...)
    1 citation
  41. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, (...)
    2 citations
  42. The Epistemological Danger of Large Language Models. Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
    1 citation
  43. Evaluating large language models’ ability to generate interpretive arguments. Zaid Marji & John Licato - 2024 - Argument and Computation:1-51.
    In natural language understanding, a crucial goal is correctly interpreting open-textured phrases. In practice, disagreements over the meanings of open-textured phrases are often resolved through the generation and evaluation of interpretive arguments, arguments designed to support or attack a specific interpretation of an expression within a document. In this paper, we discuss some of our work towards the goal of automatically generating and evaluating interpretive arguments. We have curated a set of rules from the code of ethics of various (...)
  44. On the creativity of large language models. Giorgio Franceschelli & Mirco Musolesi - forthcoming - AI and Society:1-11.
    Large language models (LLMs) are revolutionizing several areas of Artificial Intelligence. One of the most remarkable applications is creative writing, e.g., poetry or storytelling: the generated outputs are often of astonishing quality. However, a natural question arises: can LLMs really be considered creative? In this article, we first analyze the development of LLMs under the lens of creativity theories, investigating the key open questions and challenges. In particular, we focus our discussion on the dimensions of value, novelty, (...)
  45. Causality-inspired legal provision selection with large language model-based explanation. Zheng Wang, Yuanzhi Ding, Caiyuan Wu, Yuzhen Guo & Wei Zhou - forthcoming - Artificial Intelligence and Law:1-25.
    Accurate identification of legal provisions is crucial for adjudicating criminal cases, but the complexity and volume of legal texts pose significant challenges for legal professionals. This paper addresses these challenges by introducing a novel legal provision selection framework that transforms the task from a simple classification problem into a sophisticated system combining semantic matching with causal relationship learning. Leveraging large language models, our approach enhances the understanding and interpretation of legal language, by extracting nuanced features from (...)
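    The semantic-matching component of such a framework can be illustrated with a minimal sketch (a hypothetical reconstruction, not the authors' system; it assumes the sentence-transformers library, and the provision texts and case facts are invented placeholders):

        # Rank candidate legal provisions by embedding similarity to case facts.
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")

        provisions = [  # hypothetical placeholder provisions
            "Whoever steals public or private property shall be liable ...",
            "Whoever obtains the property of another by fraud shall be liable ...",
            "Whoever intentionally injures another person shall be liable ...",
        ]
        case_facts = ("The defendant obtained the victim's savings by "
                      "fabricating a fictitious investment scheme.")

        prov_emb = model.encode(provisions, convert_to_tensor=True)
        case_emb = model.encode(case_facts, convert_to_tensor=True)

        scores = util.cos_sim(case_emb, prov_emb)[0]  # cosine similarity per provision
        best = int(scores.argmax())
        print(provisions[best])  # expected: the fraud provision

    In the paper's framing, this retrieval step would be combined with causal relationship learning and LLM-generated explanations; the sketch covers only the matching half.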
  46. Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of (...)
    5 citations
  47. Augmenting research consent: should large language models (LLMs) be used for informed consent to clinical research? Jemima W. Allen, Owen Schaefer, Sebastian Porsdam Mann, Brian D. Earp & Dominic Wilkinson - forthcoming - Research Ethics.
    The integration of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI’s ChatGPT, into clinical research could significantly enhance the informed consent process. This paper critically examines the ethical implications of employing LLMs to facilitate consent in clinical research. LLMs could offer considerable benefits, such as improving participant understanding and engagement, broadening participants’ access to the relevant information for informed consent and increasing the efficiency of consent procedures. However, these theoretical advantages are accompanied by ethical risks, (...)
  48. Ontologies in the era of large language models – a perspective. Fabian Neuhaus - 2023 - Applied Ontology 18 (4):399-407.
    The potential of large language models (LLM) has captured the imagination of the public and researchers alike. In contrast to previous generations of machine learning models, LLMs are general-purpose tools, which can communicate with humans. In particular, they are able to define terms and answer factual questions based on some internally represented knowledge. Thus, LLMs support functionalities that are closely related to ontologies. In this perspective article, I will discuss the consequences of the advent of LLMs (...)
  49. Why Personalized Large Language Models Fail to Do What Ethics is All About. Sebastian Laacke & Charlotte Gauckler - 2023 - American Journal of Bioethics 23 (10):60-63.
    Porsdam Mann and colleagues provide an overview of opportunities and risks associated with the use of personalized large language models (LLMs) for text production in (bio)ethics (Porsdam Mann et al...
    1 citation
  50. The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing. David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI) large language models (LLMs), such as OpenAI’s ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...
    1 citation
1–50 of 964