Results for 'weak AI'

973 found
  1. Strong and weak AI narratives: an analytical framework.Paolo Bory, Simone Natale & Christian Katzenbach - forthcoming - AI and Society:1-11.
    The current debate on artificial intelligence (AI) tends to associate AI imaginaries with the vision of a future technology capable of emulating or surpassing human intelligence. This article advocates for a more nuanced analysis of AI imaginaries, distinguishing “strong AI narratives,” i.e., narratives that envision futurable AI technologies that are virtually indistinguishable from humans, from "weak" AI narratives, i.e., narratives that discuss and make sense of the functioning and implications of existing AI technologies. Drawing on the academic literature on (...)
  2. Contre toute attente, autour de Gérard Bensussan. Suivi de, Ostalgérie.Andrea Potestà, Aïcha Liviana Messina & Gérard Bensussan (eds.) - 2021 - Paris: Classiques Garnier.
    Contre toute attente is the manifesto of a thought that refuses passivity and can only endlessly seek the "weak force" of hope for a different, spectral, melancholy future. This book brings together several works around Gérard Bensussan which retrace the paths of his philosophical reflection.
  3. The Problem of Distinction Between ‘weak AI’ and ‘strong AI’. 김진석 - 2017 - Journal of the Society of Philosophical Studies 117:111-137.
    When discussing artificial intelligence, people commonly employ the distinction between 'weak' and 'strong' AI. This distinction is used not only to differentiate AI systems from one another, but also to distinguish AI from human beings. It can be examined through three types of perspectives on AI: the first is a theory that distinguishes the creative human mind from AI; the second takes comprehensive human capacities as the criterion of strong intelligence; the third takes a species superior to humans as the criterion and goal of strong AI. This study examines the adequacy and ambiguity of the propositions or claims these perspectives presuppose. On the other hand, however, this study (...)
  4. Weak Strong AI: An elaboration of the English Reply to the Chinese Room.Ronald L. Chrisley - unknown
    Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to Searle. (...)
     
  5. Virtuous AI?Mariusz Tabaczek - 2024 - Forum Philosophicum: International Journal for Philosophy 29 (2):371-389.
    This paper offers an Aristotelian-Thomistic response to the question whether AI is capable of developing virtue. On the one hand, it could be argued that this is possible on the assumption of the minimalist (thin) definition of virtue as a stable (permanent) and reliable disposition toward an actualization of a given power in the agent (in various circumstances), which effects that agent’s growth in perfection. On the other hand, a closer inquiry into Aquinas’s understanding of both moral and intellectual virtues, (...)
  6. (1 other version)Did Searle attack strong strong or weak strong AI.Aaron Sloman - 1986 - In A. G. Cohn & R. J. Thomas, Artificial Intelligence and its Applications. John Wiley and Sons.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong AI. This paper (...)
    2 citations
  7. Can AI be a subject like us? A Hegelian speculative-philosophical approach.Ermylos Plevrakis - 2024 - Discover Computing 27 (46).
    Recent breakthroughs in the field of artificial intelligence (AI) have sparked a wide public debate on the potentialities of AI, including the prospect to evolve into a subject comparable to humans. While scientists typically avoid directly addressing this question, philosophers usually tend to largely dismiss such a possibility. This article begins by examining the historical and systematic context favoring this inclination. However, it argues that the speculative philosophy of Georg Wilhelm Friedrich Hegel offers a different perspective. Through an exploration of (...)
    1 citation
  8. Medium AI and experimental science.Andre Kukla - 1994 - Philosophical Psychology 7 (4):493-501.
    It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium (...)
  9. Using AI and ML to optimize information discovery in under-utilized, Holocaust-related records.Kirsten Strigel Carter, Abby Gondek, William Underwood, Teddy Randby & Richard Marciano - 2022 - AI and Society 37 (3):837-858.
    Digital cultural assets are often thought to exist in separate spheres based on their two principal points of origin: digitized and born digital. Increasingly, advances in digital curation are blurring this dichotomy, by introducing so-called “collections as data,” which regardless of their origination make cultural assets more amenable to the application of new computational tools and methodologies. This paper brings together archivists, scholars, and technologists to demonstrate computational treatments of digital cultural assets using Artificial Intelligence and Machine Learning techniques that (...)
  10. AI-Aided Moral Enhancement – Exploring Opportunities and Challenges.Andrea Berber - forthcoming - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    In this chapter, I introduce three different types of AI-based moral enhancement proposals discussed in the literature – substitutive enhancement, value-driven enhancement, and value-open moral enhancement. I analyse them based on the following criteria: effectiveness, examining whether they bring about tangible moral changes; autonomy, assessing whether they infringe on human autonomy and agency; and developmental impact, considering whether they hinder the development of natural moral skills. This analysis demonstrates that no single approach to AI enhancement can satisfy all proposed criteria, (...)
  11. Superhuman AI.Gabriele Gramelsberger - 2023 - Philosophisches Jahrbuch 130 (2):81-91.
    The modern program of operationalizing the mind, from Descartes to Kant, in the form of the externalization of human mind functions in logic and calculations, and its continuation in the program of formalization from the middle of the 19th century with Boole, Peirce and Turing, have led to the form of rationality that has become machine rationality: the digital computer as a logical-mathematical machine and algorithms as machine-rational interpretations of human thinking in the form of problem solving and decision making. (...)
  12. Embodied AI beyond Embodied Cognition and Enactivism.Riccardo Manzotti - 2019 - Philosophies 4 (3):39.
    Over the last three decades, the rise of embodied cognition (EC) articulated in various schools (or versions) of embodied, embedded, extended and enacted cognition (Gallagher’s 4E) has offered AI a way out of traditional computationalism—an approach (or an understanding) loosely referred to as embodied AI. This view has split into various branches ranging from a weak form on the brink of functionalism (loosely represented by Clarks’ parity principle) to a strong form (often corresponding to autopoietic-friendly enactivism) suggesting that body−world (...)
    4 citations
  13. Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”.Blake H. Dournaee - 2010 - Minds and Machines 20 (2):303-309.
    In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers's hard problem (We will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of (...)
  14. Representation, Analytic Pragmatism and AI.Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli, Computing Nature. pp. 161-169.
    Our contribution aims at individuating a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
    1 citation
  15. Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy.Andrew McStay - 2020 - Big Data and Society 7 (1).
    By the early 2020s, emotional artificial intelligence will become increasingly present in everyday objects and practices such as assistants, cars, games, mobile phones, wearables, toys, marketing, insurance, policing, education and border controls. There is also keen interest in using these technologies to regulate and optimize the emotional experiences of spaces, such as workplaces, hospitals, prisons, classrooms, travel infrastructures, restaurants, retail and chain stores. Developers frequently claim that their applications do not identify people. Taking the claim at face value, this paper (...)
    8 citations
  16. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
    2 citations
  17. Integrating AI ethics in wildlife conservation AI systems in South Africa: a review, challenges, and future research agenda.Irene Nandutu, Marcellin Atemkeng & Patrice Okouma - 2023 - AI and Society 38 (1):245-257.
    With the increased use of Artificial Intelligence (AI) in wildlife conservation, issues around whether AI-based monitoring tools in wildlife conservation comply with standards regarding AI Ethics are on the rise. This review aims to summarise current debates and identify gaps as well as suggest future research by investigating (1) current AI Ethics and AI Ethics issues in wildlife conservation, (2) Initiatives Stakeholders in AI for wildlife conservation should consider integrating AI Ethics in wildlife conservation. We find that the existing literature (...)
    2 citations
  18. The Philosophy of AI and Its Critique.James H. Fetzer - 2003 - In Luciano Floridi, The Blackwell guide to the philosophy of computing and information. Blackwell. pp. 117–134.
    The prelims comprise: Historical Background The Turing Test Physical Machines Symbol Systems The Chinese Room Weak AI Strong AI Folk Psychology Eliminative Materialism Processing Syntax Semantic Engines The Language of Thought Formal Systems Mental Propensities The Frame Problem Minds and Brains Semiotic Systems Critical Differences The Hermeneutic Critique Conventions and Communication Other Minds Intelligent Machines.
    1 citation
  19. Introduction to Special Section on Virtue in the Loop: Virtue Ethics and Military AI.Jonathan Askonas & Paul Scherz - 2024 - Journal of Military Ethics 23 (3):245-250.
    This essay introduces this special issue on virtue ethics in relation to military AI. It describes the current situation of military AI ethics as following that of AI ethics in general, caught between consequentialism and deontology. Virtue ethics serves as an alternative that can address some of the weaknesses of these dominant forms of ethics. The essay describes how the articles in the issue exemplify the value of virtue-related approaches for these questions, before ending with thoughts for further research.
  20. Logic and AI in China: An Introduction.Fenrong Liu & Kaile Su - 2013 - Minds and Machines 23 (1):1-4.
    The year 2012 has witnessed worldwide celebrations of Alan Turing’s 100th birthday. A great number of conferences and workshops were organized by logicians, computer scientists and researchers in AI, showing the continued flourishing of computer science, and the fruitful interfaces between logic and computer science. Logic is no longer just the concept that Frege had about one hundred years ago, let alone that of Aristotle twenty centuries before. One of the prominent features of contemporary logic is its interdisciplinary character, connecting (...)
  21. Idealism, realism, pragmatism: three modes of theorising within secular AI ethics.Rune Nyrup & Beba Cibralic - 2024 - In Barry Solaiman & I. Glenn Cohen, Research Handbook on Health, AI and the Law. Edward Elgar Publishing. pp. 203-218.
    Healthcare applications of AI have the potential to produce great benefit, but also come with significant ethical risks. This has brought ethics to the forefront of academic, policy and public debates about AI in healthcare. To help navigate these debates, we distinguish three general modes of ethical theorizing in contemporary secular AI ethics: (1) idealism, which seeks to articulate moral ideals that can be applied to concrete problems; (2) realism, which focuses on understanding complex social realities that underpin ethical problems; (...)
     
  22. AI-informed acting: an Arendtian perspective.Daniil Koloskov - 2024 - Phenomenology and the Cognitive Sciences 23 (5):1171-1188.
    In this paper, I will investigate the possible impact of weak artificial intelligence (more specifically, I will concentrate on deep learning) on human capability of action. For this goal, I will first address Arendt’s philosophy of action, which seeks to emphasize the distinguishing elements of action that set it apart from other forms of human activity. According to Arendt, action should be conceived as praxis, an activity that has its goal in its own very performance. The authentic meaning of (...)
  23. AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia.Junhua Zhu - forthcoming - AI and Society:1-14.
    Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on governance of AI. This article, therefore, provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. This article has three main objectives: to identify the most discussed ethical issues of AI in Chinese academia and those being left out; (...)
  24. Against “Democratizing AI”.Johannes Himmelreich - 2023 - AI and Society 38 (4):1333-1346.
    This paper argues against the call to democratize artificial intelligence (AI). Several authors demand to reap purported benefits that rest in direct and broad participation: In the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims (...)
    7 citations
  25. Assessing the Strengths and Weaknesses of Large Language Models.Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.
    The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature and the popular media. This discussion ranges from hyperbolic claims that attribute general intelligence and sentience to LLMs, to the skeptical view that these devices are no more than “stochastic parrots”. I present an overview of some of the weak arguments that have been presented against LLMs, and I consider several of (...)
    8 citations
  26. How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI text generator.Martin Hinton & Jean H. M. Wagemans - 2023 - Argument and Computation 14 (1):59-74.
    In this paper, we use a pseudo-algorithmic procedure for assessing an AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) in evaluating the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific (...)
    3 citations
  27. Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely (...)
    9 citations
  28. We have to talk about emotional AI and crime.Lena Podoletz - 2023 - AI and Society 38 (3):1067-1082.
    Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources, such as facial (micro)-movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated and so are the underlying scientific methods that serve as the basis for many such technologies. In this article I will engage with this new technology, and with the debates and literature that surround it. Working at the intersection of (...)
    2 citations
  29. Introduction to Special Section on Virtue in the Loop: Virtue Ethics and Military AI.Jonathan Askonas & Paul Scherz - 2025 - Journal of Military Ethics 23 (3):245-250.
    This essay introduces this special issue on virtue ethics in relation to military AI. It describes the current situation of military AI ethics as following that of AI ethics in general, caught between consequentialism and deontology. Virtue ethics serves as an alternative that can address some of the weaknesses of these dominant forms of ethics. The essay describes how the articles in the issue exemplify the value of virtue-related approaches for these questions, before ending with thoughts for further research.
  30. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis.Youjin Kong - 2022 - Facct: Proceedings of the Acm Conference on Fairness, Accountability, and Transparency:485-494.
    A growing number of studies on fairness in artificial intelligence (AI) use the notion of intersectionality to measure AI fairness. Most of these studies take intersectional fairness to be a matter of statistical parity among intersectional subgroups: an AI algorithm is “intersectionally fair” if the probability of the outcome is roughly the same across all subgroups defined by different combinations of the protected attributes. This paper identifies and examines three fundamental problems with this dominant interpretation of intersectional fairness in AI. (...)
  31. (Un)Fairness in AI: An Intersectional Feminist Analysis.Youjin Kong - 2022 - Blog of the American Philosophical Association, Women in Philosophy Series.
    Contents: Racial, Gender, and Intersectional Biases in AI; Dominant View of Intersectional Fairness in the AI Literature; Three Fundamental Problems with the Dominant View: 1. Overemphasis on Intersections of Attributes, 2. Dilemma between Infinite Regress and Fairness Gerrymandering, 3. Narrow Understanding of Fairness as Parity; Rethinking AI Fairness: from Weak to Strong Fairness.
  32. Biden’s Executive Order on AI and the E.U.’s AI Act: A Comparative Computer-Ethical Analysis.Manuel Wörsdörfer - 2024 - Philosophy and Technology 37 (3):1-27.
    AI (ethics) initiatives are essential in bringing about fairer, safer, and more trustworthy AI systems. Yet, they also come with various drawbacks, including a lack of effective governance mechanisms, window-dressing, and ‘ethics shopping.’ To address those concerns, hard laws are necessary, and more and more countries are moving in this direction. Two of the most notable recent legislations include the Biden Administration’s Executive Order (EO) on AI and the E.U.’s AI Act (AIA). While several scholarly articles have evaluated the strengths (...)
  33. (1 other version)In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
    16 citations
  34. The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence.Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
    1 citation
  35. Evaluating approaches for reducing catastrophic risks from AI.Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic – or even existential – risks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from AI. The quality of such approaches can be assessed by (...)
  36. Rethinking The Replacement of Physicians with AI.Hanhui Xu & Kyle Michael James Shuttleworth - 2025 - American Philosophical Quarterly 62 (1):17-31.
    The application of AI in healthcare has dramatically changed the practice of medicine. In particular, AI has been implemented in a variety of roles that previously required human physicians. Due to AI's ability to outperform humans in these roles, the concern has been raised that AI will completely replace human physicians in the future. In this paper, it is argued that human physician's ability to embellish the truth is necessary to prevent injury or grief to patients, or to protect patients’ (...)
  37. Architectural Approach to Design of Emotional Intelligent Systems.Александра Викторовна Шиллер & Олег Эдуардович Петруня - 2021 - Russian Journal of Philosophical Sciences 64 (1):102-115.
    Over the past decades, due to the course towards digitalization of all areas of life, interest in modeling and creating intelligent systems has increased significantly. However, there are now a stagnation in the industry, a lack of attention to analog and bionic approaches as alternatives to digital, numerous speculations on “neuro” issues for commercial and other purposes, and an increase in social and environmental risks. The article provides an overview of the development of artificial intelligence (AI) conceptions toward increasing the (...)
  38. The irresponsibility of not using AI in the military.M. Postma, E. O. Postma, R. H. A. Lindelauf & H. W. Meerveld - 2023 - Ethics and Information Technology 25 (1):1-6.
    The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about (...)
    1 citation
  39. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
    1 citation
  40. Uncovering the gap: challenging the agential nature of AI responsibility problems.Joan Llorca Albareda - 2025 - AI and Ethics:1-14.
    In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunctive between the two problems. Either we reduce individual responsibility gaps to the many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility (...)
  41. Consciousness as computation: A defense of strong AI based on quantum-state functionalism.R. Michael Perry - 2006 - In Charles Tandy, Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences of strong AI are (...)
  42.  37
    Exploring Factors of the Willingness to Accept AI-Assisted Learning Environments: An Empirical Investigation Based on the UTAUT Model and Perceived Risk Theory.Wentao Wu, Ben Zhang, Shuting Li & Hehai Liu - 2022 - Frontiers in Psychology 13.
    Artificial intelligence technology has been widely applied in many fields. AI-assisted learning environments have been implemented in classrooms to facilitate the innovation of pedagogical models. However, college students' willingness to accept AI-assisted learning environments has received little attention. Exploring the factors that influence college students' willingness to use AI can promote AI technology application in higher education. Based on the Unified Theory of Acceptance and Use of Technology and the theory of perceived risk, this study identified six factors that influence students' (...)
  43.  47
    Islamization of Knowledge: A Comparative Analysis of the Conceptions of al-Attas and al-Fārūqī.Rosnani Hashim & Imron Rossidy - 2000 - Intellectual Discourse 8 (1).
    There has been a lot of discussion and debate on the issue of Islamization of Contemporary Knowledge among Muslim intellectuals. Two Muslim thinkers, namely al-Attas and al-Fārūqī, were foremost in the attempt to conceptualise the problem of the Muslim Ummah and the issue of Islamization of knowledge as an epistemological and socio-political solution. This article aims to examine, compare and analyse the ideas of both scholars with respect to the various interpretations of the concept of Islamization of knowledge, their definition (...)
    2 citations
  44. (1 other version)Minds, brains, and programs.John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-57.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI. According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and (...)
    1886 citations
  45.  36
    Artificial Intelligence and Mind-reading Machines— Towards a Future Techno-Panoptic Singularity.Aura Elena Schussler - 2020 - Postmodern Openings 11 (4):334-346.
    The present study focuses on the situation in which mind-reading machines will be connected, initially through the incorporation of weak AI and later in conjunction with strong AI. Over time, such machines will no longer play a simple medical role, as is the case at present, but one of surveillance and monitoring of individuals, an aspect that is leading us towards a future techno-panoptic singularity. Thus, the general objective of this paper raises the problem of the ontological stability of human (...)
  46. Do Large Language Models Hallucinate Electric Fata Morganas?Kristina Šekrst - forthcoming - Journal of Consciousness Studies.
    This paper explores the intersection of AI hallucinations and the question of AI consciousness, examining whether the erroneous outputs generated by large language models (LLMs) could be mistaken for signs of emergent intelligence. AI hallucinations, which are false or unverifiable statements produced by LLMs, raise significant philosophical and ethical concerns. While these hallucinations may appear as data anomalies, they challenge our ability to discern whether LLMs are merely sophisticated simulators of intelligence or could develop genuine cognitive processes. By analyzing the (...)
  47.  31
    Artificial Intelligence: The Opacity of Concepts in the Uncertainty of Realities.Александр Иванович Агеев - 2022 - Russian Journal of Philosophical Sciences 65 (1):27-43.
    The development of the systems of artificial intelligence (AI) and digital transformation in general lead to the formation of a multitude of autonomous agents of artificial and mixed genealogy, as well as to complex structures in the information and regulatory environment with many opportunities and pathologies and a growing level of uncertainty in making managerial decisions. The situation is complicated by the continuing plurality of understandings of the essence of AI systems. The modern expanded understanding of AI goes back to ideas (...)
  48.  61
    Statistically responsible artificial intelligences.Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
    3 citations
  49. (1 other version)Chinese room argument.Larry Hauser - 2001 - Internet Encyclopedia of Philosophy.
    The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle (...)
     
    3 citations
  50.  22
    Theory languages in designing artificial intelligence.Pertti Saariluoma & Antero Karvonen - 2024 - AI and Society 39 (5):2249-2258.
    The foundations of AI design discourse are worth analyzing. Here, attention is paid to the nature of theory languages used in designing new AI technologies because the limits of these languages can clarify some fundamental questions in the development of AI. We discuss three types of theory language used in designing AI products: formal, computational, and natural. Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics. They are context- and practically content-free. Computational languages (...)