About this topic
Summary: See the category "Philosophy of Artificial Intelligence".
Key works: See the category "Philosophy of Artificial Intelligence" for key works.
Introductions: See the category "Philosophy of Artificial Intelligence" for introductions.

Contents
148 found
1 — 50 / 148
  1. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  2. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era.Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of advanced foundation (...)
  3. LLMs Can Never Be Ideally Rational.Simon Goldstein - manuscript
    LLMs have dramatically improved in capabilities in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words can come next. LLMs produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are prompted to make probabilistic predictions (...)
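    A minimal sketch of what "next-word predictor" means in practice, assuming the Hugging Face transformers library and GPT-2 weights (illustrative choices, not part of Goldstein's paper): the model maps a text prefix to a probability distribution over candidate next tokens, and its outputs reflect that distribution.

```python
# Sketch only: GPT-2 via Hugging Face transformers is an assumed stand-in for "an LLM".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The probability that it will rain tomorrow is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # the next-word probability distribution

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```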
  4. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  5. AI-aesthetics and the artificial author.Emanuele Arielli - forthcoming - Proceedings of the European Society for Aesthetics.
    ABSTRACT. Consider this scenario: you discover that an artwork you greatly admire, or a captivating novel that deeply moved you, is in fact the product of artificial intelligence, not a human’s work. Would your aesthetic judgment shift? Would you perceive the work differently? If so, why? The advent of artificial intelligence (AI) in the realm of art has sparked numerous philosophical questions related to the authorship and artistic intent behind AI-generated works. This paper explores the debate between viewing AI as (...)
  6. “Even an AI could do that”.Emanuele Arielli - forthcoming - Http://Manovich.Net/Index.Php/Projects/Artificial-Aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design", by Lev Manovich and Emanuele Arielli. Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. You may be wondering how (...)
  7. Effort in Aesthetic Appreciation: from Avant-Garde to AI.Emanuele Arielli - forthcoming - Proceedings of the European Society of Aesthetics.
    This paper starts from the debates on whether the seemingly effortless creation of AI artworks, and by extension some Avant-Garde pieces, diminishes their artistic value. This leads to a broader inquiry into how effort, or the lack thereof, influences our perception of an artwork’s quality and significance. Traditionally, effort in art has been seen in two ways. On the one hand, a skilled artist’s work, which may appear effortless, is often valued for its apparent ease, reflecting genius or inspiration. On (...)
  8. Human Perception and The Artificial Gaze.Emanuele Arielli & Lev Manovich - forthcoming - In Emanuele Arielli & Lev Manovich (eds.), Artificial Aesthetics.
  9. Artificial Intelligence, Creativity, and the Precarity of Human Connection.Lindsay Brainard - forthcoming - Oxford Intersections: AI in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  10. Ethics of Artificial Intelligence.Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  11. The fetish of artificial intelligence. In response to Iason Gabriel’s “Towards a Theory of Justice for Artificial Intelligence”.Albert Efimov - forthcoming - Philosophy Science.
    The article presents grounds for defining the fetish of artificial intelligence (AI). It highlights the fundamental differences between AI and all previous technological innovations, which relate primarily to AI’s intrusion into the human cognitive sphere and to fundamentally new, uncontrolled consequences for society. Convincing arguments are presented that the leaders of the globalist project are the main beneficiaries of the AI fetish. This is clearly manifested in the works of philosophers close to big technology corporations and their mega-projects. It is (...)
  12. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
    1 citation
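    A minimal sketch of the kind of access ‘Inner Interpretability’ presupposes, as opposed to black-box prompting: reading a model's internal activations directly. GPT-2, the transformers library, and a forward hook on block 6 are illustrative assumptions, not the method of the Microsoft study or of this paper.

```python
# Sketch only: capture an intermediate layer's activations with a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

captured = {}

def hook(module, inputs, output):
    # For a GPT-2 block, output[0] is the hidden-state tensor it produced.
    captured["hidden"] = output[0].detach()

model.transformer.h[6].register_forward_hook(hook)  # hook an intermediate block

ids = tokenizer("Paris is the capital of", return_tensors="pt")
with torch.no_grad():
    model(**ids)

print(captured["hidden"].shape)  # (batch, tokens, hidden_size): internal states, not just outputs
```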
  13. ChatGPT, Education, and Understanding.Federica Isabella Malfatti - forthcoming - Social Epistemology.
    Is ChatGPT a good teacher? Or could it be? As understanding is widely acknowledged as one of the fundamental aims of education, the answer to these questions depends on whether ChatGPT fosters or could foster the acquisition of understanding in its users. In this paper, I tackle this issue in two steps. In the first part of the paper, I explore and analyze the set of skills and social-epistemic virtues that a teacher must exemplify to perform her job well – (...)
  14. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science.Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  15. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
    3 citations
  16. Why AI May Undermine Phronesis and What to Do about It.Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. A critic may ask: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge this objection but do not take it seriously. In this (...)
  17. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    6 citations
  18. ChatGPT and the Writing of Philosophical Essays.Markus Bohlmann & Annika M. Berger - 2024 - Teaching Philosophy 47 (2):233-253.
    With ChatGPT, text-generative AI systems have become important semantic agents. We conducted a series of experiments to learn what teachers’ conceptions of text-generative AI are in relation to philosophical texts. In our main experiment, using mixed methods, we had twenty-four high school students write philosophical essays, which we then randomized together with ChatGPT essays generated from the same prompt. We had ten prospective teachers assess these essays. They were able to tell whether an essay was written by an AI or a student with 78.7 percent (...)
  19. Can AI and humans genuinely communicate?Constant Bonard - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology proceeds in three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether (...)
  20. Are Artificial Neurons Neurons?Johannes Brinz - 2024 - In Yannic Kappes, Asya Passinsky, Julio De Rizzo & Benjamin Schnieder (eds.), Facets of Reality — Contemporary Debates. Beiträge der Österreichischen Ludwig Wittgenstein Gesellschaft / Contributions of the Austrian Ludwig Wittgenstein Society. Band / Vol. XXX. Austrian Ludwig Wittgenstein Society. pp. 98-107.
    The media often discuss artificial neural networks like ChatGPT or Amazon's Alexa, and policymakers grapple with regulating emerging technologies. However, the precise nature of "artificial neurons" remains ambiguous. Is this term to be understood merely metaphorically or does it refer to physical entities resembling biological neurons? While commonly understood as mathematical nodes in AI, the discussion extends deeper, particularly with the advent of neuromorphic engineering. This paper discusses whether artificial neurons are indeed neurons and what the potential implications are. Specifically, (...)
  21. Perspectives on Spiritual Intelligence.Marius Dorobantu & Fraser Watts (eds.) - 2024 - Routledge.
    The topic of intelligence involves questions that cut deep into ultimate concerns and human identity, and the study of intelligence is ideal ground for dialogue between science and religion. This volume investigates the notion of spiritual intelligence (SI) from a variety of perspectives, bringing together contributions from theology, philosophy, artificial intelligence, computer science, linguistics, psychology, biology, and cognitive science. It considers a definition of SI as "processing things differently, not processing different things" and aims to describe SI in naturalistic terms. (...)
  22. Affective Artificial Agents as sui generis Affective Artifacts.Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    3 citations
  23. Review of the panel discussion “Philosophical and Ethical Analysis of the Concepts of Death and Human Existence in the Context of Cybernetic Immortality”, Samara, 29.03.2024.Oleg Gurov - 2024 - Artificial Societies 19 (2).
    This publication constitutes a comprehensive account of the panel discussion entitled “Philosophical and Ethical Analysis of the Concepts of Death and Human Existence in the Context of Cybernetic Immortality” which transpired within the confines of the international scientific symposium “The Seventh Lemovsky Readings” held in Samara from March 28th to 30th, 2024. The aforementioned panel discussion, which congregated scores of erudite scholars representing preeminent research institutions across the Russian Federation, emerged as one of the cardinal events of the conference. Eminent (...)
  24. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes?Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  25. The Simulation Hypothesis, Social Knowledge, and a Meaningful Life.Grace Helton - 2024 - Oxford Studies in Philosophy of Mind 4:447-60.
    In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs, and that our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: the former claim helps forestall a concern that if objects in the simulation are not genuine (and so not knowable), then life in the (...)
    1 citation
  26. Chess AI does not know chess - The death of Type B strategy and its philosophical implications.Spyridon Kakos - 2024 - Harmonia Philosophica Articles.
    Chess was one of the first domains of human thinking to be conquered by computers. From Deep Blue's historic win against chess champion Garry Kasparov until today, computers have completely dominated the world of chess, leaving no room for question as to who is king in this sport. However, the better computers become at chess, the more obvious their basic disadvantage becomes: Even though they can defeat any human in chess and play phenomenally great and intuitive (...)
  27. The marriage of astrology and AI: A model of alignment with human values and intentions.Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models has helped to quantify effective astrological information. As AI learning and intelligence grow, a major concern is maintaining its alignment with human values and intentions. Astrology has (...)
  28. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
    2 citations
  29. Artificial Intelligence and an Anthropological Ethics of Work: Implications on the Social Teaching of the Church.Justin Nnaemeka Onyeukaziri - 2024 - Religions 15 (5):623.
    This paper contends that the ethics of work ought to be anthropological, and that artificial intelligence (AI) research and development, which is the focus of work today, should likewise be anthropological, that is, human-centered. It discusses the philosophical and theological implications of the development of AI research for the intrinsic nature of work and the nature of the human person. AI research and the implications of its development and advancement, being a relatively new phenomenon, have not been comprehensively (...)
  30. Understanding with Toy Surrogate Models in Machine Learning.Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
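    A minimal sketch of the toy-surrogate idea described above, under illustrative assumptions (scikit-learn, a random forest standing in for the opaque model, a built-in dataset): a shallow decision tree is trained to mimic the opaque model's predictions, giving a small, globally readable approximation.

```python
# Sketch only: a sparse decision tree as a global surrogate for an "opaque" model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Train the toy surrogate on the opaque model's predictions, not the true labels,
# so the tree summarizes how the opaque model behaves globally.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque.predict(X))

fidelity = (surrogate.predict(X) == opaque.predict(X)).mean()
print(f"fidelity to the opaque model: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```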
  31. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem.Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists who can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take (...)
  32. Are generics and negativity about social groups common on social media? A comparative analysis of Twitter (X) data.Uwe Peters & Ignacio Ojea Quintana - 2024 - Synthese 203 (6):1-22.
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested that generics (...)
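    A minimal sketch of an automatic classifier for social generics of the kind the abstract mentions; the toy labeled sentences and the TF-IDF plus logistic-regression pipeline are illustrative stand-ins, not the authors' actual model or data.

```python
# Sketch only: a toy generic-vs-non-generic sentence classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Philosophers love coffee",        # generic: unquantified generalization about a group
    "Teenagers are reckless",          # generic
    "Some philosophers love coffee",   # quantified, so not a generic
    "My neighbour is reckless",        # about an individual, not a group
]
train_labels = [1, 1, 0, 0]            # 1 = generic, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

new_sentences = ["Philosophers are reckless", "Some teenagers love coffee"]
print(dict(zip(new_sentences, clf.predict(new_sentences))))  # predicted labels
```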
  33. The Turing Test.Diane Proudfoot - 2024 - MIT Open Encyclopedia of Cognitive Science.
  34. Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think.Mir H. S. Quadri - 2024 - Arkinfo Notes.
    The rapid advancements in artificial intelligence, particularly in natural language processing, have brought to light a critical challenge, i.e., the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need for (...)
  35. Does the no miracles argument apply to AI?Darrell P. Rowbottom, William Peden & André Curtis-Trudel - 2024 - Synthese 203 (173):1-20.
    According to the standard no miracles argument, science’s predictive success is best explained by the approximate truth of its theories. In contemporary science, however, machine learning systems, such as AlphaFold2, are also remarkably predictively successful. Thus, we might ask what best explains such successes. Might these AIs accurately represent critical aspects of their targets in the world? And if so, does a variant of the no miracles argument apply to these AIs? We argue for an affirmative answer to these questions. (...)
  36. Illusory world skepticism.Susan Schneider - 2024 - Philosophy and Phenomenological Research 109 (3):1049-1057.
    I argue that, contra Chalmers, a skeptical scenario involving deception is a genuine possibility, even if he is correct that simulations are real. I call this new skeptical position “Illusory World Skepticism.” Illusory World Skepticism draws from the simulation argument, together with work in physics, astrobiology, and AI, to argue that we may indeed be in an illusory world—a universe-scale simulation orchestrated by a deceptive AI—the technophilosopher’s ultimate evil demon. In Section One I urge that Illusory World Skepticism is a bona fide skeptical possibility. (...)
    1 citation
  37. Intelligence, from Natural Origins to Artificial Frontiers - Human Intelligence vs. Artificial Intelligence.Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  38. Does ChatGPT have semantic understanding?Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
    5 citations
  39. Outlining a Protohistory of Artificial Intelligence and Music: from Antiquity to Nineteenth Century.Ivano Zanzarella - 2024 - Rivista di Studi Politici 1:99-141.
  40. On human centered artificial intelligence. [REVIEW]Gloria Andrada - 2023 - Metascience.
  41. CG-Art: An Aesthetic Discussion of the Relationship Between Artistic Creativity and Computation.Leonardo Arriagada - 2023 - Dissertation, University of Groningen
    This research examines how computer-generated art (CG-art) is reshaping the notion of artistic creativity in the current age of Artificial Intelligence (AI). In this context, this study proposes to refine the concept of CG-art by delimiting what an AI-generated artwork is. This new term has its roots in Cognitive Science, Aesthetics and Computer Science, emphasising their intersections. It involves the conjunction of three elements: (1) an autonomous AI-production of a new and surprising idea or artefact, (2) which passes an internal (...)
  42. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making?Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
    6 citations
  43. AI Assertion.Patrick Butlin & Emanuel Viebahn - 2023 - Ergo: An Open Access Journal of Philosophy.
    Modern generative AI systems have shown the capacity to produce remarkably fluent language, prompting debates both about their semantic understanding and, less prominently, about whether they can perform speech acts. This paper addresses the latter question, focusing on assertion. We argue that to be capable of assertion, an entity must meet two requirements: it must produce outputs with descriptive functions, and it must be capable of being sanctioned by agents with which it interacts. The second requirement arises from the nature (...)
    1 citation
  44. La scorciatoia.Nello Cristianini - 2023 - Bologna: Il Mulino.
    The Shortcut: How machines became intelligent without thinking in a human way. Our creatures are different from us and sometimes stronger; to live with them, we must learn to know them. They screen CVs, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and even a few more, but we cannot understand them or reason with them, because their behavior is (...)
  45. A MACRO-SHIFTED FUTURE: PREFERRED OR ACCIDENTALLY POSSIBLE IN THE CONTEXT OF ADVANCES IN ARTIFICIAL INTELLIGENCE SCIENCE AND TECHNOLOGY.Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига. Moscow: pp. 748.
    This article addresses topical aspects of the transformation of society, science, and the human being in the context of E. László’s work «Macroshift». The author offers his own attempt to characterize the attributes of the macroshift and then uses these attributes to operationalize further analysis, highlighting three essential elements: the world has arrived at technological indistinguishability between the natural and the artificial, and at machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  46. Наука и феномен человека в эпоху цивилизационного Макросдвига [Science and the Human Phenomenon in the Era of the Civilizational Macroshift].Albert Efimov (ed.) - 2023 - Moscow:
  47. What’s Stopping Us Achieving AGI?Albert Efimov - 2023 - Philosophy Now 3 (155):20-24.
    A. Efimov, D. Dubrovsky, and F. Matveev explore limitations on the development of AI arising from the need to understand language and to be embodied.
  48. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models, the harder (...)
  49. Chess and Antirealism.Samuel Kahn - 2023 - Asian Journal of Philosophy 2 (76):1-20.
    In this article, I make a novel argument for scientific antirealism. My argument is as follows: (1) the best human chess players would lose to the best computer chess programs; (2) if the best human chess players would lose to the best computer chess programs, then there is good reason to think that the best human chess players do not understand how to make winning moves; (3) if there is good reason to think that the best human chess players do (...)
  50. Humans in the meta-human era (Meta-philosophical analysis).Spyridon Kakos - 2023 - Harmonia Philosophica Papers.
    Humans are obsolete. In the post-ChatGPT era, artificial intelligence systems have replaced us in the last sectors of life that we thought were our personal kingdom. Yet, humans still have a place in this life. But they can find it only if they forget all those things that we believe make us unique. Only if we go back to doing nothing, can we truly be alive and meet our Self. Only if we stop thinking can we accept the Cosmos as (...)