About this topic
Summary, key works, and introductions: see the category "Philosophy of Artificial Intelligence."
Contents
197 found
1 — 50 / 197
  1. Revised: From Color, to Consciousness, toward Strong AI.Xinyuan Gu - manuscript
    This article cohesively discusses three topics, namely color and its perception, the yet-to-be-solved hard problem of consciousness, and the theoretical possibility of strong AI. First, the article restores color to the physical world by giving cross-species evidence. Second, it proposes a dual-field with function Q hypothesis (DFFQ) which might explain the ‘first-person point of view’ and so the hard problem of consciousness. Finally, the article discusses what DFFQ might bring to artificial intelligence and how it might allow strong (...)
  2. Emotional A.I. research: The importance of data-philosophizing to account for cultural differences.Manh-Tung Ho - manuscript
    The discourse on emotional A.I., i.e., technologies that read, classify, and identify human emotions, is currently dominated by Western ideas. Yet even A.I. researchers in the West acknowledge that there are cultural differences which, if neglected, could compound and affect A.I.'s accuracy.
  3. The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans.Griffin Pithie - manuscript
    The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI, along with why it is in the best interest of a superintelligent AI to work alongside humans rather than overpower them. Future papers dealing with the good-will theory will be published, discussing different talking points regarding possible or real objections to the theory.
  4. Can Word Models be World Models? Language as a Window onto the Conditional Structure of the World.Matthieu Queloz - manuscript
    LLMs are, in the first instance, models of the statistical distribution of tokens in the vast linguistic corpus they have been trained on. But their often surprising emergent capabilities raise the question of how much understanding of the extralinguistic world LLMs can glean from this statistical distribution of words alone. Here, I explore and evaluate the idea that the probability distribution of words in the public corpus offers a window onto the conditional structure of the world. To become a good (...)
  6. Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought.Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  7. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate progress (...)
  8. Chatbot Epistemology.Susan Schneider - manuscript
  9. Probable General Intelligence algorithm.Anton Venglovskiy - manuscript
    This article describes a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  10. Conditional and Modal Reasoning in Large Language Models.Wesley H. Holliday, Matthew Mandelkern & Cedegao Zhang - unknown - Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024).
    The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in AI and cognitive science. In this paper, we probe the extent to which twenty-nine LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., 'If Ann has a queen, then Bob has a jack') and epistemic modals (e.g., 'Ann might have an ace', 'Bob must have a king'). These inferences have been of (...)
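    The valid/fallacious contrast the paper probes can be made concrete with a classical truth-table check. A minimal Python sketch (illustrative only, not the authors' evaluation code; `implies` and `valid` are names introduced here):

    ```python
    from itertools import product

    def implies(p, q):
        # Material conditional: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    def valid(premises, conclusion, n_vars):
        """An inference is valid iff every truth assignment that makes all
        premises true also makes the conclusion true."""
        for vals in product([False, True], repeat=n_vars):
            if all(prem(*vals) for prem in premises) and not conclusion(*vals):
                return False
        return True

    # Modus ponens: from "if A then B" and "A", infer "B" -- logically correct.
    mp = valid([lambda a, b: implies(a, b), lambda a, b: a],
               lambda a, b: b, 2)

    # Affirming the consequent: from "if A then B" and "B", infer "A" -- fallacious.
    ac = valid([lambda a, b: implies(a, b), lambda a, b: b],
               lambda a, b: a, 2)

    print(mp, ac)  # True False
    ```

    The paper's epistemic modals ('might', 'must') go beyond this propositional setting, but the same valid-versus-fallacious distinction is what the LLMs are being tested against.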
  11. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  12. Explicit Legg-Hutter intelligence calculations which suggest non-Archimedean intelligence.Samuel Allen Alexander & Arthur Paul Pedersen - forthcoming - Lecture Notes in Computer Science.
    Are the real numbers rich enough to measure intelligence? We generalize a result of Alexander and Hutter about the so-called Legg-Hutter intelligence measures of reinforcement learning agents. Using the generalized result, we exhibit a paradox: in one particular version of the Legg-Hutter intelligence measure, certain agents all have intelligence 0, even though in a certain sense some of them outperform others. We show that this paradox disappears if we vary the Legg-Hutter intelligence measure to be hyperreal-valued rather than real-valued.
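    For orientation, a Legg-Hutter-style measure scores an agent by a complexity-weighted sum of its expected rewards across environments. A toy sketch under simplifying assumptions (the real weights use Kolmogorov complexity, which is uncomputable; the agents and environments here are placeholders, not the paper's formal constructions):

    ```python
    # Toy Legg-Hutter-style intelligence: sum over environments of
    # 2^(-k) * expected_reward, where k stands in for the environment's
    # Kolmogorov complexity K(env).

    def intelligence(agent_reward, env_complexities):
        return sum(2.0 ** (-k) * agent_reward(k) for k in env_complexities)

    envs = [1, 2, 3, 4]
    strong = intelligence(lambda k: 1.0, envs)  # earns reward 1 everywhere
    weak = intelligence(lambda k: 0.0, envs)    # earns reward 0 everywhere
    print(strong > weak)  # True
    ```

    The paper's paradox concerns variants of this measure on which distinct agents all score 0 despite performance differences; the hyperreal-valued fix replaces the real-valued sum above with one that can register infinitesimal differences.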
  13. “Even an AI could do that”.Emanuele Arielli - forthcoming - http://manovich.net/index.php/projects/artificial-aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design", Lev Manovich and Emanuele Arielli. Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. You may be wondering how (...)
  14. Will AI avoid exploitation? Artificial general intelligence and expected utility theory.Adam Bales - forthcoming - Philosophical Studies:1-20.
    A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure (...)
    3 citations
  15. Experience replay algorithms and the function of episodic memory.Alexandria Boyle - forthcoming - In Lynn Nadel & Sara Aronowitz (eds.), Space, Time, and Memory. Oxford University Press.
    Episodic memory is memory for past events. It’s characteristically associated with an experience of ‘mentally replaying’ one’s experiences in the mind’s eye. This biological phenomenon has inspired the development of several ‘experience replay’ algorithms in AI. In this chapter, I ask whether experience replay algorithms might shed light on a puzzle about episodic memory’s function: what does episodic memory contribute to the cognitive systems in which it is found? I argue that experience replay algorithms can serve as idealized models of (...)
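    The 'experience replay' algorithms the chapter takes as models typically rest on one simple data structure: a bounded buffer of past transitions that the agent samples at random during learning, 'replaying' old experience out of order. A generic sketch (not from the chapter; names are illustrative):

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        """Minimal experience-replay buffer: store transitions as the agent
        acts, then sample random minibatches to replay past experience."""

        def __init__(self, capacity):
            # deque with maxlen discards the oldest transition when full.
            self.buffer = deque(maxlen=capacity)

        def push(self, state, action, reward, next_state):
            self.buffer.append((state, action, reward, next_state))

        def sample(self, batch_size):
            # Uniform sampling without replacement within a batch.
            return random.sample(self.buffer, batch_size)

    buf = ReplayBuffer(capacity=100)
    for t in range(10):
        buf.push(t, "a", 1.0, t + 1)  # dummy transitions
    batch = buf.sample(4)
    print(len(batch))  # 4
    ```

    The analogy to episodic memory is that stored episodes are revisited offline, decoupled from the order in which they were lived.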
  16. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  17. Designing new Intelligent Machines (COMETT European Symposium, Liège April 1992).D. M. Dubois - forthcoming - Communication and Cognition-Artificial Intelligence.
  18. How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
    3 citations
  19. Making AI Intelligible: Philosophical Foundations. By Herman Cappelen and Josh Dever. [REVIEW]Nikhil Mahant - forthcoming - Philosophical Quarterly.
    Linguistic outputs generated by modern machine-learning neural net AI systems seem to have the same contents—i.e., meaning, semantic value, etc.—as the corresponding human-generated utterances and texts. Building upon this essential premise, Herman Cappelen and Josh Dever's Making AI Intelligible sets for itself the task of addressing the question of how AI-generated outputs have the contents that they seem to have (henceforth, ‘the question of AI Content’). In pursuing this ambitious task, the book makes several high-level, framework observations about how a (...)
  20. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    49 citations
  21. Axe the X in XAI: A Plea for Understandable AI.Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  22. Transparencia, explicabilidad y confianza en los sistemas de aprendizaje automático.Andrés Páez - forthcoming - In Juan David Gutiérrez & Rubén Francisco Manrique (eds.), Más allá del algoritmo: oportunidades, retos y ética de la Inteligencia Artificial. Bogotá: Ediciones Uniandes.
    One of the ethical principles most frequently mentioned in guidelines for the development of artificial intelligence (AI) is algorithmic transparency. However, there is no standard definition of what a transparent algorithm is, nor is it evident why algorithmic opacity poses a challenge to the ethical development of AI. It is also often claimed that algorithmic transparency fosters trust in AI, but this claim is more an a priori assumption than a (...)
  23. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science.Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  24. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  25. Some resonances between Eastern thought and Integral Biomathics in the framework of the WLIMES formalism for modelling living systems.Plamen L. Simeonov & Andree C. Ehresmann - forthcoming - Progress in Biophysics and Molecular Biology 131 (Special).
    Forty-two years ago, Capra published “The Tao of Physics” (Capra, 1975). In this book (page 17) he writes: “The exploration of the atomic and subatomic world in the twentieth century has …. necessitated a radical revision of many of our basic concepts” and that, unlike ‘classical’ physics, the sub-atomic and quantum “modern physics” shows resonances with Eastern thoughts and “leads us to a view of the world which is very similar to the views held by mystics of all ages and (...)
    1 citation
  26. Variable Value Alignment by Design; averting risks with robot religion.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Abstract: One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, realizing projected ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different values as ideal, (...)
  27. Unveiling the Creation of AI-Generated Artworks: Broadening Worringerian Abstraction and Empathy Beyond Contemplation.Leonardo Arriagada - 2024 - Estudios Artísticos 10 (16):142-158.
    In his groundbreaking work, Abstraction and Empathy, Wilhelm Worringer delved into the intricacies of various abstract and figurative artworks, contending that they evoke distinct impulses in the human audience—specifically, the urges towards abstraction and empathy. This article asserts the presence of empirical evidence supporting the extension of Worringer’s concepts beyond the realm of art appreciation to the domain of art-making. Consequently, it posits that abstraction and empathy serve as foundational principles guiding the production of both abstract and figurative art. This (...)
  28. Are Artificial Neurons Neurons?Johannes Brinz - 2024 - In Yannic Kappes, Asya Passinsky, Julio De Rizzo & Benjamin Schnieder (eds.), Facets of Reality — Contemporary Debates. Beiträge der Österreichischen Ludwig Wittgenstein Gesellschaft / Contributions of the Austrian Ludwig Wittgenstein Society. Band / Vol. XXX. Austrian Ludwig Wittgenstein Society. pp. 98-107.
    The media often discuss artificial neural networks like ChatGPT or Amazon's Alexa, and policymakers grapple with regulating emerging technologies. However, the precise nature of "artificial neurons" remains ambiguous. Is this term to be understood merely metaphorically or does it refer to physical entities resembling biological neurons? While commonly understood as mathematical nodes in AI, the discussion extends deeper, particularly with the advent of neuromorphic engineering. This paper discusses whether artificial neurons are indeed neurons and what the potential implications are. Specifically, (...)
  29. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
    4 citations
  30. The argument for near-term human disempowerment through AI.Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic supporting arguments. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    4 citations
  31. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these questions, the paper (...)
    5 citations
  32. Synthetic Media Detection, the Wheel, and the Burden of Proof.Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the criterion—a (...)
    2 citations
  33. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes?Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  34. The Simulation Hypothesis, Social Knowledge, and a Meaningful Life.Grace Helton - 2024 - Oxford Studies in Philosophy of Mind 4:447-60.
    In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and, our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: The former claim helps forestall a concern that if objects in the simulation are not genuine (and so not knowable), then life in the (...)
    1 citation
  35. Năm yếu tố tiền đề của tương tác giữa người và máy trong kỷ nguyên trí tuệ nhân tạo.Manh-Tung Ho & T. Hong-Kong Nguyen - 2024 - Tạp Chí Thông Tin Và Truyền Thông 4 (4/2024):84-91.
    This article introduces five premises with the aim of raising awareness of the relationship between humans and machines in a context where technology increasingly changes everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; the (...)
  36. Understanding with Toy Surrogate Models in Machine Learning.Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
  37. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):365-381.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  38. Inteligența, de la originile naturale la frontierele artificiale - Inteligența Umană vs. Inteligența Artificială.Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  39. Intelligence, from Natural Origins to Artificial Frontiers - Human Intelligence vs. Artificial Intelligence.Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  40. Consilience and AI as technological prostheses.Jeffrey B. White - 2024 - AI and Society 39 (5):1-3.
    Edward Wilson wrote in Consilience that “Human history can be viewed through the lens of ecology as the accumulation of environmental prostheses” (1999 p 316), with technologies mediating our collective habitation of the Earth and its complex, interdependent ecosystems. Wilson emphasized the defining characteristic of complex systems, that they undergo transformations which are irreversible. His view is now standard, and his central point bears repeated emphasis, today: natural systems can be broken, species—including us—can disappear, ecosystems can fail, and technological prostheses (...)
  41. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - Aistats.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
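    The mixture property stated in the abstract, that the mixture agent's expected total reward in any environment is the weighted average of the component agents' expected rewards, can be illustrated with toy numbers (illustrative only; these are not the paper's formal universal agents):

    ```python
    # Weighted-average property of an agent mixture: given normalized
    # weights and each component agent's expected total reward in some
    # fixed environment, the mixture's expected reward is the weighted sum.

    def mixture_expected_reward(weights, agent_rewards):
        assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(w * r for w, r in zip(weights, agent_rewards))

    # Two agents with expected rewards 10 and 2, mixed with weights 0.75 / 0.25:
    print(mixture_expected_reward([0.75, 0.25], [10.0, 2.0]))  # 8.0
    ```

    On a performance-based notion of intelligence, this is exactly why the mixture's intelligence is the weighted average of the component intelligences.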
  42. CG-Art: An Aesthetic Discussion of the Relationship Between Artistic Creativity and Computation.Leonardo Arriagada - 2023 - Dissertation, University of Groningen
    This research examines how computer-generated art (CG-art) is reshaping the notion of artistic creativity in the current age of Artificial Intelligence (AI). In this context, this study proposes to refine the concept of CG-art by delimiting what an AI-generated artwork is. This new term has its roots in Cognitive Science, Aesthetics and Computer Science, emphasising their intersections. It involves the conjunction of three elements: (1) an autonomous AI-production of a new and surprising idea or artefact, (2) which passes an internal (...)
  43. La scorciatoia.Nello Cristianini - 2023 - Bologna: Il Mulino.
    La scorciatoia (The Shortcut): How machines became intelligent without thinking in a human way. Our creatures are different from us and sometimes stronger; to live alongside them, we must learn to know them. They screen résumés, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and even a few more, but we cannot understand them or reason with them, because their behavior is (...)
  44. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  45. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences.Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
  46. Provocări în inteligența artificială.Nicolae Sfetcu - 2023 - It and C 2 (3):3-10.
    Artificial intelligence is a transformative field that has captured the attention of scientists, engineers, businesses, and governments around the world. As we move further into the 21st century, several prominent trends have emerged in the field of AI. Artificial intelligence and machine-learning technology are used in most of the essential applications of the 2020s. Proposals for controlling the capability of artificial intelligence, also referred to more restrictively as AI confinement, seek to increase our ability to monitor and control the behavior of AI systems, including (...)
  47. How far can we get in creating a digital replica of a philosopher?Anna Strasser, Eric Schwitzgebel & Matthew Crosby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions, Robophilosophy 2022. IOS Press. pp. 371-380.
    Can we build machines with which we can have interesting conversations? Observing the new optimism in AI surrounding deep learning and new language models, we set ourselves an ambitious goal: we want to find out how far we can get in creating a digital replica of a philosopher. This project has two aims: one, more technical, investigates how the best model can be built; the other, more philosophical, explores the limits and risks that accompany the creation (...)
  48. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance.Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
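    The reward shaping this abstract describes can be sketched in a few lines. The function name and the form of the penalty are illustrative assumptions, not the paper's implementation: the NPC's action is compared against the action it would have taken on a counterfactual observation with the pseudo-visible player masked out, and reacting differently costs a fixed penalty.

    ```python
    def shaped_reward(env_reward, action, counterfactual_action, penalty=1.0):
        """Penalize the NPC for reacting to a pseudo-visible player.

        `counterfactual_action` is what the NPC would have done had the
        pseudo-visible player been invisible; acting differently from that
        counterfactual incurs `penalty`, so ignoring the player is rewarded
        unless reacting is worth the tradeoff.
        """
        return env_reward - (penalty if action != counterfactual_action else 0.0)
    ```

    With this shaping, a guard NPC trained by RL learns to ignore pseudo-visible players except when the environment reward for reacting exceeds the penalty, matching the "acceptable tradeoff" behavior the abstract describes.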
  49. AI-aesthetics and the Anthropocentric Myth of Creativity.Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
  50. Trust and Explainable AI: Promises and Limitations.Sara Blanco - 2022 - Ethicomp Conference Proceedings.