Results for 'Dangers of AI'

981 found
  1. Magical thinking and the test of humanity: we have seen the danger of AI and it is us. David Morris - 2024 - AI and Society 39 (6):3047-3049.
  2. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks. Trystan S. Goetze - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  3. Are editors of flesh and blood necessary for meeting yet another danger with AI? Johan Gamper - manuscript
    As a writer, it is hard to defend oneself from the accusation of being a robot. Even though the argument is ad hominem, it is perhaps too difficult to create a “reversed” Turing test. It is suggested in this article that editors of flesh and blood are still necessary.
  4. Pathologies of AI: Responsible use of artificial intelligence in professional work. [REVIEW] Ronald Stamper - 1988 - AI and Society 2 (1):3-16.
    Although the AI paradigm is useful for building knowledge-based systems for the applied natural sciences, there are dangers when it is extended into the domains of business, law and other social systems. It is misleading to treat knowledge as a commodity that can be separated from the context in which it is regularly used. Especially when it relates to social behaviour, knowledge should be treated as socially constructed, interpreted and maintained through its practical use in context. The meanings of (...)
    7 citations
  5. Who's really afraid of AI?: Anthropocentric bias and postbiological evolution. Milan M. Ćirković - 2022 - Belgrade Philosophical Annual 35:17-29.
    The advent of artificial intelligence (AI) systems has provoked a lot of discussion in epistemological, bioethical, and risk-analytic terms, much of it rather paranoid in nature. Unless one takes an extreme anthropocentric and chronocentric stance, this process can be safely regarded as part and parcel of the sciences of the origin. In this contribution, I would like to suggest that at least four different classes of arguments could be brought forth against the proposition that AI - either human-level or (...)
  6. “Empathy Code”: The Dangers of Automating Empathy in Business. Nicola Thomas & Niall Docherty - forthcoming - Business and Society.
    Organizations are increasingly adopting “Emotional AI” to monitor and influence employee emotions, aiming to create more empathetic workplaces. However, we argue that automating empathy risks fostering empathy skepticism, alienating employees, exacerbating mental health issues, and eroding trust. We call on organizations to address the root causes of negative workplace emotions and leverage AI as a tool to complement—rather than replace—empathy, fostering workplaces that genuinely prioritize care and trust.
  7. Dutch Comfort: The Limits of AI Governance through Municipal Registers. Corinne Cath & Fieke Jansen - 2022 - Techné: Research in Philosophy and Technology 26 (3):395-412.
    In this commentary, we respond to the editorial letter by Professor Luciano Floridi entitled “AI as a public service: Learning from Amsterdam and Helsinki.” Here, Floridi considers the positive impact of municipal AI registers, which collect a limited number of algorithmic systems used by the cities of Amsterdam and Helsinki. We question a number of assumptions about AI registers as a governance model for automated systems. We start with recent attempts to normalize AI by decontextualizing and depoliticizing it, which is (...)
  8. Striking the balance: ethical challenges and social implications of AI-induced power shifts in healthcare organizations. Martin Hähnel, Sabine Pfeiffer & Stephan Graßmann - forthcoming - AI and Society:1-18.
    The emergence of new digital technologies in modern work organizations is also changing the way employees and employers communicate, design work processes and responsibilities, and delegate. This paper takes an interdisciplinary—namely sociological and philosophical—perspective on the use of AI in healthcare work organizations. Using this example, structural power relations in modern work organizations are first examined from a sociological perspective, and it is shown how these structural power relations, decision-making processes, and areas of responsibility shift when AI is used. In (...)
  9. Companion robots: the hallucinatory danger of human-robot interactions. Piercosma Bisconti & Daniele Nardi - 2018 - In Piercosma Bisconti & Daniele Nardi (eds.), AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 17-22.
    The advent of the so-called Companion Robots is raising many ethical concerns among scholars and in public opinion. Focusing mainly on robots caring for the elderly, in this paper we analyze these concerns to distinguish which are directly ascribable to robotics and which are instead preexistent. One of these is the “deception objection”, namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behaviors. We argue that this charge, as it is formulated today, is inconsistent. (...)
  10. How AI can be surprisingly dangerous for the philosophy of mathematics—and of science. Walter Carnielli - 2021 - Circumscribere: International Journal for the History of Science 27:1-12.
    In addition to the obvious social and ethical risks, there are philosophical hazards behind artificial intelligence and machine learning. I try to raise here some critical points that might counteract some naive optimism, and warn against the possibility that synthetic intelligence may surreptitiously influence the agenda of science before we can realize it.
  11. From Socrates to expert systems: The limits and dangers of calculative rationality. Hubert L. Dreyfus - 1985 - In Carl Mitcham & Alois Huning (eds.), Philosophy and Technology II: Information Technology and Computers in Theory and Practice. Reidel.
    Actual AI research began auspiciously around 1955 with Allen Newell and Herbert Simon's work at the RAND Corporation. Newell and Simon proved that computers could do more than calculate. They demonstrated that computers were physical symbol systems whose symbols could be made to stand for anything, including features of the real world, and whose programs could be used as rules for relating these features. In this way computers could be used to simulate certain important aspects of intelligence. Thus the information-processing model (...)
    8 citations
  12. Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers. Robert Sparrow - 2024 - AI and Society 39 (5):2439-2444.
    When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what (...)
    5 citations
  13. Imagining and governing artificial intelligence: the ordoliberal way—an analysis of the national strategy ‘AI made in Germany’. Jens Hälterlein - forthcoming - AI and Society:1-12.
    National Artificial Intelligence (AI) strategies articulate imaginaries of the integration of AI into society and envision the governing of AI research, development and applications accordingly. To integrate these central aspects of national AI strategies under one coherent perspective, this paper presents an analysis of Germany’s strategy ‘AI made in Germany’ through the conceptual lens of ordoliberal political rationality. The first part of the paper analyses how the guiding vision of a human-centric AI not only adheres to ethical and legal principles (...)
    2 citations
  14. Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions. Fabio Paglieri - 2024 - Philosophy and Technology 37 (2):1-30.
    This paper discusses some societal implications of the most recent and publicly discussed application of advanced machine learning techniques: generative AI models, such as ChatGPT (text generation) and DALL-E (text-to-image generation). The aim is to shift attention away from conceptual disputes, e.g. regarding their level of intelligence and similarities/differences with human performance, to focus instead on practical problems, pertaining to the impact that these technologies might have (and already have) on human societies. After a preliminary clarification of how generative AI works (...)
    2 citations
  15. Public perception of military AI in the context of techno-optimistic society. Eleri Lillemäe, Kairi Talves & Wolfgang Wagner - forthcoming - AI and Society:1-15.
    In this study, we analyse the public perception of military AI in Estonia, a techno-optimistic country with high support for science and technology. This study involved quantitative survey data from 2021 on the public’s attitudes towards AI-based technology in general, and AI in developing and using weaponised unmanned ground systems (UGS) in particular. UGS are a technology that has been tested in militaries in recent years with the expectation of increasing effectiveness and saving manpower in dangerous military tasks. However, developing (...)
    3 citations
  16. AI Dangers: Imagined and Real. Devdatt Dubhashi & Shalom Lappin - 2017 - Communications of the ACM 60 (2):43-45.
    2 citations
  17. AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has (...)
  18. Knowledge mining and social dangerousness assessment in criminal justice: metaheuristic integration of machine learning and graph-based inference. Nicola Lettieri, Alfonso Guarino, Delfina Malandrino & Rocco Zaccagnino - 2023 - Artificial Intelligence and Law 31 (4):653-702.
    One of the main challenges for computational legal research is drawing up innovative heuristics to derive actionable knowledge from legal documents. While a large part of the research has so far been devoted to the extraction of purely legal information, less attention has been paid to seeking out in the texts the clues of more complex entities: legally relevant facts whose detection requires linking and interpreting, as a unified whole, legal information and the results of empirical analyses. This paper presents (...)
  19. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
    4 citations
  20. Protecting society from AI misuse: when are restrictions on capabilities warranted? Markus Anderljung, Julian Hazell & Moritz von Knebel - forthcoming - AI and Society:1-17.
    Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to help automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can (...)
  21. Should We Discourage AI Extension? Epistemic Responsibility and AI. Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
    1 citation
  22. Why AI undermines democracy and what to do about it. Mark Coeckelbergh - 2024 - Cambridge: Polity Press.
    Across the world, Artificial Intelligence (AI) is being used as a tool for political manipulation and totalitarian repression. Stories about AI are often stories of polarization, discrimination, surveillance, and oppression. Is democracy in danger? And can we do anything about it? In this compelling book, Mark Coeckelbergh offers a guide to the key risks posed by AI for democracy. He argues that AI, as it is currently used and developed, not only aids totalitarian regimes but also undermines the fundamental principles (...)
    1 citation
  23. The technological change of reality: Opportunities and dangers. Wolfgang Bibel - 1989 - AI and Society 3 (2):117-132.
    This essay discusses the trade-off between the opportunities and the dangers involved in technological change. It is argued that Artificial Intelligence technology, if properly used, could contribute substantially to coping with some of the major problems the world faces because of the highly complex interconnectivity of modern human society. In order to lay the foundation for the discussion, the symptoms of general unease which are associated with current technological progress, the concept of reality, and the field of Artificial Intelligence are (...)
  24. Navigating the uncommon: challenges in applying evidence-based medicine to rare diseases and the prospects of artificial intelligence solutions. Olivia Rennie - 2024 - Medicine, Health Care and Philosophy 27 (3):269-284.
    The study of rare diseases has long been an area of challenge for medical researchers, with agonizingly slow movement towards improved understanding of pathophysiology and treatments compared with more common illnesses. The push towards evidence-based medicine (EBM), which prioritizes certain types of evidence over others, poses a particular issue when mapped onto rare diseases, which may not be feasibly investigated using the methodologies endorsed by EBM, due to a number of constraints. While other trial designs have been suggested to overcome (...)
    2 citations
  25. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory. Mike Zajko - 2021 - AI and Society 36 (3):1047-1056.
    In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist’s view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an understanding of the ways (...)
    8 citations
  26. The expected AI as a sociocultural construct and its impact on the discourse on technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  27. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, (...)
  28. AI diagnoses terminal illness care limits: just, or just stingy? Leonard Michael Fleck - 2024 - Journal of Medical Ethics 50 (12):818-819.
    I agree with Jecker et al that “the headline-grabbing nature of existential risk (X-risk) diverts attention away from immediate artificial intelligence (AI) threats…”1 Focusing on very long-term speculative risks associated with AI is both ethically distracting and ethically dangerous, especially in a healthcare context. More specifically, AI in healthcare is generating healthcare justice challenges that are real, imminent and pervasive. These are challenges generated by AI that deserve immediate ethical attention, more than any X-risk issues in the distant future. Almost (...)
    2 citations
  29. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion. Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body be allowed to (...)
  30. We are Building Gods: AI as the Anthropomorphised Authority of the Past. Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  31. Shutdown-seeking AI. Simon Goldstein & Pamela Robinson - forthcoming - Philosophical Studies:1-13.
    We propose developing AIs whose only final goal is being shut down. We argue that this approach to AI safety has three benefits: (i) it could potentially be implemented in reinforcement learning, (ii) it avoids some dangerous instrumental convergence dynamics, and (iii) it creates trip wires for monitoring dangerous capabilities. We also argue that the proposal can overcome a key challenge raised by Soares et al. (2015), that shutdown-seeking AIs will manipulate humans into shutting them down. We conclude by comparing (...)
    2 citations
  32. The selfish machine? On the power and limitation of natural selection to understand the development of advanced AI. Maarten Boudry & Simon Friederich - forthcoming - Philosophical Studies:1-24.
    Some philosophers and machine learning experts have speculated that superintelligent Artificial Intelligences (AIs), if and when they arrive on the scene, will wrest power away from humans, with potentially catastrophic consequences. Dan Hendrycks has recently buttressed such worries by arguing that AI systems will undergo evolution by natural selection, which will endow them with instinctive drives for self-preservation, dominance and resource accumulation that are typical of evolved creatures. In this paper, we argue that this argument is not compelling as it (...)
  33. What dangers lurk in the development of emotionally competent artificial intelligence, especially regarding the trend towards sex robots? A review of Catrin Misselhorn’s most recent book. Janina Luise Samuel & André Schmiljun - 2023 - AI and Society 38 (6):2717-2721.
    The discussion around artificial empathy and its ethics is not a new one. This concept can be found in classic science fiction media such as Star Trek and Blade Runner and is also pondered on in more recent interactive media such as the video game Detroit: Become Human. In most depictions, emotions and empathy are presented as the key to being human. Misselhorn's new publication shows that these futuristic stories are becoming more and more relevant today. We must ask ourselves (...)
  34. AI and the falling sky: interrogating X-Risk. Nancy S. Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky & Anita Ho - 2024 - Journal of Medical Ethics 50 (12):811-817.
    The Buddhist Jātaka tells the tale of a hare lounging under a palm tree who becomes convinced the Earth is coming to an end when a ripe bael fruit falls on its head. Soon all the hares are running; other animals join them, forming a stampede of deer, boar, elk, buffalo, wild oxen, rhinoceros, tigers and elephants, loudly proclaiming the earth is ending.1 In the American retelling, the hare is ‘chicken little,’ and the exaggerated fear is that the sky is (...)
    7 citations
  35. Can AI Weapons Make Ethical Decisions? Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  36. AI-based healthcare: a new dawn or apartheid revisited? Alice Parfett, Stuart Townley & Kristofer Allerfeldt - 2021 - AI and Society 36 (3):983-999.
    The Bubonic Plague outbreak that wormed its way through San Francisco’s Chinatown in 1900 tells a story of prejudice guiding health policy, resulting in enormous suffering for much of its Chinese population. This article seeks to discuss the potential for hidden “prejudice” should Artificial Intelligence (AI) gain a dominant foothold in healthcare systems. Using a toy model, this piece explores potential future outcomes, should AI continue to develop without bound. Where potential dangers may lurk will be discussed, so that (...)
    1 citation
  37. The carousel of ethical machinery. Luís Moniz Pereira - 2021 - AI and Society 36 (1):185-196.
    Human beings have been aware of the risks associated with knowledge or its associated technologies since the dawn of time. Not just in Greek mythology, but in the founding myths of Judeo-Christian religions, there are signs and warnings against these dangers. Yet, such warnings and forebodings have never made as much sense as they do today. This stems from the emergence of machines capable of cognitive functions performed exclusively by humans until recently. Besides those technical problems associated with its (...)
  38. The Human Roots of Artificial Intelligence: A Commentary on Susan Schneider's Artificial You. Inês Hipólito - 2024 - Philosophy East and West 74 (2):297-305.
    In lieu of an abstract, here is a brief excerpt of the content: The Human Roots of Artificial Intelligence: A Commentary on Susan Schneider's Artificial You. Inês Hipólito. “Technologies are not mere tools waiting to be picked up and used by human agents, but rather are material-discursive practices that play a role in shaping and co-constituting the world in which we live.” (Karen Barad) Introduction. Susan Schneider's book Artificial You: AI and the Future of Your Mind presents a compelling and bold argument regarding the potential impact (...)
  39. Artificial intelligence, or the mechanization of work. Edward S. Reed - 1987 - AI and Society 1 (2):138-143.
    AI is supposed to be a scientific research program for developing and analyzing computer-based systems that mimic natural psychological processes. I argue that this is a mere fiction, a convenient myth. In reality, AI is a technology for reorganizing the relations of production in workplaces, and specifically for increasing management control. The appeal of the AI myth thus serves as ideological justification for increasing managerial domination. By focusing on the AI myth, critics of AI are diverting themselves from the very (...)
    1 citation
  40. Superintelligence: paths, dangers, strategies. Nick Bostrom (ed.) - 2003 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
    308 citations
  41. We have to talk about emotional AI and crime. Lena Podoletz - 2023 - AI and Society 38 (3):1067-1082.
    Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources, such as facial (micro)-movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated and so are the underlying scientific methods that serve as the basis for many such technologies. In this article I will engage with this new technology, and with the debates and literature that surround it. Working at the intersection of (...)
    2 citations
  42. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  43. Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  44. Artificial intelligence-based prediction of pathogen emergence and evolution in the world of synthetic biology.Antoine Danchin - 2024 - Microbial Biotechnology 17 (10):e70014.
    The emergence of new techniques in both microbial biotechnology and artificial intelligence (AI) is opening up a completely new field for monitoring and sometimes even controlling the evolution of pathogens. However, the now famous generative AI extracts and reorganizes prior knowledge from large datasets, making it poorly suited to making predictions in an unreliable future. In contrast, an unfamiliar perspective can help us identify key issues related to the emergence of new technologies, such as those arising from synthetic biology, whilst (...)
  45. How ChatGPT Changed the Media’s Narratives on AI: A Semi-automated Narrative Analysis Through Frame Semantics.Igor Ryazanov, Carl Öhman & Johanna Björklund - 2024 - Minds and Machines 35 (1):1-24.
    We perform a mixed-method frame semantics-based analysis on a dataset of more than 49,000 sentences collected from 5846 news articles that mention AI. The dataset covers the twelve-month period centred around the launch of OpenAI’s chatbot ChatGPT and is collected from the most visited open-access English-language news publishers. Our findings indicate that during the six months succeeding the launch, media attention rose tenfold—from already historically high levels. During this period, discourse has become increasingly centred around experts and political leaders, and (...)
  46. Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI.Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral (...)
  47. Art and Artificial Intelligence - Challenges and Dangers.Andrej Démuth - 2020 - Espes 9 (1):26-35.
    The ability to create and perceive art has long been understood as an exceptional human trait, one that should differentiate us from other organisms and from robots. However, with the rise of the cognitive sciences and the insights stemming from them, as well as of evolutionary biology, even the human being has come to be understood as an organism following evolutionarily and culturally acquired algorithms and evaluation processes. Even fragile and multidimensional phenomena like beauty, aesthetic experience or the good have lately (...)
  48. Wittgenstein and Forms of Life: Constellation and Mechanism.Piergiorgio Donatelli - 2023 - Philosophies 9 (1):4.
    The notion of forms of life points to a crucial aspect of Wittgenstein’s philosophical approach that challenges an influential line in the philosophical tradition. He portrays intellectual activities in terms of a cohesion of things held together in linguistic scenes rooted in the lives of people and the facts of the world. The original inspiration with which Wittgenstein worked on this approach is still relevant today in the recent technological turn associated with AI. He attacked a conception that treated human (...)
  49. Filter Bubbles and the Unfeeling: How AI for Social Media Can Foster Extremism and Polarization.Ermelinda Rodilosso - 2024 - Philosophy and Technology 37 (2):1-21.
    Social media have undoubtedly changed our ways of living. They reach an ever-increasing number of users (over 4.74 billion) and expand pervasively into the most diverse areas of human life. Marketing, education, news, data, and sociality are just a few of the many areas in which social media now play a central role. Recently, some attention toward the link between social media and political participation has emerged. Works in the field of artificial intelligence have already pointed out that there (...)
  50. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
1 — 50 / 981