Results for 'Intelligent agents'

965 found
  1. Intelligent agents as innovations. Alexander Serenko & Brian Detlor - 2004 - AI and Society 18 (4):364-381.
    This paper explores the treatment of intelligent agents as innovations. Past writings in the area of intelligent agents focus on the technical merits and internal workings of agent-based solutions. By adopting a perspective on agents from an innovations point of view, a new and novel description of agents is put forth in terms of their degrees of innovativeness, competitive implications, and perceived characteristics. To facilitate this description, a series of innovation-based theoretical models are utilized (...)
  2. Intelligent agents and liability: Is it a doctrinal problem or merely a problem of explanation? [REVIEW] Emad Abdel Rahim Dahiyat - 2010 - Artificial Intelligence and Law 18 (1):103-121.
    The question of liability in the case of using intelligent agents is far from simple, and cannot sufficiently be answered by deeming the human user as being automatically responsible for all actions and mistakes of his agent. Therefore, this paper is specifically concerned with the significant difficulties which might arise in this regard especially if the technology behind software agents evolves, or is commonly used on a larger scale. Furthermore, this paper contemplates whether or not it is (...)
  3. Socially Intelligent Agents - Towards a Science of Social Minds. K. Dautenhahn - forthcoming - Minds and Machines.
  4. Are intelligible agents square? Clea F. Rees - 2014 - Ethical Theory and Moral Practice 17 (1):17-34.
    In How We Get Along, J. David Velleman argues for two related theses: first, that ‘making sense’ of oneself to oneself and others is a constitutive aim of action; second, that this fact about action grounds normativity. Examining each thesis in turn, I argue against the first that an agent may deliberately act in ways which make sense in terms of neither her self-conception nor others' conceptions of her. Against the second thesis, I argue that some vices are such that (...)
  5. Intelligent agent supporting human–multi-robot team collaboration. Ariel Rosenfeld, Noa Agmon, Oleg Maksimov & Sarit Kraus - 2017 - Artificial Intelligence 252 (C):211-231.
  6. Intelligent agents and contracts: Is a conceptual rethink imperative? [REVIEW] Emad Abdel Rahim Dahiyat - 2007 - Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract, especially those relating to intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between traditional contract theory and transactional practice in the case of using (...) software agents. We will further examine how intelligent software agents differ from other software applications, and then consider how such differences are legally relevant. This paper, however, is not intended to provide the final answer to all questions and challenges in this regard, but to identify the main components and provide perspectives on how to deal with such issues.
  7. Context for language understanding by intelligent agents. Marjorie McShane & Sergei Nirenburg - 2019 - Applied Ontology 14 (4):415-449.
    This paper describes the layers of context leveraged by language-endowed intelligent agents (LEIAs) during incremental natural language understanding (NLU). Context is defined as a combination of (...)
  8. Unplanned effects of intelligent agents on Internet use: a social informatics approach. [REVIEW] Alexander Serenko, Umar Ruhi & Mihail Cocosila - 2006 - AI and Society 21 (1):141-166.
    This paper instigates a discourse on the unplanned effects of intelligent agents in the context of their use on the Internet. By utilizing a social informatics framework as a lens of analysis, the study identifies several unanticipated consequences of using intelligent agents for information- and commerce-based tasks on the Internet. The effects include those that transpire over time at the organizational level, such as e-commerce transformation, operational encumbrance and security overload, as well as those that emerge (...)
  9. EMIA: Emotion Model for Intelligent Agent. Krishna Asawa & Shikha Jain - 2015 - Journal of Intelligent Systems 24 (4):449-465.
    Emotions play a significant role in human cognitive processes such as attention, motivation, learning, memory, and decision making. Many researchers have worked on incorporating emotions into a cognitive agent. However, each model has its own merits and demerits. Moreover, most studies on emotion focus on steady-state emotions rather than on emotion switching. Thus, in this article, a domain-independent computational model of emotions for an intelligent agent is proposed that has modules for emotion elicitation, emotion regulation, and emotion transition. The (...)
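    The following is a minimal, hypothetical Python sketch of the three-module structure named in this abstract (emotion elicitation, regulation, and transition); the class and method names are illustrative assumptions, not the EMIA authors' actual design.

    # Illustrative only: a toy agent with the three modules the EMIA abstract names.
    # All names and numeric scales are assumptions made for this sketch.
    from dataclasses import dataclass, field

    @dataclass
    class EmotionState:
        label: str = "neutral"
        intensity: float = 0.0

    @dataclass
    class ToyEmotionAgent:
        state: EmotionState = field(default_factory=EmotionState)

        def elicit(self, event_valence: float) -> None:
            # Emotion elicitation: map an appraised event to a candidate emotion.
            label = "joy" if event_valence > 0 else "distress"
            self.state = EmotionState(label, abs(event_valence))

        def regulate(self, damping: float = 0.5) -> None:
            # Emotion regulation: attenuate the current intensity.
            self.state.intensity *= damping

        def transition(self) -> None:
            # Emotion transition: decay back to the neutral steady state.
            if self.state.intensity < 0.1:
                self.state = EmotionState()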
  10. Modelling socially intelligent agents. Bruce Edmonds - manuscript
    The perspective of modelling agents, rather than using them for a specified purpose, entails a difference in approach: in particular, an emphasis on veracity as opposed to efficiency. An approach using evolving populations of mental models is described that goes some way to meet these concerns. It is then argued that social intelligence is not merely intelligence plus interaction but should allow for individual relationships to develop between agents. This means that, at least, agents must be able (...)
     
  11. (1 other version) Parameterizing mental model ascription across intelligent agents. Marjorie McShane - 2014 - Interaction Studies 15 (3):404-425.
    Mental model ascription – also called mindreading – is the process of inferring the mental states of others, which happens as a matter of course in social interactions. But although ubiquitous, mindreading is presumably a highly variable process: people mindread to different extents and with different results. We hypothesize that human mindreading ability relies on a large number of personal and contextual features: the inherent abilities of specific individuals, their current physical and mental states, their knowledge of the domain of (...)
  12. A Value-Sensitive Design Approach to Intelligent Agents. Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a design methodology called Value-Sensitive Design (VSD) and explores its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently (...)
  13. Creative collaboration within heterogeneous human/intelligent agent teams. Christopher Kaczmarek - 2021 - Technoetic Arts 19 (3):269-281.
    As we move towards a world that is using machine learning and nascent artificial intelligence to analyse and, in many ways, guide most aspects of our lives, new forms of heterogeneous collaborative teams that include human/intelligent machine agents will become not just possible, but an inevitable part of our shared world. The conscious participation of the arts in the conversation about, and development and implementation of, these new collaborative possibilities is crucial, as the arts serve as our best (...)
  14. Science with Artificially Intelligent Agents: The Case of Gerrymandered Hypotheses. Ioannis Votsis - unknown
    Barring some civilisation-ending natural or man-made catastrophe, future scientists will likely incorporate fully fledged artificially intelligent agents in their ranks. Their tasks will include the conjecturing, extending and testing of hypotheses. At present human scientists have a number of methods to help them carry out those tasks. These range from the well-articulated, formal and unexceptional rules to the semi-articulated rules-of-thumb and intuitive hunches. If we are to hand over at least some of the aforementioned tasks to artificially (...) agents, we need to find ways to make explicit and ultimately formal, not to mention computable, the more obscure of the methods that scientists currently employ with some measure of success in their inquiries. The focus of this talk is a problem for which the available solutions are at best semi-articulated and far from perfect. It concerns the question of how to conjecture new hypotheses or extend existing ones such that they do not save phenomena in gerrymandered or ad hoc ways. This talk puts forward a fully articulated formal solution to this problem by specifying what it is about the internal constitution of the content of a hypothesis that makes it gerrymandered or ad hoc. In doing so, it helps prepare the ground for the delegation of a full gamut of investigative duties to the artificially intelligent scientists of the future.
  15. Establishing Human Observer Criterion in Evaluating Artificial Social Intelligence Agents in a Search and Rescue Task. Lixiao Huang, Jared Freeman, Nancy J. Cooke, Myke C. Cohen, Xiaoyun Yin, Jeska Clark, Matt Wood, Verica Buchanan, Christopher Corral, Federico Scholcover, Anagha Mudigonda, Lovein Thomas, Aaron Teo & John Colonna-Romano - forthcoming - Topics in Cognitive Science.
    Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human–human teams, and human–artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft to evaluate ASI agents’ ability to infer participants’ knowledge training conditions and predict participants’ next victim type to be rescued. We evaluated ASI agents’ capabilities in three ways: (a) comparison to ground truth—the actual knowledge training condition and participant actions; (b) (...)
  16. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
  17. Biography and Archive. A Lecture by Jorge Luis Borges as Transcribed by a Police Intelligence Agent. Patricia Funes - 2025 - Astrolabio: Nueva Época 34:1-25.
    On 28 May 1970, Jorge Luis Borges delivered the lecture “Junín y la Conquista del Desierto” in that city. Among the audience, an agent of the local intelligence services listened, took notes, and sent his superiors a version of the lecture together with his impressions, a singular record held in the Archive of the Dirección de Inteligencia de la Policía de la Provincia de Buenos Aires (DIPPBA). The aim of the article is to situate the document within two (...)
  18. Integrating representation learning and skill learning in a human-like intelligent agent. Nan Li, Noboru Matsuda, William W. Cohen & Kenneth R. Koedinger - 2015 - Artificial Intelligence 219 (C):67-91.
  19. Aliens in the Space of Reasons? On the Interaction Between Humans and Artificial Intelligent Agents. Bert Heinrichs & Sebastian Knell - 2021 - Philosophy and Technology 34 (4):1569-1580.
    In this paper, we use some elements of the philosophical theories of Wilfrid Sellars and Robert Brandom for examining the interactions between humans and machines. In particular, we adopt the concept of the space of reasons for analyzing the status of artificial intelligent agents. One could argue that AIAs, like the widely used recommendation systems, have already entered the space of reasons, since they seem to make knowledge claims that we use as premises for further claims. This, in (...)
  20. Special issue on logics for intelligent agents and multi-agent systems. Mehmet A. Orgun, Guido Governatori, Chuchang Liu, Mark Reynolds & Abdul Sattar - 2011 - Journal of Applied Logic 9 (4):221-222.
  21. Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philipp Schmidt & Sophie Loidolt - 2023 - Philosophy and Technology 36 (3):1-32.
    In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by (...)
  22. Flourishing Ethics and identifying ethical values to instill into artificially intelligent agents. Nesibe Kantar & Terrell Ward Bynum - 2022 - Metaphilosophy 53 (5):599-604.
    The present paper uses a Flourishing Ethics analysis to address the question of which ethical values and principles should be “instilled” into artificially intelligent agents. This is an urgent question that is still being asked seven decades after philosopher/scientist Norbert Wiener first asked it. An answer is developed by assuming that human flourishing is the central ethical value, which other ethical values, and related principles, can be used to defend and advance. The upshot is that Flourishing Ethics can (...)
  23. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and (...)
  24. Introduction to Special Issue: Mental model ascription by intelligent agents. Marjorie McShane - 2014 - Interaction Studies 15 (3):vii-xii.
  25. Agents preserving privacy on intelligent transportation systems according to EU law. Javier Carbo, Juanita Pedraza & Jose M. Molina - forthcoming - Artificial Intelligence and Law:1-34.
    Intelligent Transportation Systems are expected to automate how parking slots are booked by trucks. The intrinsically dynamic nature of this problem, the need for explanations, and the inclusion of private data justify an agent-based solution. Agents solving this problem reason according to the Belief-Desire-Intention (BDI) model and are implemented with JASON. The privacy of trucks is protected by sharing only a list of parkings ordered by preference. Furthermore, the process of assigning parking slots takes into account legal requirements on breaks and (...)
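    As a rough illustration of the privacy idea described above (each truck reveals only a preference-ordered list of parkings, not its underlying schedule or constraints), here is a hedged Python sketch of a greedy assignment from such lists. It is not the paper's JASON/BDI implementation; the function, data shapes, and capacity model are assumptions.

    # Hypothetical sketch: assign parking slots using only each truck's
    # preference-ordered list of parkings (no raw private data is shared).
    from typing import Dict, List

    def assign_parkings(preferences: Dict[str, List[str]],
                        capacity: Dict[str, int]) -> Dict[str, str]:
        remaining = dict(capacity)            # free slots per parking
        assignment: Dict[str, str] = {}
        for truck, ranked in preferences.items():
            for parking in ranked:            # walk the truck's own ranking
                if remaining.get(parking, 0) > 0:
                    assignment[truck] = parking
                    remaining[parking] -= 1
                    break
        return assignment

    print(assign_parkings(
        {"truck_a": ["P1", "P2"], "truck_b": ["P1", "P3"]},
        {"P1": 1, "P2": 2, "P3": 1},
    ))  # -> {'truck_a': 'P1', 'truck_b': 'P3'}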
  26. Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? Jana Sedlakova & Manuel Trachsel - 2022 - American Journal of Bioethics 23 (5):4-13.
    Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape—such as therapeutic support for people with mental health problems and without access to care. The adoption of CAI poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is rather a tool or an (...)
  27. Normal = Normative? The role of intelligent agents in norm innovation. Marco Campenní, Giulia Andrighetto, Federico Cecconi & Rosaria Conte - 2009 - Mind and Society 8 (2):153-172.
    The necessity to model the mental ingredients of norm compliance is a controversial issue within the study of norms. So far, the simulation-based study of norm emergence has shown a prevailing tendency to model norm conformity as a thoughtless behavior, emerging from social learning and imitation rather than from specific, norm-related mental representations. In this paper, the opposite stance—namely, a view of norms as hybrid, two-faceted phenomena, including a behavioral/social and an internal/mental side—is taken. Such a view is aimed at (...)
  28. A generic distributed simulation system for intelligent agent design and evaluation. John Anderson - forthcoming - Proceedings of the Tenth Conference on AI, Simulation and Planning, AIS-2000, Society for Computer Simulation International.
  29. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents. Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? (...)
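    For orientation, a toy Python sketch of the aggregate-reward framing this abstract starts from: two agents are compared by their weighted total rewards over a finite set of environments. This only illustrates the general Legg-Hutter idea under assumed toy types; it does not reproduce the paper's ultrafilter-based comparators.

    # Toy illustration: compare two agents by weighted aggregate reward across
    # a finite environment class. Types and names are assumptions for this sketch.
    from typing import Callable, Dict

    Agent = Callable[[str], float]   # maps an environment name to expected reward

    def aggregate_reward(agent: Agent, weights: Dict[str, float]) -> float:
        # Weighted sum of expected rewards over the environment class.
        return sum(w * agent(env) for env, w in weights.items())

    def at_least_as_intelligent(a: Agent, b: Agent,
                                weights: Dict[str, float]) -> bool:
        return aggregate_reward(a, weights) >= aggregate_reward(b, weights)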
  30. Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs. Laura Moradbakhti, Simon Schreibelmayr & Martina Mara - 2022 - Frontiers in Psychology 13.
    Artificial Intelligence is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs, namely autonomy, competence, and relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention (...)
  31. Emotional Intelligence and Coping Mechanisms among Selected Call Center Agents in Cebu City (2nd edition). Mark Anthony Polinar - 2023 - International Journal of Open-Access, Interdisciplinary and New Educational Discoveries of ETCOR Educational Research Center (3):827-838.
    This study evaluated how call center agents manage their emotions when interacting with customers with different emotional states. The coping mechanisms employees develop through experience can impact their communication and satisfaction with customer service. A study was conducted using a descriptive-correlational design in three Business Process Outsourcing companies in Cebu City, Philippines. The study aimed to determine employees' agreement and effectiveness in self-awareness, self-management, social awareness, and relationship management. An online sample size calculator was used to gather data, and (...)
  32. Intelligent virtual agents as language trainers facilitate multilingualism. Manuela Macedonia, Iris Groher & Friedrich Roithmayr - 2014 - Frontiers in Psychology 5:86783.
    In this paper we introduce a new generation of language trainers: intelligent virtual agents (IVAs) with human appearance and the capability to teach foreign language vocabulary. We report results from studies that we have conducted with Billie, an IVA employed as a vocabulary trainer, as well as research findings on the acceptance of the agent as a trainer by adults and children. The results show that Billie can train humans as well as a human teacher can and that (...)
  33. Rae Earnshaw and John Vince (eds): Intelligent agents for mobile and virtual media. [REVIEW] Richard Ennals - 2004 - AI and Society 18 (1):84-85.
  34. Can agent-causation be rendered intelligible?: an essay on the etiology of free action. Andrei A. Buckareff - 1999 - Dissertation, Texas A&M University
    The doctrine of agent-causation has been suggested by many interested in defending libertarian theories of free action to provide the conceptual apparatus necessary to make the notion of incompatibilist freedom intelligible. In the present essay the conceptual viability of the doctrine of agent-causation will be assessed. It will be argued that agent-causation is, insofar as it is irreducible to event-causation, mysterious at best, totally unintelligible at worst. First, the arguments for agent-causation made by such eighteenth-century luminaries as Samuel Clarke and (...)
  35. Artificial intelligence and conversational agent evolution – a cautionary tale of the benefits and pitfalls of advanced technology in education, academic research, and practice. Curtis C. Cain, Carlos D. Buskey & Gloria J. Washington - 2023 - Journal of Information, Communication and Ethics in Society 21 (4):394-405.
    Purpose: The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also highlighting the need for vigilant monitoring to prevent unethical applications. Design/methodology/approach: As AI becomes more prevalent in academia and research, it is crucial to explore ways to ensure ethical usage of the technology and to identify potentially unethical usage. This manuscript uses a popular AI chatbot to write the introduction and parts of the body of (...)
  36. GRASP agents: social first, intelligent later. Gert Jan Hofstede - 2019 - AI and Society 34 (3):535-543.
    This paper urges that if we wish to give social intelligence to our agents, it pays to look at how we acquired our social intelligence ourselves. We are born with drives and motives that are innate and deeply social. Next, as children we are socialized to acquire norms and values and to understand rituals large and small. These social elements are the core of our being. We capture them in the acronym GRASP: Groups, Rituals, Affiliation, Status, Power. As a (...)
  37. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the (...)
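    Read literally, the mixture property stated in this abstract can be written as follows; the notation is assumed for illustration and is not taken from the paper. For agents \pi_1, \dots, \pi_n with weights w_i \ge 0 and \sum_i w_i = 1, the weighted mixture \pi_w satisfies, for every environment \mu,

        \mathbb{E}_{\mu}^{\pi_w}[R] \;=\; \sum_{i=1}^{n} w_i \, \mathbb{E}_{\mu}^{\pi_i}[R],

    where R denotes total reward in \mu.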
  38. Agents and Artificial Intelligence. Jasper van den Herik, A. Rocha & J. Filipe (eds.) - 2017 - Springer.
  39. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability. Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics common to (...)
  40. Embodied Intelligence: Smooth Coping in the Learning Intelligent Decision Agent Cognitive Architecture. Christian Kronsted, Sean Kugele, Zachariah A. Neemeh, Kevin J. Ryan & Stan Franklin - 2022 - Frontiers in Psychology 13.
    Much of our everyday, embodied action comes in the form of smooth coping. Smooth coping is skillful action that has become habituated and ingrained, generally placing less stress on cognitive load than considered and deliberative thought and action. When performed with skill and expertise, walking, driving, skiing, musical performances, and short-order cooking are all examples of the phenomenon. Smooth coping is characterized by its rapidity and relative lack of reflection, both being hallmarks of automatization. Deliberative and reflective actions provide the (...)
  41. Artificial intelligence as a discursive practice: the case of embodied software agent systems. [REVIEW] Sean Zdenek - 2003 - AI and Society 17 (3-4):340-363.
    In this paper, I explore some of the ways in which Artificial Intelligence (AI) is mediated discursively. I assume that AI is informed by an “ancestral dream” to reproduce nature by artificial means. This dream drives the production of “cyborg discourse”, which hinges on the belief that human nature (especially intelligence) can be reduced to symbol manipulation and hence replicated in a machine. Cyborg discourse, I suggest, produces AI systems by rhetorical means; it does not merely describe AI systems or (...)
  42. Artificial Intelligence and Agentive Cognition: A Logico-linguistic Approach. Aziz Zambak & Roger Vergauwen - 2009 - Logique Et Analyse 52 (205):57-96.
  43. Measuring the intelligence of an idealized mechanical knowing agent. Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to (...)
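    In symbols, the definition sketched here has the following shape; the notation is an assumption for illustration, and K is a placeholder for the paper's exact knowledge condition, which is truncated in the snippet above:

        \mathrm{Int}(A) \;=\; \sup \{\, \alpha \text{ a computable ordinal} : \exists c \ (c \text{ codes } \alpha \ \wedge \ K(A, c)) \,\}.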
  44. Generic Intelligent Systems-Agent Systems-Automatic Classification for Grouping Designs in Fashion Design Recommendation Agent System. Kyung-Yong Jung - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes In Computer Science. Springer Verlag. pp. 4251--310.
  45. Pollock on token physicalism, agent materialism and strong artificial intelligence. Dale Jacquette - 1993 - International Studies in the Philosophy of Science 7 (2):127-140.
    An examination of John Pollock's theory of artificial intelligence and philosophy of mind raises difficulties for his mechanist concept of person. Token physicalism, agent materialism, and strong artificial intelligence are so related that if the first two propositions are not well‐established, then there is no justification for believing that an artificial consciousness can be designed and built. Pollock's arguments are shown to be inconclusive in upholding a functionalist theory of persons as supervenient but purely physical entities. In part this is (...)
  46. Tools and/or Agents? Reflections on Sedlakova and Trachsel’s Discussion of Conversational Artificial Intelligence. Sven Nyholm - 2023 - American Journal of Bioethics 23 (5):17-19.
    Sedlakova and Trachsel (2023) consider conversational artificial intelligence (CAI) as a new way of providing psychotherapy to patients. This is an important topic, and Sedlakova and Trachsel have...
  47. Averroes and Aquinas on the Agent Intellect’s Causation of Intelligibles. Therese Scarpelli Cory - 2015 - Recherches de Theologie Et Philosophie Medievales 82:1-60.
    This article examines two medieval thinkers—Averroes and Aquinas—on the kind of causation exercised by the agent intellect in “abstracting” or producing intelligibles from images in the imagination. It argues that abstraction in these thinkers should be interpreted in causal terms, as an act whereby images in the imagination, through the power of the agent intellect, educe their intelligible likeness in a receptive intellect. This Averroan-Thomistic causal approach to abstraction offers an intriguing alternative to the usual approach to abstraction as an (...)
     
  48. Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents. Travis N. Rieder, Brian Hutler & Debra J. H. Mathews - 2020 - American Journal of Bioethics Neuroscience 11 (2):120-127.
  49. AI in Education and Intelligent Tutoring Systems-Intelligent Learning Objects: An Agent Approach to Create Reusable Intelligent Learning Environments with Learning Objects. Ricardo Azambuja Silveira, Eduardo Rodrigues Gomes & Rosa Viccari - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes In Computer Science. Springer Verlag. pp. 17-26.
     
  50. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these (...)
1 — 50 / 965