Contents
157 found
1 — 50 / 157
  1. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era. Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of advanced foundation (...)
  2. Can AI Abstract the Architecture of Mathematics? Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Nature Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing—it feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had (...)
  3. Private memory confers no advantage. Samuel Allen Alexander - forthcoming - CIFMA.
    Mathematicians and software developers use the word "function" very differently, and yet, sometimes, things that are in practice implemented using the software developer's "function", are mathematically formalized using the mathematician's "function". This mismatch can lead to inaccurate formalisms. We consider a special case of this meta-problem. Various kinds of agents might, in actual practice, make use of private memory, reading and writing to a memory-bank invisible to the ambient environment. In some sense, we humans do this when we silently subvocalize (...)
  4. Explicit Legg-Hutter intelligence calculations which suggest non-Archimedean intelligence. Samuel Allen Alexander & Arthur Paul Pedersen - forthcoming - Lecture Notes in Computer Science.
    Are the real numbers rich enough to measure intelligence? We generalize a result of Alexander and Hutter about the so-called Legg-Hutter intelligence measures of reinforcement learning agents. Using the generalized result, we exhibit a paradox: in one particular version of the Legg-Hutter intelligence measure, certain agents all have intelligence 0, even though in a certain sense some of them outperform others. We show that this paradox disappears if we vary the Legg-Hutter intelligence measure to be hyperreal-valued rather than real-valued.
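    For orientation, the real-valued Legg-Hutter measure that the abstract varies is standardly defined as a complexity-weighted sum of expected total rewards; in my notation (not necessarily the paper's):

        \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

    where E is a class of computable environments, K(\mu) is the Kolmogorov complexity of \mu, and V_\mu^\pi is the expected total reward of policy \pi in \mu. The paradox is that agents whose performance differences vanish in every term of this real-valued sum can still receive identical (here, zero) intelligence; moving the codomain to the hyperreals is the authors' proposed repair.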
  5. Computer Simulations, Machine Learning and the Laplacean Demon: Opacity in the Case of High Energy Physics. Florian J. Boge & Paul Grünke - forthcoming - In Andreas Kaminski, Michael Resch & Petra Gehring (eds.), The Science and Art of Simulation II.
    In this paper, we pursue three general aims: (I) We will define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein. (II) We identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP respectively, and ask whether, and if so how, they may be overcome. (III) We address the question of whether any kind of opacity, contingent (...)
  6. (1 other version) Intention Reconsideration in Artificial Agents: a Structured Account. Fabrizio Cariani - forthcoming - special issue of Philosophical Studies.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible prospect is (...)
  7. Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback. Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - forthcoming - Proceedings of the Forty-First International Conference on Machine Learning.
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" (...)
  8. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability. Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  9. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  10. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
  11. Morality First? Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
  12. AI Meets Mindfulness: Redefining Spirituality and Meditation in the Digital Age. R. L. Tripathi - 2025 - The Voice of Creative Research 7 (1):10.
    The combination of spirituality, meditation, and artificial intelligence (AI) has promising potential to expand people’s well-being through technology-based meditation. Traditional meditation originates in Zen Buddhism and Patanjali’s Yoga Sutras and aims at inner peace and an intensified consciousness that elevates one’s personal disposition. AI, in turn, offers powerful means of delivering those practices in the form of self-improving systems that customize them and make access to them easier. However, such an integration raises major philosophical and ethical questions, including the genuineness of (...)
  13. Ontologies, arguments, and Large-Language Models. John Beverley, Francesco Franda, Hedi Karray, Dan Maxwell, Carter Benson & Barry Smith - 2024 - In Ítalo Oliveira (ed.), Joint Ontologies Workshops (JOWO). Twente, Netherlands: CEUR. pp. 1-9.
    The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and so forth. Accordingly, there is a growing interest in the use of ontologies and knowledge graphs to make LLMs more trustworthy. This rests on the long history of ontologies and knowledge graphs in constructing human-comprehensible justification for model outputs as well as traceability concerning the impact of evidence (...)
  14. Chess AI does not know chess - The death of Type B strategy and its philosophical implications. Spyridon Kakos - 2024 - Harmonia Philosophica Articles.
    Playing chess was one of the first domains of human thinking to be conquered by computers. From the historic win of Deep Blue against chess champion Garry Kasparov until today, computers have completely dominated the world of chess, leaving no room for question as to who is the king of this sport. However, the better computers become at chess, the more obvious their basic disadvantage becomes: even though they can defeat any human in chess and play phenomenally great and intuitive (...)
  15. Diagrammatic Representation and Inference: 14th International Conference, Diagrams 2024, Münster, Germany, September 27 – October 1, 2024, Proceedings. Jens Lemanski, Mikkel Willum Johansen, Emmanuel Manalo, Petrucio Viana, Reetu Bhattacharjee & Richard Burns (eds.) - 2024 - Cham: Springer.
    This book constitutes the refereed proceedings of the 14th International Conference on the Theory and Application of Diagrams, Diagrams 2024, held in Münster, Germany, during September 27–October 1, 2024. -/- The 17 full papers, 19 short papers and 11 papers of other types included in this book were carefully reviewed and selected from 69 submissions. They were organized in topical sections as follows: Keynote Talks; Analysis of Diagrams; Euler and Venn Diagrams; Diagrams in Logic; Diagrams and Applications; Diagram Tools; Historical (...)
  16. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
  17. Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
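    As a concrete illustration of the surrogate idea (my own sketch, not the paper's code; the dataset and model choices are arbitrary), one can fit a shallow, human-readable decision tree to the predictions of an opaque model and check how faithfully it mimics them:

        # Python sketch: a "toy" surrogate for an opaque model
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

        # stand-in for an opaque ML model
        opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # shallow tree trained to imitate the opaque model's outputs
        toy = DecisionTreeClassifier(max_depth=3).fit(X, opaque.predict(X))

        # fidelity: how often the surrogate agrees with the opaque model
        print("fidelity:", (toy.predict(X) == opaque.predict(X)).mean())

    The depth limit is what makes the surrogate a "toy": it highlights the most influential features at the cost of fidelity.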
  18. Are generics and negativity about social groups common on social media? A comparative analysis of Twitter (X) data. Uwe Peters & Ignacio Ojea Quintana - 2024 - Synthese 203 (6):1-22.
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested that generics (...)
  19. What is a machine? Exploring the meaning of ‘artificial’ in ‘artificial intelligence’. Stefan Schulz & Janna Hastings - 2024 - Cosmos+Taxis 12 (5+6):37-41.
    Landgrebe and Smith provide an argument for the impossibility of Artificial General Intelligence based on the limits of simulating complex systems. However, their argument presupposes a very contemporary vision of artificial intelligence as a model trained on data to produce an algorithm executable in a modern digital computing system. The present contribution explores what it means to be artificial. Current artificial intelligence approaches on modern computing systems are not the only conceivable way in which artificial intelligence technology might be created. (...)
  20. La guerre électronique et l'intelligence artificielle [Electronic Warfare and Artificial Intelligence]. Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    Electronic warfare is an essential element of modern military operations and has seen significant advances in recent years. This book provides an overview of electronic warfare, its historical evolution, its key components, and its role in contemporary conflict scenarios. It also discusses emerging trends and challenges in electronic warfare and its relevance in an era of advanced technologies and cyber threats, underlining the need for continued research (...)
  21. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
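    The mixture property can be written compactly (my notation, not necessarily the paper's): for weights w_i \ge 0 with \sum_i w_i = 1 and agents \pi_i, the mixture agent \pi_w satisfies

        V_\mu^{\pi_w} = \sum_i w_i \, V_\mu^{\pi_i}

    in every environment \mu, so any intelligence measure of the form \Upsilon(\pi) = \sum_\mu c_\mu V_\mu^\pi is linear over mixtures (exchanging the sums): \Upsilon(\pi_w) = \sum_i w_i \, \Upsilon(\pi_i).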
  22. Methods for identifying emergent concepts in deep neural networks. Tim Räz - 2023 - Patterns 4.
  23. The future won’t be pretty: The nature and value of ugly, AI-designed experiments. Michael T. Stuart - 2023 - In Milena Ivanova & Alice Murphy (eds.), The Aesthetics of Scientific Experiments. New York, NY: Routledge.
    Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude (...)
  24. Extending Environments to Measure Self-Reflection in Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - 2022 - Journal of Artificial General Intelligence 13 (1).
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
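    A minimal sketch of the idea, under a toy interface of my own invention (the paper's formalism differs): an extended environment receives the agent itself, so it can base its output on what the agent would hypothetically do.

        # Python sketch: an extended environment probing hypothetical behavior
        def reward_sensitivity_env(agent, history):
            """history is a list of (observation, reward) pairs."""
            actual = agent(history)
            # simulate the agent on a counterfactual history with rewards zeroed
            counterfactual = agent([(obs, 0.0) for obs, _ in history])
            # pay the agent only if its behavior depends on the rewards it saw
            return 1.0 if actual != counterfactual else 0.0

        def constant_agent(history):
            return "act"  # ignores observations and rewards entirely

        print(reward_sensitivity_env(constant_agent, [("x", 1.0), ("y", 5.0)]))  # prints 0.0

    An agent that performs well across many such environments must, in effect, track how it would behave under counterfactuals, which is roughly what the authors propose to measure as self-reflection.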
  25. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance. Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
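    The training signal described here can be sketched in a few lines (my toy reconstruction; the names and state representation are invented for illustration):

        # Python sketch: penalize an NPC for reacting to a pseudo-visible player
        PENALTY = 1.0

        def pseudo_visibility_penalty(npc_policy, state, player_id):
            visible_action = npc_policy(state)
            # simulate the NPC on a state with the pseudo-visible player removed
            hidden_state = {k: v for k, v in state.items() if k != player_id}
            hidden_action = npc_policy(hidden_state)
            return PENALTY if visible_action != hidden_action else 0.0

        def guard(state):
            # a naive guard that chases whenever it sees the player
            return "chase" if "player1" in state else "patrol"

        print(pseudo_visibility_penalty(guard, {"player1": (0, 0)}, "player1"))  # prints 1.0

    During RL training this penalty is traded off against the NPC's ordinary rewards, which is what lets the NPC react anyway when ignoring the player would be too costly.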
  26. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics. Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
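    For readers unfamiliar with FCMs, the inference step itself is simple. Below is a generic sketch (my own, with hand-set weights; in the paper the weights are instead optimized by a genetic algorithm against the ethicists' judgments):

        # Python sketch: fuzzy-cognitive-map inference
        import numpy as np

        def fcm_step(state, W):
            # each concept takes the weighted sum of all concepts' activations,
            # squashed into (0, 1) by a sigmoid
            return 1.0 / (1.0 + np.exp(-(W @ state)))

        def fcm_infer(state, W, iters=100):
            for _ in range(iters):
                state = fcm_step(state, W)
            return state  # (near-)fixed-point activations read off as outputs

        W = np.array([[ 0.0,  0.6, -0.4],
                      [ 0.3,  0.0,  0.5],
                      [-0.2,  0.7,  0.0]])
        print(fcm_infer(np.array([0.9, 0.1, 0.5]), W))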
  27. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
  28. Concern Across Scales: a biologically inspired embodied artificial intelligence. Matthew Sims - 2022 - Frontiers in Neurorobotics 1 (Bio A.I. - From Embodied Cogniti).
    Intelligence in current AI research is measured according to designer-assigned tasks that lack any relevance for an agent itself. As such, tasks and their evaluation reveal a lot more about our intelligence than the possible intelligence of agents that we design and evaluate. As a possible first step in remedying this, this article introduces the notion of “self-concern,” a property of a complex system that describes its tendency to bring about states that are compatible with its continued self-maintenance. Self-concern, as (...)
  29. Extended subdomains: a solution to a problem of Hernández-Orallo and Dowe. Samuel Allen Alexander - 2021 - In Samuel Allen Alexander & Marcus Hutter (eds.), AGI.
    This is a paper about the general theory of measuring or estimating social intelligence via benchmarks. Hernández-Orallo and Dowe described a problem with certain proposed intelligence measures. The problem suggests that those intelligence measures might not accurately capture social intelligence. We argue that Hernández-Orallo and Dowe's problem is even more general than how they stated it, applying to many subdomains of AGI, not just the one subdomain in which they stated it. We then propose a solution. In our solution, instead (...)
  30. Can reinforcement learning learn itself? A reply to 'Reward is enough'. Samuel Allen Alexander - 2021 - CIFMA.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al. did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  31. Reward-Punishment Symmetric Universal Intelligence. Samuel Allen Alexander & Marcus Hutter - 2021 - In Samuel Allen Alexander & Marcus Hutter (eds.), AGI.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
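    The "In particular" claim follows from a pairing argument, sketched here in my notation: suppose for every environment \mu there is a mirror \mu^- that is identical except that all rewards are negated, with K(\mu^-) = K(\mu) (the kind of Kolmogorov symmetry assumed of the UTM). A reward-ignoring agent \pi acts identically in \mu and \mu^-, so V_{\mu^-}^\pi = -V_\mu^\pi, and the terms of

        \Upsilon(\pi) = \sum_\mu 2^{-K(\mu)} \, V_\mu^\pi

    cancel in pairs, giving \Upsilon(\pi) = 0.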
  32. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
  33. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  34. AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration. Samuel Alexander - 2020 - AGI.
    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and (...)
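    Stated compactly (my paraphrase of the abstract's formulation): the idealized Knight-Darwin Law says there is no infinite sequence

        A_1 \rightarrow A_2 \rightarrow A_3 \rightarrow \cdots

    of AGIs in which each A_i single-handedly creates A_{i+1}; any unbounded lineage must somewhere involve more than one parent, i.e., collaboration.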
  35. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence. Samuel Alexander - 2020 - CIFMA.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
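    The proposed definition is a single biconditional; in symbols (mine, not necessarily the paper's):

        \mathrm{Know}_{A}(\varphi) \iff \varphi \in \mathrm{Enum}_{A}

    where \mathrm{Enum}_{A} is the set of EA-sentences the AGI A would list if given the enumeration command quoted in the abstract.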
  36. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
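    The classical property being generalized: the reals are Archimedean in the sense that

        \forall x, y > 0 \ \exists n \in \mathbb{N} : nx > y.

    So if a task's reward structure contains an outcome y that matters more than any finite number of copies of another outcome x, no assignment of positive real numbers to x and y can respect that ordering, which is the sense in which real-valued rewards cannot accurately measure non-Archimedean structures.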
  37. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence. Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” (...)
  38. There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  39. Ontology and Cognitive Outcomes. David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James Llinas & Barry Smith - 2020 - Journal of Knowledge Structures and Systems 1 (1):3-22.
    The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors (...)
  40. Cosa significano Paraconsistente, Indecifrabile, Casuale, Calcolabile e Incompleto? [What Do Paraconsistent, Undecidable, Random, Computable and Incomplete Mean?] A Review of Godel's Way: Exploits into an Undecidable World by Gregory Chaitin, Francisco A. Doria, Newton C. A. da Costa, 160p (2012) (review revised 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra [Welcome to Hell on Earth: Babies, Climate Change, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, the Web, Chaos, Starvation, Disease, Violence, Artificial Intelligence, War]. Las Vegas, NV USA: Reality Press. pp. 163-176.
    In 'Godel's Way' three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability, and paraconsistency. I approach these problems from the Wittgensteinian viewpoint that there are two basic issues which have completely different solutions: there are scientific or empirical questions, which are facts about the world that must be investigated observationally, and philosophical questions about how language can be used intelligibly (which include certain questions in mathematics and logic), which must be decided (...)
  41. Gli ominoidi o gli androidi distruggeranno la Terra? [Will Hominoids or Androids Destroy the Earth?] A Review of How to Create a Mind by Ray Kurzweil (2012) (review revised 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra [Welcome to Hell on Earth: Babies, Climate Change, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, the Web, Chaos, Starvation, Disease, Violence, Artificial Intelligence, War]. Las Vegas, NV USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works, these may be largely restricted to certain chapters that wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interwoven with philosophical misunderstandings about what (...)
  43. कैसे सात Socipaths जो चीन शासन कर रहे हैं विश्व युद्ध तीन और तीन तरीके उन्हें रोकने के लिए [How the Seven Sociopaths Who Rule China Are Winning World War Three and Three Ways to Stop Them] (2019). Michael Richard Starks - 2020 - In पृथ्वी पर नर्क में आपका स्वागत है [Welcome to Hell on Earth: Babies, Climate Change, Bitcoin, Cartels, China, Democracy, Diversity, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, the Web, Chaos, Starvation, Disease, Violence, Artificial Intelligence, War]. Las Vegas, NV USA: Reality Press. pp. 389-396.
    The first thing we must keep in mind is that when we say that China says this or China does that, we are not speaking of the Chinese people but of the sociopaths who control the CCP (the Chinese Communist Party), i.e., the Seven Senile Sociopathic Serial Killers (SSSSK) of the CCP's Standing Committee, or the 25 members of the Politburo. I recently watched some typical left-wing fake news programs (pretty much all alike in the same way (...)
  44. The role of robotics and AI in technologically mediated human evolution: a constructive proposal. Jeffrey White - 2020 - AI and Society 35 (1):177-185.
    This paper proposes that existing computational modeling research programs may be combined into platforms for the information of public policy. The main idea is that computational models at select levels of organization may be integrated in natural terms describing biological cognition, thereby normalizing a platform for predictive simulations able to account for both human and environmental costs associated with different action plans and institutional arrangements over short and long time spans while minimizing computational requirements. Building from established research programs, the (...)
  45. Interprétabilité et explicabilité pour l’apprentissage machine [Interpretability and Explainability for Machine Learning: Between Descriptive, Predictive, and Causal Models. A Necessary Epistemological Clarification]. Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale en Intelligence Artificielle - CNIA 2019.
    The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modelling of a physical phenomenon. (...)
  46. Chess, Artificial Intelligence, and Epistemic Opacity. Paul Grünke - 2019 - Információs Társadalom 19 (4):7-17.
    In 2017 AlphaZero, a neural-network-based chess engine, shook the chess world by convincingly beating Stockfish, the highest-rated chess engine. In this paper, I describe the technical differences between the two chess engines and, based on that, I discuss the impact of the modeling choices on the respective epistemic opacities. I argue that the success of AlphaZero’s approach with neural networks and reinforcement learning is counterbalanced by an increase in the epistemic opacity of the resulting model.
  47. Present Scenario of Fog Computing and Hopes for Future Research. G. K. Soni, B. Hiren Bhatt & P. Dhaval Patel - 2019 - International Journal of Computer Sciences and Engineering 7 (9).
    It has been forecast that billions of devices will be connected to the Internet by 2020. All these devices will produce a huge amount of data that will have to be handled rapidly and feasibly. It will become a challenge for real-time applications to handle this data while meeting security requirements and time constraints. The main highlights of cloud computing are on-demand service and scalability; therefore the data generated by IoT devices are generally handled (...)
  48. (1 other version) Data science and molecular biology: prediction and mechanistic explanation. Ezequiel López-Rubio & Emanuele Ratti - 2019 - Synthese (4):1-26.
    In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology. Models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in (...)
  49. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  50. The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI. Fernando Martínez-Plumed, Bao Sheng Loe, Peter Flach, Seán Ó hÉigeartaigh, Karina Vold & José Hernández-Orallo - 2018 - In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018). pp. 5180-5187.
    We present nine facets for the analysis of the past and future evolution of AI. Each facet also has a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AITopics, an official AAAI database, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI (...)