Subjectivity of Explainable Artificial Intelligence

Russian Journal of Philosophical Sciences 65 (1):72-90 (2022)

Abstract

The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. The issue is not new, but the growing complexity of AI systems is forcing researchers to intensify work in this direction. Modern neural networks contain hundreds of layers of neurons, their parameter counts reach into the trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models are becoming more complex, extending to quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the results remain unsatisfactory: a person often cannot understand the “explanations” of AI because it makes decisions differently than a person does, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago, when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building additional knowledge bases to explain the conclusions drawn by AI. At present, other approaches are needed, primarily ones that take into account the external environment and the subjectivity of AI systems. This work addresses the problem by immersing AI models in the social and economic environment, building ontologies of that environment, taking the user’s profile into account, and creating conditions for the purposeful convergence of AI solutions and conclusions toward user-friendly goals.
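
To illustrate the direction described above, here is a minimal, hypothetical Python sketch (not the authors’ implementation): a decision’s most influential features are mapped through a small ontology of the surrounding environment and then phrased according to a user profile, so that the explanation is steered toward goals and terms the user already understands. The names DomainOntology, UserProfile, and explain, as well as the example data, are assumptions introduced here purely for illustration.

    # Hypothetical sketch: profile-aware explanation over a small environment ontology.
    # All names and data below are illustrative assumptions, not an existing API.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class DomainOntology:
        """Maps technical model features to concepts of the social/economic environment."""
        concept_of: Dict[str, str]                                     # feature name -> environment concept
        plain_language: Dict[str, str] = field(default_factory=dict)   # concept -> lay phrasing


    @dataclass
    class UserProfile:
        expertise: str       # e.g. "layperson" or "analyst"
        goals: List[str]     # environment concepts the user cares about


    def explain(decision: str,
                feature_weights: Dict[str, float],
                ontology: DomainOntology,
                profile: UserProfile,
                top_k: int = 3) -> str:
        """Assemble an explanation from the most influential features,
        translated into environment concepts and filtered toward the user's goals."""
        ranked = sorted(feature_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"Decision: {decision}"]
        for feature, weight in ranked[:top_k]:
            concept = ontology.concept_of.get(feature, feature)
            # Flag concepts that match the user's stated goals.
            marker = " (relevant to your goal)" if concept in profile.goals else ""
            # Lay users get plain-language phrasing; analysts get the concept name.
            label = (ontology.plain_language.get(concept, concept)
                     if profile.expertise == "layperson" else concept)
            lines.append(f"- {label}: influence {weight:+.2f}{marker}")
        return "\n".join(lines)


    if __name__ == "__main__":
        ontology = DomainOntology(
            concept_of={"dti": "debt burden", "late_30d": "payment history"},
            plain_language={"debt burden": "how much of your income goes to debt",
                            "payment history": "whether past payments were on time"},
        )
        profile = UserProfile(expertise="layperson", goals=["debt burden"])
        print(explain("loan declined", {"dti": -0.62, "late_30d": -0.41}, ontology, profile))

The sketch only shows the ordering and wording step; in the approach described in the abstract, the ontology and the user profile would come from the surrounding social and economic environment rather than being hard-coded.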

