Abstract
Roboethics is a recently developed field of applied ethics that deals
with the ethical aspects of technologies such as robots, ambient intelligence,
direct neural interfaces, invasive nano-devices, and intelligent soft bots. In this
article we look specifically at the issue of (moral) responsibility in artificially
intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a
social regulatory mechanism. We claim that having a system that takes care of
certain tasks intelligently, learning from experience and making autonomous
decisions, gives us reason to talk about the system (an artifact) as being
“responsible” for a task. Technology is undoubtedly morally significant for humans,
so “responsibility for a task” with moral consequences can be seen as moral
responsibility. Intelligent systems can be seen as parts of socio-technological
systems with distributed responsibilities, where responsible (moral) agency is a
matter of degree. Since not all abnormal conditions of a system's operation
can be predicted, and no system can be tested for all possible situations of
use, the producer's responsibility is to assure the proper functioning of the
system under reasonably foreseeable circumstances. Additional safety
measures must, however, be in place to mitigate the consequences of an
accident. The socio-technological system aimed at assuring the beneficial
deployment of intelligent systems has several responsibility feedback
loops that must function properly: awareness of, and procedures for handling,
risks and responsibilities on the part of designers, producers, implementers, and
maintenance personnel, as well as an understanding in society at large of the
values and dangers of intelligent technology. The basic precondition for
developing this socio-technological control system is the education of engineers
in ethics and keeping alive the democratic debate on preferences about the future
society.