Topoi 32 (2):227-236 (2013)
Abstract
Can computer systems ever be considered moral agents? This paper considers two factors explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the case for computer moral agency: whether computers can have the kinds of intentional states that cause their decisions and actions. I give an account that builds on traditional action theory and allows us to conceive of computers as genuine moral agents in virtue of their own causally efficacious intentional states. These states can cause harm or benefit to moral patients, but do not depend on computer consciousness or intelligence.