Abstract
We motivate the desire for self-driving, explain its potential and limitations, and explore the need for, and potential implementation of, morals, ethics, and other value systems as complementary “capabilities” to the deep technologies behind self-driving. We consider how the incorporation of such systems may drive or slow the adoption of high automation within vehicles. First, we explore the role of morals, ethics, and other value systems in self-driving through a representative hypothetical dilemma faced by a self-driving car. Through the lens of engineering, we explain in simple terms common moral and ethical frameworks, including utilitarianism, deontology, and virtue ethics, before characterizing their relationship to the fundamental algorithms enabling self-driving. We introduce the concepts of behavior cloning, state-based modeling, and reinforcement learning, noting that some algorithms are more suitable than others for implementing value systems. We touch upon the contemporary cross-disciplinary landscape of morals and ethics in self-driving systems from a joint philosophical and technical perspective, and close with considerations for practitioners and the public: individuals may not appreciate the nuance and complexity of using imperfect information to navigate diverse scenarios and hard-to-quantify value systems, while “typical” software development reduces complex problems to black-and-white decision-making.