Abstract
A number of concerns have recently been raised regarding the ability of human agents to effectively maintain control over intelligent and (partially) autonomous artificial systems. These concerns have been said to give rise to “responsibility gaps.” To address these gaps, several scholars and other public and private stakeholders have converged on the idea that a meaningful form of human control (MHC) should at all times be exercised over autonomous intelligent technology. One of the main criticisms of the general idea of MHC is that it may be inherently problematic to have high degrees of control and high degrees of autonomy at the same time, as the two dimensions appear to be inversely related. Several ways of responding to this argument and dealing with the dilemma between control and autonomy have been proposed in the literature. In this paper, we further contribute to the philosophical effort to overcome the trade-off between automation and human control, and to open up space for moral responsibility. We use the tool of conceptual engineering to investigate whether and to what extent removing the element of direct causal intervention from the concept of control can preserve the main functions of that concept, focusing specifically on the extent to which it can act as a foundation for moral responsibility. We show that at least one philosophical account of MHC is indeed a conceptually viable theory for fulfilling the fundamental functions of control, even in the context of completely autonomous artificial systems.