Abstract
This paper points to a dilemma humanity may face in light of AI advancements: whether to create a world with less evil or to preserve humans' status as moral agents. The dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of such systems risks eliminating human moral agency and autonomy, reducing humans to mere moral patients. On the other hand, it also has the potential to bring tremendous benefits to humanity by decreasing human-induced harm in the world. After presenting how this dilemma may arise, I explore general avenues for addressing it. I argue that we do not have to resolve the dilemma in an all-or-nothing fashion and that a more nuanced approach may be suitable. The main point I want to highlight, however, is that we need a principled way of addressing this dilemma, which is currently missing.