Abstract
The presence of predictive AI has steadily expanded into ever more aspects of civil society. I aim to show that although there are reasons for believing the use of such systems is currently problematic, these worries give no indication of their future potential. I argue that the absence of moral limits on how we might manipulate automated systems, together with the likelihood that they are more easily manipulated in the relevant ways than humans, suggests that such systems will eventually outstrip humans in making accurate judgments and unbiased predictions. I begin with some reasonable justifications for the use of predictive AI. I then discuss two of the most significant reasons for believing the use of such systems is currently problematic, before arguing that neither provides sufficient reason to deny that such systems could be superior in the future. In fact, there is reason to believe they can, in principle, be preferable to human decision makers.