Abstract
Should you be targeted by police for a crime that AI predicts you will commit? In this paper, we analyse when, and to what extent, person-based predictive policing (PP) — using AI technology to identify and handle individuals who are likely to breach the law — can be justifiably employed. We first examine PP's epistemological limits, and then argue that these defects do not in themselves warrant abandoning the technology; comparable errors are worse when the predictions are made by humans. Next, drawing on major AI ethics guidelines (e.g., those of the IEEE, the EU, and RIKEN), we refine three basic moral principles specific to person-based PP. We also derive further requirements from case studies, including debates in Chicago, New Orleans, San Francisco, Tokyo, and cities in China. Instead of rejecting PP programs outright, we analyse what necessary conditions must be met for the tool to serve the social good. While acknowledging its risks, we conclude that person-based PP could be beneficial in community policing, especially when integrated into a larger governance framework of the social safety net.