Abstract
In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers, and that the concern that opaque AI is insufficiently transparent is therefore mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in the way that humans are transparent. Rather, the concern is that the way in which opaque AI is opaque is very different from the way in which humans are opaque. What matters is the degree to which the opaque processes of a class of decision makers are stable, uniform, and safe. The opaque processes of humans have these features to a higher degree than those of opaque AI. We should therefore require AI to be more transparent than humans.