Abstract
While there has been much discussion of whether AI systems could function as moral agents or acquire sentience, there has been very little discussion of whether AI systems could have free will. I sketch a framework for thinking about this question, inspired by Daniel Dennett’s work. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, we should ask whether we have good explanatory reasons to view the system as an intentional agent, with the capacity for choice between alternative possibilities and control over the resulting actions. If the answer is “yes”, then the system counts as having free will in a pragmatic and diagnostically useful sense.