Abstract
As we enter an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this question, focusing on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully rule out robots as potential holders of human rights. It then turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.