Abstract
Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating for balanced communication of AI’s risks and benefits and urging serious consideration of other urgent ethical issues alongside X-Risk.1 Building on their analysis, we argue for acknowledging the uniquely attention-grabbing character of X-Risk and leveraging it to foster a comprehensive focus on AI ethics. First, we must consider a discontinuity overlooked in the article by Jecker et al: although X-Risk is perceived as dominating the discourse, it does not, contrary to expectations, lead to a significant allocation of social resources for concrete risk management and practical initiatives. In both the specific realm of ethical AI initiatives and the broader scope of AI risk management, responses to X-Risk enjoy no priority in resource allocation over other related risks.2 This discrepancy suggests that, in terms of actual social resource allocation, X-Risks do not receive resources commensurate with the attention they attract. Unlike other types of risk, X-Risk is perceived as a distant threat, and its extensive media exposure is not matched by corresponding initiatives. Despite the prominence of the longtermist view in media and public discourse, the X-Risk of AI often serves merely as a cautionary note or commentary on the current situation. This suggests that concerns about an AI-driven catastrophe have not been effectively translated into practical initiatives. The gap between the attention drawn and …