Abstract
Artificial intelligence (AI) is transforming industries, generating both enthusiasm and concern. While AI-driven innovations enhance productivity, healthcare, and education, significant ethical issues persist, including misinformation, algorithmic bias, and job displacement. This study examines public perceptions of AI by analyzing large-scale social media discourse, integrating sentiment analysis with expert insights gathered through the Delphi method to assess global perspectives. Findings reveal notable differences across socio-economic contexts: in high-income countries, discussions emphasize AI ethics, governance, and automation risks, whereas in low-income regions, economic challenges and accessibility barriers dominate. Public trust in AI is strongly shaped by governance frameworks, transparency in algorithmic decision-making, and regulatory oversight. Recent advances in AI governance highlight the growing role of explainable AI (XAI) and algorithmic fairness, alongside regulatory developments tailored to societal needs. Algorithmic nudging has emerged as a tool for guiding user behavior while preserving autonomy, and research on user sensemaking around fairness and transparency underscores the importance of interpretability tools in fostering trust in and acceptance of AI-driven decisions. Together, these insights point to the need for adaptive, context-specific policies that ensure ethical AI deployment while mitigating risks. By bridging public sentiment analysis with governance research, this study provides a comprehensive understanding of AI’s societal impact. The findings carry practical implications for policymakers, industry leaders, and researchers, contributing to the development of inclusive, transparent, and accountable AI governance frameworks that align with public expectations.