Abstract
Responsible AI design and deployment in alliance with the Sustainable Development Goals needs to be understood in terms of the power positioning and vested interests that precede and predetermine the sincerity of ‘AI for social good’. A power analysis is employed to chart the asymmetries of knowledge/information and control enabled by tech companies’ cyberpower, revealing the risk that AI technology becomes another economic dependency regime whose burdens fall disproportionately on marginalised communities and populations in the Global South. Where the values of tech are misaligned with those of societies, this threatens the social and cultural fabric that is vital for resilient societies. The authors introduce the enabling vision of AI in community, proposing to disperse power through the application of AI to contextualise technological sustainability. Power held by Big Tech companies should be dispersed within recipient communities through information sharing and sustainable engagement, so that communities can determine what technology they need for the indigenous purposes they value and prioritise. The notion of safe digital spaces, achieved through digital self-determination, provides the mechanism for community empowerment. With trusted social bonding at the AI-human interface, AI in community offers a repositioning of tech to serve communities and assist the achievement of the 2030 Agenda for Sustainable Development.