Digital Trends (2022)
Abstract
Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity: for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans.
According to the theory, advances in AI — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. We’re literally building our soon-to-be-sentient successors.
Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.