What is AI superintelligence? Could it destroy humanity? And is it really almost here?

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies.

It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may only be “a few thousand days” away.

In the book, the Oxford University professor posits that AI may well destroy us if we are not sufficiently prepared. Superintelligence, which he describes as an artificial intelligence that “greatly exceeds the cognitive performance of humans in virtually all domains of interest”, may be a lot closer than many realise, with AI experts and leading industry figures warning that it could be just a few years away.

Last week, OpenAI boss Sam Altman, whose company created ChatGPT, echoed Professor Bostrom’s 2014 warnings, arguing that the seemingly exponential progress of AI technology in recent years makes the arrival of superintelligence inevitable – and that we need to start preparing for it before it is too late.

On Tuesday, he was among the notable signatories of a statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Altman, whose company’s AI chatbot is the fastest-growing app in history, has previously described Professor Bostrom’s book as “the best thing I’ve seen on this topic”. Just a year after reading it, Altman co-founded OpenAI alongside other similarly worried tech leaders like Elon Musk and Ilya Sutskever in order to better understand and mitigate the risks of advanced artificial intelligence.

Initially launched as a non-profit, OpenAI has since transformed into arguably the leading private AI firm – and potentially the closest to achieving superintelligence.

Mr Altman believes superintelligence could not only offer us a life of leisure by doing the majority of our labour, but also hold the key to curing diseases, eliminating suffering and transforming humanity into an interstellar species.

Any attempt to block its progress, he wrote this week, would be “unintuitively risky” and would require “something like a global surveillance regime” that would be virtually impossible to implement.

It is already difficult to understand what is going on inside the ‘mind’ of today’s AI tools; once superintelligence is achieved, its actions may become incomprehensible altogether. It could make discoveries we are incapable of understanding, or take decisions that make no sense to us. The biological and evolutionary limitations of brains made of organic matter mean we may need some form of brain-computer interface just to keep up.

Being unable to compete with AI in this new technological era, Professor Bostrom warns, could see humanity replaced as the dominant lifeform on Earth. The superintelligence may then see us as superfluous to its own goals. If that happens, and some form of AI figures out how to hijack the utilities and technology we rely upon – or even the nuclear weapons we possess – it would not take long for it to wipe us off the face of the planet.

A more benign, but similarly bleak, scenario is that the gulf in intelligence between us and the AI will mean it views us in the same way we view animals. In a 2015 conversation between Musk and the scientist Neil deGrasse Tyson, the pair theorised that AI might treat us like a pet labrador. “They’ll domesticate us,” Dr Tyson said. “They’ll keep the docile humans and get rid of the violent ones.”

To prevent this outcome, Musk has dedicated a portion of his immense fortune to funding Neuralink, a brain-chip startup. The device has already been tested on monkeys, allowing them to play video games with their minds, and the ultimate goal is to turn humans into a form of hybrid superintelligence. (Critics note that even if successful, the technology would create a two-tiered society of the chipped and the chipless.)

Since cutting ties with OpenAI, the tech billionaire has issued several warnings about the imminent emergence of superintelligence. In March, he joined more than 1,000 researchers in calling for a moratorium on the development of powerful AI systems for at least six months. That time should then be spent researching AI safety measures, they wrote in an open letter, in order to avert disaster.

It would take an improbable consensus among the world’s leading AI companies, the majority of which are profit-seeking, for any such pause to have an impact. And while OpenAI continues to spearhead the hunt for the owl’s egg – a nod to the fable that opens Professor Bostrom’s book, in which a flock of sparrows resolves to raise an owl chick before working out how to tame one – Mr Altman appears to have at least heeded its warnings.

In a 2016 interview with the New Yorker, he revealed that he is a doomsday prepper – specifically for an AI-driven apocalypse. “I try not to think about it too much,” he said, revealing that he has “guns, gold, potassium iodide, antibiotics, batteries, water [and] gas masks” stashed away in a hideout in rural California. Not that any of that will be much use to the rest of us.
