In an event following the UK’s AI Safety Summit, entrepreneur Elon Musk spoke with UK prime minister Rishi Sunak about future AIs most likely being “a force for good” and someday enabling a “future of abundance”.
That utopian narrative about a future superhuman AI – one that Musk claims would eliminate the need for human work and even provide meaningful companionship – shaped much of the conversation between the pair. But their conversation’s focus on an “age of abundance” glossed over the current negative impacts and controversies surrounding the tech industry’s race to develop large AI models – and did not get into specifics on how governments should regulate AI and address real-world risks.
“I think we are seeing the most disruptive force in history here, where we will have for the first time something that is smarter than the smartest human,” said Musk. “There will come a point when no job is needed – you can have a job if you want for personal satisfaction, but the AI will be able to do everything.”
Theoretical versus actual AI risks
Musk also reiterated his long-standing warnings about the existential risks that superhuman AI could pose to humanity. In March 2023, he was among the signatories of an open letter calling for a six-month pause in training AI systems more powerful than OpenAI’s GPT-4 large language model.
Sunak also spoke about the role of government in mitigating risks from AI. “My job in government is to say hang on, there is a potential risk here, not a definite risk but a potential risk of something that could be bad,” said Sunak. “My job is to protect the country and we can only do that if we develop that capability in our safety institute and then go in and make sure we can test the models before they are released.”
That grand narrative about a superhuman AI – sometimes referred to as artificial general intelligence or AGI – that “will either deliver us to paradise or will destroy us” can often overshadow the actual negative impacts of current AI technologies, says Emile Torres at Case Western Reserve University in Ohio.
“All of this hype around existential threats associated with super intelligence ultimately just distract from the many real-world harms that [AI] companies [are] already causing,” says Torres.
Torres described such harms as including the environmental impacts of building energy-hungry data centres to support AI training and deployment, tech companies’ exploitation of workers in the Global South to perform gruelling and sometimes traumatising data-labelling tasks that support AI development, and companies training their AI models on the original work of artists and writers, such as book authors, without asking permission or paying compensation.
Elon Musk’s record on AI development
Although Sunak described Musk as a “brilliant innovator and technologist” during their conversation, Musk’s involvement in AI development efforts has been more that of a wealthy backer and businessperson.
Musk originally bankrolled OpenAI – which is the developer of AI models such as GPT-4 that power the popular AI chatbot ChatGPT – with $50 million when the organisation first launched as a nonprofit in 2015. But Musk stepped down from OpenAI’s board of directors and stopped contributing funding in 2018 after his bid to lead the organisation was rejected by OpenAI co-founder Sam Altman.
Since his departure, Musk has criticised OpenAI’s subsequent for-profit pivot and its multibillion-dollar partnership with Microsoft, although he has not been shy about saying that OpenAI would not exist without him.
In July 2023, Musk announced that he was launching his own new AI company called xAI, with a dozen initial team members who had formerly worked at companies such as DeepMind, OpenAI, Google, Microsoft and Tesla. The xAI team appears to have Musk’s approval to pursue ambitious, if vague, goals such as “to understand the true nature of the universe”.