We must move faster to understand and regulate AI, says Rishi Sunak

UK prime minister Rishi Sunak at the AI Safety Summit at Bletchley Park on 2 November

Justin Tallis/WPA Pool/Getty Images

AI models must be better understood and subject to testing before any mandatory legislation to oversee the industry can be introduced, UK prime minister Rishi Sunak told the AI Safety Summit at Bletchley Park – but he also said that such efforts must be accelerated.

Sunak announced the establishment of a UK AI safety institute last week, which will engage with technology companies on a voluntary basis to ensure that their models are safe to roll out to the public. But the body will not have official regulatory powers, and companies will not be compelled to submit to whatever testing protocols it sets up.

In a press conference that marked the end of the summit, Sunak said that regulation will ultimately be needed, but should be based on evidence. Large technology companies working on AI have agreed to engage with the new organisation, he said.

“We now have the agreement we need to go and do the testing before the models are released to the public,” said Sunak. “What we can’t do is expect companies to mark their own homework.”

Sunak said that regulation “takes time, and we need to move faster” and also that more information on AI needs to be gathered before effective regulation can be written.

“When the people who are developing it themselves are constantly surprised by what it can do, it’s important that that regulation is empirically-based, that it’s based on scientific evidence,” he said.

But he said he believed that the state has a strong role to play in the future of AI. “Fundamentally, it’s only governments that can test the national security risks. And ultimately, that is the responsibility and knowledge of a sovereign government and – with the involvement of our intelligence agencies, as they have been with all our AI work thus far – that is the job of governments and no one else can do on behalf of them.”

Around 100 politicians, business leaders and academics spent two days at the UK AI Safety Summit in Bletchley Park discussing the potential dangers posed by smarter-than-human artificial intelligence, a risk Sunak had previously said could be on a par with that of nuclear war.

The event was criticised by some for a lack of transparency after a list of governments and organisations in attendance was published by the UK government – but not the names of all the guests. Reporters at the event were also prohibited from mingling with delegates.

But one notable achievement at the summit was the signing of the so-called Bletchley Declaration by 28 countries, including the US and China, and the European Union. The document acknowledges that AI poses risks and commits signatories to continued research into them. The declaration also put a smaller summit on the same topic in South Korea on the calendar within the next six months, and another large-scale conference next year.

But progress was panned as too vague and slow by experts. “We’ve already been slow to regulate AI and reach international agreements on it,” says Carissa Véliz at the University of Oxford. “Having another meeting in six months’ time doesn’t seem ambitious enough, given the high stakes and the rapid development and implementation of AI.”

The prime minister was also due to hold a live-streamed conversation on 2 November with Elon Musk, owner of xAI, to be broadcast on X, formerly known as Twitter, another of Musk’s companies.
