The taboo is broken. The possibility that AI is an existential risk has now been voiced in public by many of the world’s political leaders. Although the question has been discussed in Silicon Valley and other futurist boltholes for decades, no country’s leader had broached it before last month. That is the lasting legacy of the Bletchley Park AI safety summit, and it is an important one.
It might not be the most important legacy for the man who made the summit happen. According to members of the opposition Labour Party, Britain’s Prime Minister Rishi Sunak was using the event to look for his next job. Faced with chaos in the Tory party, and a potentially damaging enquiry into his role in the management of Covid, he appears to be heading towards catastrophic defeat in the forthcoming general election. The lifestyle of another former British political leader, Nick Clegg, who gets paid a reported $30 million a year by Facebook to be Mark Zuckerberg’s (not terribly effective) flak catcher, must look attractive to Mr Sunak. His on-stage discussion with Elon Musk after the summit was described by several of the attending journalists as an embarrassingly fawning job application.
Cynics point to the fact that the summit was attended by very few heads of state. President Biden sent his deputy, Vice President Kamala Harris, and Chancellor Scholz of Germany and President Macron of France were notable for their absence. The announcement of a UK AI safety institute was upstaged by the announcement the day before the summit that the US would do the same. There is room in the world for more than one safety institute, but given that most of the world’s most advanced AI models are developed by US-owned companies, and the rest by Chinese ones, it is obvious which of the two institutes will be the more significant. The EU has the market power, thanks to its 450 million relatively wealthy consumers, to enforce regulations on big tech, even though it is home to none of them (unless you count Spotify). The UK does not. In AI as in other industries, the rules of the road will be determined where the roads and the cars are made.
Nevertheless, the Bletchley Park summit has got the world’s leaders talking seriously – for the first time – about the longer-term risks from AI, as well as about its staggering potential upsides. It took political courage to keep the longer-term aspects on the agenda when many pressure groups proclaim that the shorter-term risks – privacy, bias, mass disinformation and industrial-scale personalised hacking – are far more important. These risks are certainly important, but the idea that ensuring a future superintelligence is safe is a trivial or worthless endeavour is complacent and absurd. Even more risible is the claim, made seriously by some, that “tech bros” promulgate the idea of existential risk to deflect attention from the short-term harms they are causing, or planning to cause.
Another brave decision that the UK government made and stuck to was to invite China to the summit. China hawks like former PM Liz Truss railed against an invitation being extended to a country that spies against Britain. It is surprising that Ms Truss’s opinions continue to receive attention after her short-lived and disastrous tenure. And does anybody seriously think that the UK doesn’t spy on China in return? In any case, with China being one of only two countries that really matter in the global AI industry, excluding it would have been a mistake.
Whatever its shortcomings, the Bletchley Park summit has got the show on the road. Matt Clifford, the tech entrepreneur seconded to convene the event, deserves considerable praise. The world’s political leaders have spoken publicly about existential risk, and there is no going back. Another summit will be held in six months’ time, in South Korea, and a year from now the French will pick up the baton. This process may turn out to be by far the most positive part of Sunak’s legacy.
The declaration signed at the end of the summit by representatives of 28 governments does not actually use the word “existential”, but it tiptoes right up to the edge: “Substantial risks may arise from … issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict… There is potential for serious, even catastrophic, harm”.
The declaration is vague about what measures should be taken to ensure that advanced AI is safe for humanity, but it would be wholly unreasonable to expect a single event to raise such a fundamental question for the first time and also answer it definitively. Figuring out how to navigate the path towards superintelligence is the most important challenge humanity faces this century – and perhaps ever. Getting it right will take many more summits, and innumerable discussions in all parts of society.
The UN has just announced a high-level advisory body on AI, and the G7 has published a voluntary code of conduct for AI developers. There are calls for the establishment of organisations to do for AI what the IPCC does for global warming, and what CERN does for nuclear research. The debate about whether regulation will help or hinder the development of beneficial AI will rage for years. In a field as complex, fast-moving, and capital-hungry as advanced AI, it will inevitably be challenging for regulators to keep up with, let alone stay ahead of, the organisations that develop the technology. There is a genuine danger of regulatory capture, in which regulators end up imposing rules which entrench big tech’s first-mover advantages.
But it is simply unacceptable to say that regulation is hard, and therefore the industry should go unregulated. We elect politicians to make decisions on our behalf, and they establish and direct regulators to make sure that powerful organisations play nicely. The AI industry and its cheerleaders cannot tell regulators and politicians (and by extension the rest of us) that our most powerful technology is something we are not smart enough to understand, and we should therefore leave the industry to do whatever it fancies.
It has been apparent for some years that AI was improving remarkably fast, and that the future foretold by science fiction was hurtling towards us, but until recently, most of us were not paying serious attention. I used to think that the arrival of self-driving cars would be the alarm clock that would wake people from their slumber; instead it was ChatGPT and GPT-4. The Bletchley Park summit has disabled the snooze button.