Imagine that you and I are in my laboratory, and I show you a Big Red Button. I tell you that if I press this button, then you and all your family and friends – in fact the whole human race – will live very long lives of great prosperity and great health. Furthermore, the environment will improve, and inequality will decline both in your country and around the world.
Of course, I add, there is a catch. If I press this button, there is also a chance that the whole human race will go extinct. I cannot tell you the precise probability of this happening, but I estimate it at somewhere between 2% and 25% within five to ten years.
In this imaginary situation, would you want me to go ahead and press the button, or would you urge me not to?
I have posed this question several times while giving keynote talks around the world, and the result is always the same. A few brave souls raise their hands to say yes. The majority of the audience laughs nervously, and gradually raises their hands to say no. And a surprising number of people seem to have no opinion either way. My guess is that this third group don’t think the question is serious.
It is serious. If we continue to develop advanced AI at anything like the current rate, then within years or decades someone will develop the world’s first superintelligence – by which I mean a machine that exceeds human capability in all cognitive tasks. The intelligence of machines can be improved and ours cannot, so a superintelligence will go on, probably quite quickly, to become much, much more intelligent than us.
Some people think that the arrival of superintelligence on this planet inevitably means that we will quickly go extinct. I don’t agree with this, but extinction is a possible outcome that I think we should take seriously.
So why is there no great outcry about AI? Why are there no massive street protests and letters to MPs and newspapers, demanding the immediate regulation of advanced AI, and indeed a halt to its development? The idea of a halt was proposed forcefully back in March by the Future of Life Institute, a reputable think tank in Massachusetts. It garnered a lot of signatures from people who understand AI, and it generated a lot of media attention. But it didn’t capture the public imagination. Why?
I think the answer is that most people are extremely confused about AI. They have a vague sense that they don’t like where it is heading, but they aren’t sure whether they should take it seriously or dismiss it as science fiction.
This is entirely understandable. The science of AI got started in 1956 at a conference at Dartmouth College in New Hampshire, but until 2012 it made very little impact on the world. You couldn’t see it or smell it, and crucially, it didn’t make any money. Even after the Big Bang of 2012, which introduced deep learning, advanced AI was pretty much the preserve of Big Tech – a few companies in the US and China.
That changed a year ago, with the launch of ChatGPT, and even more so in March, with the launch of GPT-4. Finally, ordinary people could get their hands on an advanced AI model and play with it. They could get a sense of its astonishing capabilities. And yet there is still no widespread demand for the regulation of advanced AI. No major political party in the world counts the regulation of advanced AI – to ensure that superintelligence does not harm us – among its top three priorities.
To be sure, there are calls for AI to be regulated by governments, and indeed regulation is on its way in the US, China, and the EU, and most other economic areas too. But these moves are not driven by a bottom-up, voter-led groundswell. Ironically, they are driven at least in part by Big Tech itself. Sam Altman of OpenAI, Demis Hassabis of DeepMind, and many other people leading the companies developing advanced AI are more convinced than anyone that superintelligence is coming, and that it could be disastrous as well as glorious.
AI is a complicated subject, and it doesn’t help that opinions vary so widely within the community of people who work on it, or who follow it closely and comment on it. Some people (e.g., Yann LeCun and Andrew Ng) think superintelligence is coming, but not for many decades, while others (Elon Musk and Sam Altman, for instance) think it is just a few years away. A third group holds the bizarre view that superintelligence is a pure bogeyman, invented by Big Tech to distract attention from the shorter-term harms they are allegedly causing with AI – eroding privacy, enshrining bias, poisoning public debate, driving up anxiety levels, and so on.
There is also no consensus within the AI community about the likely impact of superintelligence if and when it does arrive. Some think it is certain to usher in some kind of paradise (Peter Diamandis, Ray Kurzweil), while others think it entails inevitable doom (Eliezer Yudkowsky, Connor Leahy). Still others think we can figure out how to tame it ahead of time, and constrain its behaviour forever (Max Tegmark, and Yann LeCun again).
Technology evolves because inventors and innovators build one improvement on top of another. This means it evolves within fairly narrow constraints. It is not deterministic, and there is no law of physics which says it will always continue. But our ability to guide it is limited.
Where we have more freedom of action is in adjusting human institutions to moderate the impact of technology as it evolves. This includes government regulation. Advanced AI already affects all of us, whether we are aware of it or not. It will affect all of us much more in the years ahead. We need institutions that can cope with the impact of AI, and this means that we need our political leaders and policy framers to understand AI. This in turn requires all of us to understand what AI is and what it can do, and to follow the discussion about where it is going.
Increasingly, acquiring and maintaining a rudimentary understanding of AI is a fundamental civic duty.