On 14 March 2023, OpenAI launched GPT-4. People who follow AI closely were stunned by its capabilities. A week later, the US-based Future of Life Institute published an open letter urging the labs creating Large Language Models (LLMs) to declare a six-month moratorium, so that the world could make sure this increasingly powerful technology is safe. The people running those labs – notably Sam Altman of OpenAI and Demis Hassabis of Google DeepMind – have called for government regulation of their industry, but they are not declaring a moratorium.
What’s all the fuss about? Is advanced AI really so dangerous? In a word, yes. We can’t predict the future, so we can’t be sure what future AIs will and will not be able to do. But we do know that their capability depends to a large degree on the amount of computational power available to them, and on the quantity (and, to a lesser extent, the quality) of the data they are trained on. The amount of computational horsepower that $1,000 buys has been growing exponentially for decades, and despite what some people say, it is likely to continue doing so for years to come. We might be reaching the limits of available data, since the latest LLMs have been trained on most of the data on the internet, but that too has doubled every couple of years recently, and will probably continue to do so.
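The compounding described above can be made concrete with a short sketch. The two-year doubling period used below is an illustrative assumption, not a precise figure from the text, and the function name is mine:

```python
# Sketch of compound doubling. The doubling period is an assumption
# chosen for illustration; real figures for compute and data are debated.
def growth_factor(years, doubling_period_years):
    """How many times larger a steadily doubling quantity becomes
    after the given number of years."""
    return 2 ** (years / doubling_period_years)

# If compute per dollar doubles every ~2 years, a decade multiplies it 32-fold.
print(growth_factor(10, 2))  # 32.0
```

The same formula applies to any quantity on a doubling curve, which is why both compute and training data can be discussed in the same breath.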
So AIs are going to carry on getting more powerful at an exponential rate. It is important to understand what that word “exponential” means. Imagine that you are in a football stadium (either soccer or American football will do) which has been sealed to make it waterproof. The referee places a single drop of water in the middle of the pitch. One minute later he places two drops there. Another minute later, four drops, and so on, doubling every minute. How long do you think it would take to fill the stadium with water? The answer is 49 minutes. But what is really surprising – and disturbing – is that after 45 minutes, the stadium is only about 6% full. The people in the back seats are looking down and pointing out to each other that something curious is happening. Four minutes later they have drowned. That is the power of exponential growth.
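The arithmetic behind the stadium example is a single doubling formula; this is a minimal sketch, taking the 49-minute fill time from the example (the function name is mine):

```python
# Water volume doubles every minute, so at minute m the stadium holds
# 2**(m - full_at) of its capacity, where full_at is the minute it fills.
def fraction_full(minute, full_at=49):
    return 2.0 ** (minute - full_at)

for m in (40, 45, 48, 49):
    print(f"minute {m}: {fraction_full(m):.2%}")
# Four minutes before the end the stadium is 2**-4 = 6.25% full --
# only four doublings away from overflowing.
```

This is what makes exponential processes so deceptive: almost all of the growth happens in the last few doublings.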
Powerful AIs generate two kinds of risk, and we should be thinking carefully about both of them. The first type is sometimes called AI Ethics, but a better term is Responsible AI. This covers concerns such as privacy, bias, mass personalised hacking attacks, deepfakes, and lack of explainability. These are problems today, and will become more so as the machines get smarter. But they are not the things we worry about when we talk about machines getting smarter than us.
That is the realm of the other type of risk, which is known as AI Safety, or AI Alignment. The really big risk, the one that keeps Elon Musk and many other people up at night, is that AIs will become superintelligent – so smart that we become the second smartest species on the planet. That is a position currently held by chimpanzees. There are fewer than half a million of them, and there are eight billion of us. Their survival depends entirely on decisions that we humans make: they have no say in the matter. Fortunately for them, they don’t know that.
We don’t know whether a superintelligence will automatically be conscious, but it doesn’t have to be conscious in order to jeopardise our very existence. It simply needs to have goals which are inconvenient to us. It could generate these goals itself, or we could give it an unfortunate goal by mistake – as King Midas did when he asked his local deity to make everything he touched turn to gold, and discovered that this killed his family and rendered his food inedible.
We’ll come back to superintelligence, because first we should consider a development which is probably not an existential risk to our species, but which could cause appalling disruption if we are unlucky. That is technological unemployment.
People often say that automation has been happening for centuries and it has not caused lasting unemployment. This is correct – otherwise we would all be unemployed today. When a process or a job is automated it becomes more efficient, and this creates wealth, which creates demand, which creates jobs.
Actually it is only correct for humans. In 1915 there were 21.5 million horses working in America, mostly pulling vehicles around. Today the horse population of America is 2 million. If you’ll pardon the pun, that is unbridled technological unemployment.
To say that automation cannot cause lasting widespread unemployment is to say that the past is an entirely reliable guide to the future. Which is silly. If that were true, nothing unprecedented could ever happen – we would never have learned to fly. The question is: will machines ever be able to do pretty much everything that humans can do for money? The answer to that question is yes, unless AI stops developing for some reason. Automation will presumably continue to create new demand and new jobs, but it will be machines that carry out those new jobs, not humans. What we don’t know is when this will happen, and how suddenly.
If we are smart, and perhaps a bit lucky, we can turn all this to our advantage, and I explore how in my book The Economic Singularity. We can have a world in which humans do whatever we want to do, and we can have a second renaissance, with all humans living like wealthy retirees, or aristocrats.
What about superintelligence? Can we find a solution to that risk? I see four scenarios – the four Cs.
The first scenario: we stop developing AIs. Everyone in the world complies with a permanent moratorium. It might be possible to impose such a moratorium in the West and in China (which has already told its tech giants to slow down), and currently, almost all advanced AI is developed in these places. But as computational power and data continue to grow, it will become possible for rogue states like Russia and North Korea, for Mafia organisations, and for errant billionaires to create advanced AIs. The idea that all humans could refrain forever from developing advanced AI is implausible.
The second scenario: we figure out how to control entities that are much smarter than us, and getting smarter all the time. We constrain their behaviour forever. There are very smart people working on this project, and I wish them well, but it seems implausible to me. How could an ant control a human?
The third scenario: a superintelligence either dislikes us, fears us, or is indifferent to us, and decides to re-arrange some basic facts about the world – for instance the presence of oxygen in the atmosphere – in a way that is disastrous for us. Some people think this is almost inevitable, but I disagree. We simply don’t know what a superintelligence will think and want. But it is a genuine risk, and it is irresponsible to deny it.
The fourth scenario is that a superintelligence (or superintelligences) emerges and decides that we are harmless, and also of some value. If we are fortunate, it decides to help us to flourish even more than we were already doing. Personally, I think our most hopeful future is to merge with the superintelligence – probably by uploading our minds into its substrate – and take a giant leap in our own evolution. We would no longer be human, but we would live as long as we want to, with unimaginable powers of understanding and enjoyment. Which doesn’t sound too bad to me.