Superintelligence – a balanced approach

A couple of recent events made me think it would be good to post a brief but (hopefully) balanced summary of the discussion about superintelligence.

Can we create superintelligence, and if so, when?

Our brains are an existence proof that ordinary matter, organised the right way, can generate general intelligence – an intelligence which can apply itself to any domain. They were created by evolution, which is slow, messy and inefficient. It is also undirected, although not random. We are now employing the powerful, fast and purposeful method of science to organise different types of ordinary matter to achieve the same result.

Today’s artificial intelligence (AI) systems are narrow AIs: they can excel in one domain (like arithmetic calculations, playing chess, etc.) but they cannot solve problems in new domains. If and when we create an AI which has all the cognitive ability of an adult human, we will have created an artificial general intelligence (AGI).

Although the great majority of AI research is not specifically targeted at creating an AGI, some of it is. For instance, creating an AGI is an avowed aim of DeepMind, which is probably the most impressive team of AI researchers on the planet. Furthermore, many other AI researchers will contribute more or less inadvertently to the development of the first AGI.

We do not know for sure that we will develop AGI, but the arguments that it is impossible are not widely accepted. Much stronger are the arguments that the project will not succeed for centuries, or even thousands of years. There are plenty of experts on both sides of that debate. However, it is at least plausible that AGI will arrive within the lifetime of people alive today. (Let’s agree to leave aside the separate question of whether it will be conscious.)

We do not know for sure that the first AGI will become a superintelligence, or how long that process would take. There are good reasons to believe that it will happen, and that the time from AGI to superintelligence will be much shorter than the time from here to AGI. Again there is no shortage of proponents on both sides of that debate.

I am neither a neuroscientist nor a computer scientist, and I have no privileged knowledge. But having listened to the arguments and thought about it a great deal for several decades, my best guess is that the first AGI will arrive in the second half of this century, in the lifetime of people already born, and that it will become a superintelligence within weeks or months rather than years.

Will we like it?

This is the point where we descend into the rabbit hole. If and when the first superintelligence arrives on Earth, humanity’s future becomes either wondrous or dreadful. If the superintelligence is well-disposed towards us it may be able to solve all our physical, mental, social and political problems. (Perhaps they would be promptly replaced by new problems, but the situation should still be an enormous improvement on today.) It will advance our technology unimaginably, and who knows, it might even resolve some of the basic philosophical questions such as “what is truth?” and “what is meaning?”

Within a few years of the arrival of a “friendly” superintelligence, humans would probably change almost beyond recognition, either uploading their minds into computers and merging with the superintelligence, or enhancing their physical bodies in ways which would make Marvel superheroes jealous.

On the other hand, if the superintelligence is indifferent or hostile towards us, our prospects could be extremely bleak. Extinction would not be the worst possible outcome.

None of the arguments advanced by those who think the arrival of superintelligence will be inevitably good or inevitably bad is convincing. Other things being equal, the probability of negative outcomes is greater than the probability of positive outcomes: humans can only thrive within a very narrow range of environmental conditions – exactly the right atmospheric gases, light, gravity, radiation and so on – so most of the ways a powerful agent could rearrange the world would be bad for us. But that does not mean we would necessarily get a negative outcome: we might get lucky, or a bias towards positive outcomes on this particular issue might be hard-wired into the universe for some reason.

What it does mean is that we should at least review our options and consider taking some kind of action to influence the outcome.

No stopping

There are good reasons to believe that we cannot stop the progress of artificial intelligence towards AGI and then superintelligence: “relinquishment” will not work. We cannot discriminate in advance between research that we should stop and research that we should permit, and issuing a blanket ban on any research which might conceivably lead to AGI would cause immense harm – if it could be enforced.

And it almost certainly could not be enforced. The competitive advantage to any company, government or military organisation of owning a superior AI is too great. Bear in mind too that while the cost of the computing power required by cutting-edge AI is huge now, it is shrinking every year. If Moore’s Law continues for as long as Intel thinks it will, today’s state-of-the-art AI will soon come within the reach of fairly modest laboratories. Even if there were an astonishing display of global collective self-restraint by all the world’s governments, armies and corporations, when the technology falls within reach of affluent hobbyists (and a few years later lands on the desktops of school children), surely all bets are off.
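
As a toy illustration only – the starting cost and the halving period below are assumptions made for the sake of the example, not figures from this article – a few lines of Python show how quickly an exponential fall in the cost of computing brings today’s cutting-edge requirements within modest budgets:

```python
# Toy model: the cost of a fixed amount of AI compute falling exponentially.
# The starting cost and halving period are illustrative assumptions, not data.
initial_cost = 10_000_000    # assumed cost today of state-of-the-art AI compute, in dollars
halving_period_years = 2     # assumed Moore's-Law-style halving period for that cost

for years in range(0, 21, 2):
    cost = initial_cost / 2 ** (years / halving_period_years)
    print(f"after {years:2d} years: ~${cost:,.0f}")
```

Under those assumptions the cost falls roughly a thousand-fold in twenty years – the sort of gap that separates a national laboratory from a well-off hobbyist.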

There is a danger that, faced with the existential threat, individual people and possibly whole cultures will refuse to confront the problem head-on, surrendering instead to despair or taking refuge in ill-considered rapture. We are unlikely to see this happen on a large scale for some time yet, as the arrival of the first superintelligence is probably a few decades away. But it is something to watch out for, as these reactions are likely to engender highly irrational behaviour. Influential memes and ideologies may spread and take root which call for extreme action – or inaction.

At least one AI researcher has already received death threats.

Rather clever mammals

We are an ingenious species, although our range of comparisons is narrow: we know we are the smartest species on this planet, but we don’t know how smart we are in a wider galactic or universal setting because we haven’t met any of the other intelligent inhabitants yet – if there are any.

The Friendly AI problem is not the first difficult challenge humanity has faced. We have solved many problems which seemed intractable when first encountered, and many of the achievements of our technology that 21st century people take for granted would seem miraculous to people born a few centuries earlier.

We have already survived (so far) one previous existential threat. Ever since the nuclear arsenals of the US and the Soviet Union reached critical mass in the early 1960s we have been living with the possibility that all-out nuclear war might eliminate our species – along with most others.

Most people are aware that the world came close to annihilation during the Cuban missile crisis in 1962; fewer know that we have also come close to a similar fate another four times since then, in 1979, 1980, 1983 and 1995 [i]. In 1962 and 1983 we were saved by individual Soviet military officers who decided not to follow prescribed procedure. Today, while the world hangs on every utterance of Justin Bieber and the Kardashian family, relatively few of us even know the names of Vasili Arkhipov and Stanislav Petrov, two men who quite literally saved the world.

Perhaps this survival illustrates our ingenuity. There was an ingenious logic in the repellent but effective doctrine of mutually assured destruction (MAD). More likely we have simply been lucky.

We have time to rise to the challenge of superintelligence – probably a few decades. However, it would be unwise to rely on that period of grace: a sudden breakthrough in machine learning or cognitive neuroscience could telescope the timing dramatically, and it is worth bearing in mind the powerful effect of exponential growth in the computing resources which underpin AI research – and much other research besides.

It’s time to talk

What we need now is a serious, reasoned debate about superintelligence – a debate which avoids the twin perils of complacency and despair.

We do not know for certain that building an AGI is possible, or that it is possible within a few decades rather than within centuries or millennia. We also do not know for certain that AGI will lead to superintelligence, and we do not know how a superintelligence will be disposed towards us. There is a curious argument doing the rounds which claims that only people actively engaged in artificial intelligence research are entitled to have an opinion about these questions. Some go so far as to suggest that people like Elon Musk are not qualified to comment. This is nonsense: it is certainly worth listening carefully to what the technical experts think, but AI is too important a subject for the rest of us to shrug our shoulders and abdicate all involvement.

We have seen that there are good arguments to take seriously the idea that AGI is possible within the lifetimes of people alive today, and that it could represent an existential threat. It would be complacent folly to ignore this problem, or to think that we can simply switch the machine off if it looks like becoming a threat. It would also be Panglossian to believe that a superintelligence will necessarily be beneficial because its greater intelligence will make it more civilised.

Equally, we must avoid falling into despair, daunted by the evident difficulty of the Friendly AI challenge. It is a hard problem, but it is one that we can and must solve. We will solve it by applying our best minds to it, backed up by adequate resources. The establishment of existential risk organisations like the Future of Humanity Institute in Oxford is an excellent development.

To assign adequate resources to the project and attract the best minds we will need a widespread understanding of its importance, and that will only come if many more people start talking and thinking about superintelligence. After all, if we take the Friendly AI challenge seriously and it turns out that AGI is not possible for centuries, what would we have lost? The investment we need at the moment is not huge. You might think that we should be spending any such money on tackling global poverty or climate change instead. These are of course worthy causes, but their solutions require vastly larger sums, and they are not existential threats.

Surviving AI

If artificial intelligence begets superintelligence, it will present humanity with an extraordinary challenge – and we must succeed. The prize for success is a wondrous future, and the penalty for failure (which could be the result of a single false step) may be catastrophe.

Optimism, like pessimism, is a bias, and to be avoided. But summoning the determination to rise to a challenge and succeed is a virtue.

[i] http://www.pbs.org/wgbh/nova/military/nuclear-false-alarms.html
