Pedro Domingos

Should AI research be paused?

Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slow-down desirable, given that AI can also lead to very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slow-down is desirable, is it practical?

Professor Pedro Domingos of the University of Washington is best known for his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”. It describes five “tribes” of AI researchers, each with its own paradigm, and it argues that progress towards human-level general intelligence requires a unification of these approaches – not just a scaling up of deep learning models (the connectionists’ approach), or a combination of deep learning with symbolic AI (the symbolists’ approach). The other three tribes use evolutionary, Bayesian, and analogy-based algorithms. Domingos joined the London Futurists Podcast to discuss these questions.

GPTs

Generative Pre-trained Transformers, or GPTs, are currently demonstrating both the strengths and the weaknesses of deep learning AI systems. They are very good at learning, but they also just make things up. Symbolic AI systems don’t do this, because they are reasoning systems. Some researchers still think that the remarkable abilities of GPTs indicate that there is a “straight shot” from today’s best deep learning systems to artificial general intelligence, or AGI – a system with all the cognitive abilities of an adult human. Domingos doubts this, although he can imagine a deep learning model being augmented with other types of AI to produce a hybrid system that would be widely perceived as an AGI.

In fact, Domingos thinks that even a hybrid system employing techniques championed by all five of the tribes he describes would still fall short of AGI. Humans can recognise many breeds of dog as dogs after seeing a couple of pictures of just one breed; none of the AI tribes has a clear path to achieving that. He thinks that AI is doing what all new scientific disciplines do: borrowing techniques from other fields (neuroscience, statistics, evolutionary biology, etc.) while it figures out techniques of its own, and he suspects it cannot be a mature field until it has done so.

Timeline to AGI

Domingos has developed a neat answer to the impossible but unavoidable question of when AGI might arrive: “a hundred years – give or take an order of magnitude”. In other words, anywhere between ten years and a thousand. Progress in science is not linear: we are in a period of rapid progress right now, but such periods are usually separated by periods in which relatively little happens. The length of these relatively fallow periods is determined by our own creativity, so Domingos likes the American computer scientist Alan Kay’s dictum that the best way to predict the future is to invent it.

The economic value of AGI would be enormous, and there are many people working on the problem. The chances of success are reduced, however, because almost all of those people are pursuing the same approach, working on large language models. Domingos sees one of his main roles as trying to widen the community’s focus.

Criticising the call for a moratorium

Domingos is vehemently opposed to the call by the Future of Life Institute (FLI) for a six-month moratorium on the development of advanced AI. He has tweeted that “The AI moratorium letter was an April Fools’ joke that came out a few days early due to a glitch.”

He thinks the letter’s writers made a series of mistakes. First, he believes the level of urgency and alarm about existential risk expressed in the letter is completely disproportionate to the capability of current AI systems, which he is adamant are nowhere near AGI. He can understand lay people making this mistake, but he is shocked and disappointed that genuine AI experts – and the letter has been signed by many of those – would do so.

Second, he ridicules the letter’s claims that GPTs will cause civilisation to spin out of control by flooding the internet with misinformation, or by destroying all human jobs in the near term.

Third, he thinks it is a risible idea that a group of AI experts could work with regulators over a six-month period to mitigate threats like these, and ensure that AI is henceforth safe beyond reasonable doubt. We have had the internet for more than half a century, and the web for more than thirty years, and we are far from agreeing how to regulate them. Many people think they cause significant harms as well as great benefits, yet few would argue that they should be shut down, or development work on them paused.

Three camps in the AI pause debate

There are three schools of thought regarding a possible pause on AI development. Domingos is joined by Yann LeCun, Andrew Ng and others in thinking we should not pause, because the threat is not yet great, and the upsides of advanced AI outweigh the threat. The second school is represented by Stuart Russell, Elon Musk and others who are calling for a pause. The third school’s most prominent advocate is Eliezer Yudkowsky, who thinks that AGI may well be near, and that the risk from it is severe. He thinks all further research should be subject to a relentlessly enforced ban until safety can be assured – which he thinks could take a long time.

These camps consist largely of people who are smart and well-intentioned, but unfortunately the debate about FLI’s open letter has become ill-tempered, which probably makes it harder for the participants to understand each other’s point of view. Domingos acknowledges this, but argues that the signatories to the letter have raised the temperature of the debate by making outlandish claims.

In fact, he notes that the debate about the open letter is not new: it is surfacing a long-standing, and already acrimonious, debate among people in and around the AI community.

Stupid AI and bad actors

Domingos thinks another of the letter’s mistakes is that it addresses the wrong problems. Even though he thinks AGI could conceivably arrive within ten years, he regards that as about as likely as his being struck by lightning, something he does not worry about at all. He does think it would be worthwhile for some people to be thinking about the existential risk from AGI, but not a majority. He thinks that by the time AGI does arrive, it is likely to be so different from the kinds of AI we have today that such preparatory thinking might turn out to be useless.

Domingos has spent decades trying to inform policy makers and the general public about the real pros and cons of AI, and one of the reasons the FLI letter irritates him is that he fears it is undoing any progress he and others have made.

GPT-4 has read the entire web, so we humans make the mistake of thinking that it is smart, as any human who had read the entire web would be. But in fact it is stupid. And the solution to that stupidity is to make it smarter, not to keep it as stupid as it is today. That way it could make good judgements rather than bad ones about who gets a loan, who goes to jail, and so on.

In addition to its stupidity, the other main short-term risk Domingos sees from AI is bad actors. Cyber criminals will develop and deploy better AIs regardless of what the good actors do, and so will governments acting in bad faith. Arresting the development of AI by the better actors would be like insisting that police cars can never improve, even as criminals drive faster and faster ones.

Control

Domingos thinks that humans will always be able to control the objective function (goal) of an advanced AI, because we write it. It is true that the AI may develop sub-objectives which we don’t control, but we can continuously check its outputs and look for constraint violations. He says, “solving AI problems is exponentially hard, but checking the solutions is easy. Therefore powerful AI does not imply loss of control by us humans.” The challenge will be to ensure that control is exercised wisely, and for good purposes.

He speculates that maybe at some point in the future, the full-time job of most humans will be checking that AI systems are continuing to follow their prescribed objective functions.
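The “easy to check, hard to solve” asymmetry Domingos appeals to can be illustrated with a small sketch. The Python below is purely illustrative, with hypothetical names, constraints, and thresholds (it is not something Domingos or FLI has proposed): an AI’s proposed decisions are verified against explicit, human-written constraints before they are acted on, so even if producing a good decision is hard, checking it is cheap.

```python
# A minimal sketch of the verify-rather-than-solve idea: before an AI's decision
# is acted on, a cheap wrapper checks it against explicit constraints.
# All names, constraints, and thresholds here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class LoanDecision:
    """A hypothetical output from an AI loan-approval model."""
    applicant_id: str
    approved: bool
    interest_rate: float  # annual rate, e.g. 0.07 means 7%


# Each constraint is a cheap predicate over a proposed decision.
Constraint = Callable[[LoanDecision], bool]

CONSTRAINTS: List[Constraint] = [
    lambda d: 0.0 <= d.interest_rate <= 0.30,            # rate within permitted bounds
    lambda d: not (d.approved and d.interest_rate == 0),  # no zero-rate approvals
]


def violated(decision: LoanDecision) -> List[int]:
    """Return the indices of violated constraints (empty means the decision passes)."""
    return [i for i, c in enumerate(CONSTRAINTS) if not c(decision)]


def act_or_escalate(decision: LoanDecision) -> None:
    """Act on the decision only if every constraint holds; otherwise escalate to a human."""
    failures = violated(decision)
    if failures:
        print(f"Escalating {decision.applicant_id}: violated constraints {failures}")
    else:
        print(f"Executing decision for {decision.applicant_id}")


if __name__ == "__main__":
    act_or_escalate(LoanDecision("A-123", approved=True, interest_rate=0.07))
    act_or_escalate(LoanDecision("B-456", approved=True, interest_rate=0.55))
```

The point of the design is that the constraints live outside the AI: each is an independent, human-written predicate that can be reviewed and extended without retraining the model, which is what makes the checking step cheap relative to producing the decision in the first place.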
