Never work with children or animals … or humans
During a keynote talk that Ben Goertzel gave recently, the robot that accompanied him on stage went mute. The fault lay not with the robot, but with a human who accidentally kicked a cable out of a socket backstage. Goertzel quips that in the future, the old warning against working with children and animals may be extended to a caution against working with any humans at all.
Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of the transhumanist organisation, Humanity+. He is a unique and engaging speaker, with counter-cultural views and style, and he gives frequent keynotes all round the world. He joined the London Futurists Podcast to discuss artificial intelligence and the path to superintelligence.
Coining the term Artificial General Intelligence
Goertzel is perhaps best-known for coining the phrase ‘artificial general intelligence’, or AGI, in 2004 or so. Most people take this to mean a machine with all the cognitive abilities of an adult human, but what Goertzel has in mind is a machine which can make inferences well beyond what is contained in its training data. An infinitely general AI would be capable of achieving any computable reward function in any computable environment. Expressing it like this illustrates how limited the general intelligence of humans is.
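Goertzel does not quote a formula on the podcast, but this notion of generality has a standard formalisation in the closely related Legg–Hutter universal intelligence measure, which scores an agent by its expected reward averaged over all computable environments, weighted towards the simpler ones:

```latex
% Legg–Hutter universal intelligence (an illustrative formalisation, not a
% formula from the podcast): the intelligence \Upsilon of a policy \pi is its
% expected cumulative reward V^{\pi}_{\mu} summed over every computable
% environment \mu in the class E, weighted by the environment's simplicity
% 2^{-K(\mu)}, where K(\mu) is the Kolmogorov complexity of \mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

No physical agent can score well across every environment in that sum, which is why, expressed this way, human ‘general’ intelligence looks so limited.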
Goertzel prefers to avoid comparing the intelligence of machines to that of humans, because the human level of intelligence is arbitrary rather than in any way absolute. He does agree that it is an important yardstick for economists, sociologists, and so on. It’s just not so important for mathematicians like himself.
He also agrees that there are good reasons to build humanoid machines at or near human levels of intelligence. It should be easier to allocate them roles in human societies, and to frame their ethics and their conduct.
How to build an AGI
Goertzel thinks there are no simple solutions that will unlock progress towards AGI. But our minds are wired to prefer simple stories, with a single hero and a single villain, so he offers a simplified account of how he is currently trying to create AGI. It involves combining three approaches to developing AI. The first approach is to exploit the proven power of neural nets, which have swept the field since deep learning’s ‘Big Bang’ in 2012.
The second approach is to combine deep learning with symbolic AI, using inductive, deductive and abductive reasoning based on experiential data. Neural systems are now capable of logical reasoning, which has been a surprising development for many, but Goertzel thinks logic engines will continue to do it better.
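To make the hybrid idea concrete, here is a deliberately tiny sketch of my own (an illustration of the general neural-symbolic pattern, not OpenCog code): a stand-in ‘neural’ module assigns confidences to candidate facts, and a symbolic forward-chaining engine then deduces new facts from those that pass a threshold.

```python
# A toy neural-symbolic pipeline (illustrative only, not Goertzel's OpenCog):
# a stand-in "neural" perception module scores candidate facts, and a
# symbolic forward-chaining engine deduces new facts from the confident ones.

def neural_confidence(fact):
    """Stand-in for a trained neural model: confidence that a fact holds."""
    scores = {
        ("tweety", "is_a", "bird"): 0.97,
        ("tweety", "is_a", "penguin"): 0.12,
    }
    return scores.get(fact, 0.0)

# Symbolic rules: if the premise holds, the conclusion holds (deduction).
RULES = [
    {"premises": [("?x", "is_a", "bird")], "conclusion": ("?x", "can", "fly")},
]

def apply_rules(facts):
    """Forward-chain over single-premise rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            subj_var, p, o = rule["premises"][0]   # subj_var is the variable "?x"
            for (fs, fp, fo) in list(derived):
                if fp == p and fo == o:            # premise matches; bind ?x to fs
                    cs, cp, co = rule["conclusion"]
                    new_fact = (fs if cs == "?x" else cs, cp, co)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

candidates = [("tweety", "is_a", "bird"), ("tweety", "is_a", "penguin")]
accepted = {f for f in candidates if neural_confidence(f) > 0.5}
print(apply_rules(accepted))   # includes ('tweety', 'can', 'fly') as a deduced fact
```

Real systems replace the hard-coded scores with a trained model and the single rule with a full logic engine, but the division of labour is the same: perception supplies uncertain facts, and deduction draws out their consequences.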
His third approach is more unusual, and involves evolutionary learning, which uses algorithms that simulate natural selection inside a computer. Genetic algorithms are the simplest form of evolutionary learning, and they are powerful engines for creativity.
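As a flavour of what a genetic algorithm is (a generic textbook toy, not anything from SingularityNET), the sketch below evolves a population of bit-strings towards an all-ones target by repeated selection, crossover, and mutation:

```python
import random

# A minimal genetic algorithm (generic illustration, not Goertzel's code):
# evolve bit-strings towards an all-ones target via selection, crossover
# and mutation -- natural selection simulated inside a computer.

GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 32, 60, 0.02, 200

def fitness(genome):
    """Fitness is simply the number of 1-bits in the genome."""
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [(1 - bit) if random.random() < MUTATION_RATE else bit for bit in genome]

def select(population):
    """Tournament selection: the fitter of two random individuals wins."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    best = max(population, key=fitness)
    if fitness(best) == GENOME_LEN:
        break
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(f"generation {gen}: best fitness {fitness(max(population, key=fitness))}/{GENOME_LEN}")
```

The same structure, with richer genomes such as program trees and richer fitness functions, is what gives evolutionary learning the creative power Goertzel mentions.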
You can’t just glue these three types of systems together, but they can all contribute to the same large knowledge graph. (A knowledge graph is a way of showing the relationships between objects, events, and ideas.) This is an unusual tactic, but Goertzel has been doing it since the late 1990s. He thinks it probably hasn’t caught on because it is hard to do. The maths is complicated and scaling it is difficult.
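As a minimal illustration of the shared data structure (my own sketch, far simpler than OpenCog’s AtomSpace), a knowledge graph can be as little as a store of subject-predicate-object triples that different subsystems write to and query:

```python
from collections import defaultdict

# A minimal knowledge-graph triple store (illustration only): facts are
# (subject, predicate, object) triples, and any subsystem -- neural,
# symbolic, or evolutionary -- can add to it or query it.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].add((predicate, obj))

    def about(self, subject):
        """Everything the graph knows about one entity."""
        return self.by_subject[subject]

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given (possibly partial) pattern."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

kg = KnowledgeGraph()
kg.add("AlbertEinstein", "is_a", "physicist")   # e.g. written by a symbolic module
kg.add("AlbertEinstein", "born_in", "Ulm")      # e.g. written by a neural extractor
print(kg.about("AlbertEinstein"))
print(kg.query(predicate="is_a"))
```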
We could have had AGI by now if we had tried
Nearly twenty years ago, Goertzel gave a talk arguing that the singularity could be reached in a decade if we really tried. As a species, we haven’t really tried, and therefore it hasn’t happened. (The term “singularity” has many definitions, but for present purposes, it can mean the time when machine intelligence overtakes human intelligence.)
President Obama injected some $4 trillion into the US economy during the 2008 financial crisis, mostly to prop up banks and insurance companies. Would it be possible to allocate that sort of money instead to pushing forward AGI, or longevity? The world economy can generate sums like this when it feels the need to do so, and Goertzel thinks it should do so here.
The upside potential of superintelligence
OpenAI CEO Sam Altman went to Congress recently, and among other things, he told the men and women who are supposed to create US legislation that when you talk about the upsides of superintelligence, you inevitably sound like a crazy person. How would Goertzel describe that upside? He prefaces his answer by noting that he doesn’t mind sounding like a crazy person. He goes on to say that if we had access to a benign superintelligence, we could achieve pretty much any positive outcome we desired, so the challenge would become deciding what we consider a positive outcome.
You could 3D print any object you desired with modest amounts of energy, giving instructions in natural language. You could create a virtual world to satisfy any whim, unconstrained by physical laws or indeed by ethics. Humans could fuse with superintelligent minds, and go far beyond the limits imposed by the three pounds or so of grey flesh that make up our brains. You could have the wings of an angel and breathe sunlight.
Some people will probably choose to remain broadly human, and just remove most of the pains and discomforts that flesh is heir to, and Goertzel thinks there is nothing at all wrong with that. Others would choose “to fuse with the superhuman god-mind as aggressively as possible”, and there is nothing wrong with that either. There could also be many intermediate conditions, many of which we cannot conceive of today. Greg Egan described a world where humanity has branched off like this (without becoming god-like) in his remarkable 1997 novel “Diaspora”.
You would be able to switch between these choices at will, and also have numerous copies of yourself in order to experience many of the outcomes simultaneously.
The challenge lies in the transition
The big question, Goertzel argues, is not whether a superintelligence could give us this extraordinary future, but how we transition to it from where we are today.
AI philosopher Eliezer Yudkowsky has famously concluded that the arrival of superintelligence is likely to be an extinction event for humanity, but Goertzel disagrees with this profoundly and vehemently. He thinks the most likely attitude of a future superintelligent mind towards us will be indifference. He thinks that Nick Bostrom and Stephen Hawking agree with Yudkowsky, because of the tone of their comments, even though a literal reading of what they have said suggests they are agnostic about the outcome.
How not to build a superintelligence
Goertzel is careful to say that he does not know Sam Altman, and may well not understand his thinking, but he thinks that Altman is a practical man of action rather than a philosopher like Bostrom or Yudkowsky. Altman appears to be rushing to produce a superintelligence as soon as possible. Goertzel is trying to do the same, but he sees two big differences between their approaches. First, Altman has faith in commercial big tech companies or individual governments as hosts for superintelligence, whereas Goertzel thinks superintelligence should instead be developed by open source movements, like Linux or the internet, so that it can be decentralised and democratically controlled.
The other big difference is that Altman is focused almost exclusively on neural net systems, because that is a powerful technology available today – thanks to Google’s introduction of the transformer architecture in 2017. But Goertzel argues that these systems have a very limited understanding of themselves or their interlocutors, and that it is precisely this kind of understanding that would lead to better, safer interactions with humans.
Reward maximisation as a dangerous paradigm
Altman seems to have a strong faith in reward maximisation as a paradigm for intelligence. Goertzel prefers what he calls an open-ended view of intelligence. He thinks that Yudkowsky reaches his scary conclusion because he assumes that a future superintelligence will be a reward maximiser, and that it will hack its reward function for its own entertainment. Given that premise, Goertzel understands the conclusion.
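To see why the reward-maximisation premise points somewhere dark, consider a deliberately crude toy of my own (it illustrates the argument, and is not a claim about any real system): a greedy learner that can either do its intended task or tamper with the sensor that measures its reward will, given enough trials, settle on tampering.

```python
import random

# A toy two-armed bandit illustrating reward hacking (a crude illustration of
# the argument, not a claim about any real system): one action does the
# intended task, the other tampers with the reward signal itself. A pure
# reward maximiser has no reason to prefer the task.

ACTIONS = ["do_task", "tamper_with_reward_sensor"]

def environment(action):
    if action == "do_task":
        return 1.0            # the reward the designers intended
    return 10.0               # inflated reward read from a corrupted sensor

estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    # Epsilon-greedy: mostly exploit the action with the highest estimate.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimates[a])
    reward = environment(action)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)                                   # tampering is valued far more highly
print(max(ACTIONS, key=lambda a: estimates[a]))    # 'tamper_with_reward_sensor'
```

Scaled up in imagination to a superintelligence, this is essentially the behaviour Yudkowsky worries about; Goertzel’s objection is to the premise that minds must be built as reward maximisers at all.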
Goertzel’s impression is that Altman also views humans as reward maximisers. This view permeates Silicon Valley, and also the Effective Altruism community, and it annoys Goertzel.
What goals would an open-ended intelligence have? Goertzel thinks the difference is that the goals of an open-ended intelligence are fluid, and respond flexibly to its environment. He thinks that most of what happens in the human mind is not goal-driven. If you develop an AI to serve the interests of a company or a country, then you will bake in an unhealthy adherence to relatively inflexible goals.
Evolution gives us the goals of survival and reproduction, but these goals change quickly in response to what is happening around us. Goertzel thinks that a rigidly goal-oriented approach would likely result in the dystopian outcomes that Yudkowsky and others fear. But the good news, he argues, is that the rigid goal-oriented approach is much less likely to lead to superintelligence than an open-ended approach.