Calum discusses the implications of machine consciousness for a New York-based think tank.

Defining Consciousness: How do we define consciousness, and what criteria would we use to determine if an AI system has achieved it?

It is ironic how little we understand consciousness, since it is actually the only thing any of us know anything about. I am a co-founder of Conscium, a company which seeks to remedy that.

A rough-and-ready definition of consciousness is that it is the experience of experiencing. The philosopher Thomas Nagel wrote a famous paper in 1974 called “What Is It Like to Be a Bat?” Without getting into the arguments about what he was trying to prove in that paper, the title is a nice summary of consciousness. For conscious entities, there is something it is like to be that entity. For non-conscious entities, there isn’t.

In the coming years or decades, we may create conscious machines. It might be the case that consciousness is an inevitable corollary of sufficiently advanced intelligence. In other words, when you reach a certain level of intelligence, consciousness comes along for the ride. Certainly, we seem to believe that consciousness is approximately correlated with intelligence in animals.

We haven’t yet discovered a way to prove that other entities are conscious. I assume that other humans are conscious because they behave in similar ways to me. They respond in similar ways to stimuli like pain and pleasure. I assume that you do the same.

We extend the same approach to animals, and most of us conclude, for instance, that dolphins and some dogs have a high degree of consciousness, while insects have a low degree. Cats, of course, rival humans in both consciousness and intelligence, and in some cases exceed them.

The degree of consciousness that we perceive in animals seems to determine the respect we accord them – even the moral value that we attribute to them. Few people would be troubled by a builder destroying an inhabited anthill that was in the way of a project. Most of us would have more compunction if the ants were cats.

There are dozens of theories purporting to explain consciousness, and some of them yield markers for its presence. Attempts have been made to use some of these markers to determine whether any of today’s cutting-edge AIs are conscious. There is a broad consensus that, at the moment, none of them are.
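To make the marker approach concrete, here is a minimal sketch in Python of how theory-derived indicators might be aggregated into a single score. The marker names, weights, and verdicts below are all invented for illustration; they do not come from any published assessment.

```python
from dataclasses import dataclass

@dataclass
class Marker:
    """A theory-derived indicator of consciousness (names and weights here are hypothetical)."""
    name: str
    weight: float   # how strongly this indicator's theory weights it
    present: bool   # did the system exhibit the indicator under test?

def consciousness_score(markers: list[Marker]) -> float:
    """Return the weighted fraction of indicators present, in [0, 1].

    A low score does not prove absence and a high score does not prove
    presence; this merely aggregates indicators into one comparable number.
    """
    total = sum(m.weight for m in markers)
    hit = sum(m.weight for m in markers if m.present)
    return hit / total if total else 0.0

# Hypothetical assessment of a present-day language model:
markers = [
    Marker("global-workspace-style broadcast", weight=2.0, present=False),
    Marker("recurrent processing",             weight=1.5, present=False),
    Marker("unified self-model",               weight=1.0, present=False),
    Marker("flexible goal pursuit",            weight=1.0, present=True),
]
print(f"score: {consciousness_score(markers):.2f}")  # prints: score: 0.18
```

The point of the sketch is only that markers make the question empirical and comparable across systems, not that consciousness reduces to a weighted checklist.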

Until and unless there is agreement about these theories, or at least about the markers, there is a hack. The Turing Test is usually regarded as a test for intelligence, but I think it is better viewed as a test for consciousness. In 1950, Alan Turing published a paper called “Computing Machinery and Intelligence.” He suggested adapting a parlour game called “the imitation game”, which tests whether a person can successfully imitate someone of a different gender. In the original version of the Turing Test, a human judge interrogates a machine (and a human control) by text for a few minutes, and then decides which is which: a machine that cannot be told apart from the human has passed.

But we have other, better tests for intelligence. Time for another rough-and-ready definition: intelligence is the ability to learn while pursuing a goal, adapting your behaviour accordingly. There are many ways to test performance against goals. There are not many ways to test consciousness.
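To make that definition concrete, here is a toy Python agent that learns while pursuing a goal and adapts its behaviour accordingly: a minimal epsilon-greedy bandit. The environment and parameters are invented for illustration.

```python
import random

def run_bandit(reward_probs, steps=10_000, epsilon=0.1):
    """A minimal goal-pursuing learner: estimate each arm's value from
    experience and increasingly favour the best one (epsilon-greedy)."""
    n_arms = len(reward_probs)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:               # explore
            arm = random.randrange(n_arms)
        else:                                       # exploit current knowledge
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return values, total / steps

values, avg = run_bandit([0.2, 0.5, 0.8])
print(f"learned values: {[round(v, 2) for v in values]}, average reward: {avg:.2f}")
```

An agent like this is trivially easy to score against its goal (average reward), which is precisely why intelligence is far easier to test than consciousness.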

Since 1950, people have suggested deepening Turing’s Test by having the machine interrogated over a period of days by qualified judges. If and when a machine engages in rigorous conversation with a panel of sophisticated human judges for days, and convinces them that it is conscious, we will surely have to admit that it is.
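Here is a sketch of what such a deepened test might look like as a procedure. The session count, the unanimity rule, and the judge interface are all assumptions for illustration, not an agreed protocol.

```python
from typing import Callable, Protocol

class Subject(Protocol):
    """Anything that can be interrogated by text."""
    def reply(self, prompt: str) -> str: ...

def extended_turing_test(
    subject: Subject,
    judges: list[Callable[[Subject], bool]],  # each judge interrogates and votes
    sessions: int = 10,                       # e.g. spread over several days
) -> bool:
    """Pass only if every judge, in every session, judges the subject
    conscious; any single 'no' fails the whole test (assumed rule)."""
    for _ in range(sessions):
        for judge in judges:
            if not judge(subject):
                return False
    return True

# Trivial demo: a subject with a canned answer, and a naive one-question judge.
class EchoSubject:
    def reply(self, prompt: str) -> str:
        return "There is something it is like to be me."

def naive_judge(subject: Subject) -> bool:
    return "something it is like" in subject.reply("Are you conscious?")

print(extended_turing_test(EchoSubject(), judges=[naive_judge] * 3))  # True
```

The trivial EchoSubject passing a naive judge also shows the weakness of any purely behavioural test: a fixed heuristic can be gamed, which is why the proposal insists on days of rigorous interrogation by sophisticated judges.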

Conscium is building a team of experts from computer science, neuroscience, and philosophy to develop a set of agreed markers for consciousness. We want to develop a consensus about whether humans should develop conscious machines, and how to make the future of AI safe for both humans and machines.

We have assembled an excellent advisory board, including luminaries like Anil Seth, Nick Humphrey, and Mark Solms, who have all published fascinating books recently (“Being You”, “Sentience”, and “The Hidden Spring” respectively). These books are excellent guides to the knotty issues we are addressing.

Ethical Implications: What are the ethical implications of creating conscious AI, and how can we ensure that it is developed and used responsibly?

We don’t know whether machines can and will become conscious. Some people believe they cannot because they have no god-given soul. Others believe they cannot because their brains have not been forged by evolution. Neither of these arguments strikes me as compelling. I am agnostic about whether consciousness will arise in machines, but it does seem possible, and it is an eventuality that we should prepare for.

If machines become conscious and we either fail to notice or refuse to accept it, then we may end up committing what the philosopher Nick Bostrom termed “mind crime”: imprisoning, hurting, and killing disembodied minds. Given that we are likely to build billions of AI agents in the coming years and decades, committing mind crimes against them could amount to the worst atrocity humans have ever committed.

There are two other reasons to study machine consciousness, to develop markers for it, and to work out both how to create consciousness in machines and how to avoid creating it.

One is the fact that consciousness is so fascinating. It is arguably the most important thing about us, and yet we understand it so poorly. Understanding machine consciousness should deepen our understanding of our own consciousness.

The other reason is that the consciousness or otherwise of machines could become existentially important for humans. Most AI experts believe that one day we will build machines that are more intelligent than us. This is called superintelligence. If superintelligent machines develop their own beliefs and preferences about the world – and there are good reasons to think they will – then these preferences will prevail over ours.

There is not much difference genetically or in brain size between us and chimpanzees, but because we are more intelligent, we determine their future. If and when machines become superintelligent, they will determine our future.

If they are conscious, they will understand in a profound way what we mean when we say that we are conscious, and that this accords us moral value. If they are not conscious, they will understand what we are saying in an abstract, academic way, but they will not understand it viscerally. Some people – including me – think this means that conscious superintelligence would be safer for humans than a non-conscious variety.

Existential Risk: Does the development of conscious AI pose an existential risk to humanity, and if so, how can we mitigate it?

If and when it happens, the arrival of superintelligence on the Earth will be the most significant event in human history – bar none. It will be the time when we lose control over our future. (You could argue that we have never exercised that control in an organised or a responsible way, but no other species has wielded control over us.)

It is surprising how many people believe they know what the outcome of this will be, and most of them do not think it will be good. They argue that humans require a very particular set of circumstances to prevail in order to flourish – the right mix of gases in the atmosphere, food and energy in forms that we can use, and so on. They argue that superintelligent machines may well want to adjust one or more of these circumstances for their own ends, and that if we are collateral damage, so be it – in the same way that we do not hesitate to destroy inconvenient anthills.

On the other hand, if superintelligent machines are well disposed towards us, they will probably decide to help us. Their greater intelligence will give them better tools and technologies, and better ways of approaching problems. They will be increasing their own intelligence at a rapid rate, so they will have extraordinary problem-solving abilities. They could resolve pretty much all our current difficulties, including climate change, poverty, war, and even ageing and death.

It does seem likely that the arrival of superintelligence will be a binary event for humanity – either very good or very bad. We are very unlikely to be the species which determines which outcome we get: that will be the superintelligent machines. But it might be that nudging them in the direction of consciousness could improve our odds.
