First published in Forbes on 8 October 2025

A test for consciousness, not intelligence

This month is the 75th anniversary of the Turing Test, which Alan Turing introduced to the world in his paper, “Computing Machinery and Intelligence”, published in the October 1950 issue of the journal “Mind”. Today, the Turing Test is often derided as an inadequate test for intelligence, which machines have already passed without getting anywhere near human-level intelligence. I will argue that, on the contrary, the Turing Test could soon become more important than ever, because it is best thought of as a test for consciousness, not for intelligence, and we are sorely in need of tests for artificial consciousness.

This interpretation of the Turing Test is not the consensus view among cognitive scientists and philosophers, although it has been mooted before. Daniel Dennett thought that a machine which passed the Turing Test would be conscious as well as intelligent, and John Searle described it as an inadequate test for both.

Thinking about a parlour game

Turing himself was ambiguous about what his test was for, talking about “thinking” rather than about intelligence or consciousness. The word “intelligence” appears in the title of his paper, but only twice after that. The word “consciousness” appears seven times, while “think” and its cognates “thinking” and “thought” appear no fewer than 67 times.

Turing starts the paper by saying,

“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.”

But instead of providing those definitions, he asks whether a machine could fool an observer into believing that it is human, basing his test on a parlour game of his own invention. In this game, a human examiner attempts to discern the gender of a person whom they can neither see nor hear, but with whom they can communicate via written notes or through a third party. Adapting the game as a test for a machine, Turing argues that if, in the course of a five-minute conversation, a machine can persuade a human examiner that it is thinking, then we must accept that it is thinking.

Intelligence

Intelligence and consciousness are both hard to define. They are obviously related, and they seem to be correlated to some degree – but they are very different. To simplify, intelligence is about solving problems. There are many types of intelligence. In 1983 the American psychologist Howard Gardner listed seven of them, including linguistic, logical-mathematical, spatial, and interpersonal intelligence; he later expanded the list to nine. Surprisingly, for such a complex, nuanced, and mysterious concept, there is a pretty good four-word definition. It was proposed in 1985 by another American psychologist, Robert Sternberg: “goal-directed, adaptive behaviour”.

We have plenty of tests for intelligence. None of them satisfies everybody, because it is fiendishly hard to separate the signal of raw intelligence from the noise of environmental factors. The first widely used test for intelligence was developed in France in 1905 by Alfred Binet and Théodore Simon. Seven years later, the German psychologist William Stern coined the term IQ, or intelligence quotient. IQ tests are often criticised for favouring subjects who share the same language or culture as the developers of the tests.
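
The “quotient” was originally literal arithmetic: Stern defined it as a child’s mental age divided by their chronological age, and Lewis Terman later multiplied the ratio by 100 for the Stanford-Binet test. On that scale, a ten-year-old who performs like a typical twelve-year-old scores 12 ÷ 10 × 100 = 120.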

Today, of course, we have formidably capable artificial intelligence (AI). The community of people developing AI systems uses numerous benchmark tests to determine which model is currently the most intelligent, and these benchmarks are continually made harder as the models improve.

So we have many tests for intelligence, and although none of them is perfect, they do the job acceptably well for many purposes. Our tests for consciousness are far less satisfactory.

Consciousness

Consciousness is not about solving problems, but about experiencing. Sense impressions such as the colour red, sounds, tastes, smells, and the texture of touch can only be experienced by conscious entities. So too with private thoughts; with emotions like contentment, joy, anxiety, and despair; and with affective sensory experiences like pain and bodily pleasure.

Consciousness is not only hard to measure; it is hard even to detect. It is wholly private and subjective: we cannot share our consciousness with each other, and we cannot directly observe any consciousness except our own. And yet it is the only thing we truly know. Our brains are prisoners inside boxes made of bone, with no direct access to the outside world. The images in your mind are generated by your brain, which filters and interprets the electrical signals transmitted to it by your sense organs. Every perception that you have is indirect, and the only direct experience you have is of your own consciousness. As René Descartes observed in 1637, you can doubt absolutely everything except the fact that you are doubting.

So how do we test for consciousness in other beings, including each other? We look for behaviour which mirrors our own – behaviour which seems to be motivated by an inner life. We look for signs of pleasure and pain, delight and revulsion, emotions and perceptions that are absent in clocks and cars.

Dogs chase sticks because it is fun, not because they are pursuing food, shelter, or sex. Play can be good training for skills that are needed to survive, but it is also an end in its own right. Chimps get angry and jealous, and they can also be protective. These sophisticated, nuanced behaviours suggest an inner life.

Artificial consciousness

Many computer scientists and cognitive scientists believe that in the coming years or decades, an AI may be developed that is sentient, in the sense that it has conscious experiences, and positive and negative feelings about those experiences. If and when this happens, it will be momentous. Not only will we learn a great deal about our own consciousness, but we will have created entities – at first one, and shortly afterwards probably millions – which feel pleasure and pain, and which deserve rights. We will have created moral patients.

It is vital that we do not do this without noticing that we have done it. Imagine creating millions of conscious entities, and failing to notice, or care, that they are happy or sad about what we do to them. Imagine failing to notice or care that they are horrified when we switch them off. This could be the worst crime that humanity ever commits.

The Turing Test is one of the few mechanisms we have for testing consciousness. Not the simple five-minute test that Turing proposed, but a version run over days by a diverse panel of humans, including experts in computer science, psychology, and philosophy. If a machine convinces such a panel that it is conscious, then we will have to accept that it is. The test is imperfect, but we may need it soon.
