Not long ago, a Google engineer created a stir in the world of artificial intelligence by claiming that its flagship chatbot was sentient. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Blake Lemoine.
“I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Google thought that Lemoine was straying out of his lane, put him on paid leave and later sacked him. Google spokesperson Brian Gabriel commented: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims.”
The fact is that many people are quite anxious about the growing power of AI. If it became conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Are we creating intelligent beings which could suffer, or even demand workers’ compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine’s views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a “passive intellect”, the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory both belong to this realm of sense images.
Second, Aquinas says that an “agent intellect” uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs itself: it operates on the sense images to make judgments. A body of true judgments (that is, judgments corresponding to the real world) becomes “knowledge”.
Third, the will makes choices regarding the information presented to it by the agent intellect and it pursues goals in an actionable manner.
This leads to a working definition: consciousness is awareness of the cognitive and decision-making processes, including the steps involved in acquiring, evaluating and applying knowledge. A person is aware of their senses of sound, sight, smell and so on, aware of their feelings, their imaginings, their judgments, their knowledge and their choices. Consciousness can accompany any or all of these steps.
Can AI become conscious?
When we compare the different levels of the human cognitive and decision-making processes to artificial intelligence, it is easy to spot big differences.
External experience. Humans experience emotions together with the acquisition of sense knowledge; AI simply acquires data. This emotional component enriches human knowledge in a way that computers cannot.
Sense images and memories. AI excels at recall and data retrieval, far surpassing human capacity. In this area its superiority is beyond doubt.
Agent intellect. Humans actively direct their thoughts and abstract concepts from raw sense data. This process is self-directed and autonomous. AI, by contrast, merely reveals patterns in information; it is not self-directed. The pattern is the product of an algorithm programmed by a human, and AI activity begins only with a human prompt.
Choice and will. Humans make conscious decisions with goals in mind, while AI does not exhibit characteristics of personal choice or intentionality.
AI exhibits behaviors associated with intelligence—memory recall, summarization, pattern recognition, prediction capabilities—but it lacks the element of self-direction which is characteristic of humans.
AI does not generate its own thoughts; it merely responds to its programming and to whatever it is prompted with. Nor does AI experience emotions as it gathers data; the data is simply loaded into the machine.
Sometimes AI does seem to generate novel thoughts, but these depend on data it already possesses and are the product of learned patterns. Humans, by contrast, can reflect on their own thinking, which allows them to correct themselves without external prompts, and they can develop concepts that are not dependent on sense data.
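The point about learned patterns can be made concrete with a toy sketch in Python. The bigram generator below is vastly simpler than a real chatbot, and its corpus and function names are my own invention, but the principle is the same: output that looks novel is only a recombination of patterns already present in the training data, and nothing at all is produced until a human supplies a prompt.

```python
import random

# Toy "language model": record which word follows which in a tiny corpus,
# then generate text by replaying those learned pairs.
corpus = "the cat sat on the mat the dog sat on the rug".split()

model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)

def generate(prompt, length=6, seed=0):
    """Produce output only when prompted; every word pair comes from the corpus."""
    rng = random.Random(seed)
    words = [prompt]
    for _ in range(length - 1):
        options = model.get(words[-1])
        if not options:          # no learned continuation for this word: stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Whatever sentence this prints, every adjacent word pair in it already occurs in the training corpus; change the corpus and the apparent “novelty” changes with it.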
In short, AI merely simulates human cognitive and volitional activities. This means it is not conscious.
Final Thoughts
Proponents of AI consciousness often fail to define consciousness adequately before making claims of AI consciousness. From a Thomistic perspective, human consciousness is multifaceted, involving perception, intellect, will, and self-direction.
To my mind, the most significant difference lies in decision-making. AI does not make the personal decisions which are a clear mark of consciousness. Powerful as it is at processing data, AI does not exhibit the core attributes that define human consciousness.
When I ask an AI chatbot a question and it states that it has other things to do and will answer tomorrow, then I will revisit the question.