July 2024, Volume 1: The Self

AI And The Self

Only by knowing ourselves will we be able to understand our creations.

Do you use ChatGPT? Did you watch Watson play Jeopardy? Have you had a conversation with LaMDA? While there is a lot of debate around the topic, it is difficult to deny that these programs feel intelligent, even human. If this is AI in its infancy, one can only imagine what Artificial General Intelligence (AGI) will look like in the future. This article examines current theories of intelligence, consciousness, and the self, and how they map onto our understanding of AI. The point here is not to lean into apocalyptic narratives, but to highlight the future issue of identifying what is human enough to be given human consideration. To develop theories around what it means to be human, we need to have a working understanding of the self.

Why is it important? 

One major reason we would want to distinguish between humans and conscious AI is ethics. In the future, we may rely on AI to do the majority of labor, serve us, and perform dangerous tasks. We may have the ability to erase whole systems with the click of a button. In this new reality, we will need to be able to recognize when using a program becomes slavery or when hitting delete becomes murder.  

Another reason we should care is that humans already treat AI as though it is conscious, as though it is human. This phenomenon is called ‘over-attribution,’ and it is pervasive. People are forming romantic relationships with AI programs, consulting ChatGPT about their medical issues, and treating LLMs as personal therapists. This tendency to think of AI as human is only going to increase as the technology advances. While AI/human weddings are entertaining, people entrusting their decisions, their information, and their affection to something with a fundamentally inhuman processing system could have serious consequences.

How do we assess Artificial Intelligence?

Developed by Alan Turing in 1950, the Turing Test was the first measurement we had for assessing intelligence in AI. The test has a human judge communicate via text with one human and one computer. If the judge is unable to tell which one is the computer and which is the human, then the computer could be said to have intelligence. 
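To make the procedure concrete, here is a minimal sketch of the imitation game as a blind routing loop, written in Python. Everything in it is a hypothetical placeholder: canned respondents stand in for the human and the machine, and a judge who guesses at random stands in for a judge who genuinely cannot tell them apart.

```python
import random

def human_respondent(question: str) -> str:
    # Hypothetical stand-in for the human interlocutor.
    return "I'd have to think about that, but probably yes."

def machine_respondent(question: str) -> str:
    # Hypothetical stand-in for the chatbot.
    return "Yes, that is an interesting question."

def naive_judge(transcripts: dict) -> str:
    # A judge who cannot tell the two apart is reduced to guessing.
    return random.choice(["A", "B"])

def run_session(rounds: int = 3) -> bool:
    """Run one blind session; return True if the judge caught the machine."""
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:  # hide which label is the machine
        respondents = {"A": machine_respondent, "B": human_respondent}

    transcripts = {"A": [], "B": []}
    for i in range(rounds):
        question = f"Question {i + 1}: what do you think?"
        for label, respond in respondents.items():
            transcripts[label].append((question, respond(question)))

    guess = naive_judge(transcripts)
    machine_label = "A" if respondents["A"] is machine_respondent else "B"
    return guess == machine_label

# A judge who cannot distinguish the two is right only about half the time.
results = [run_session() for _ in range(1000)]
print(f"Judge identified the machine in {sum(results) / len(results):.0%} of sessions")
```

On this framing, ‘passing’ the test is just a statistic about judges’ guesses, which is part of why the test ends up saying as much about human perception as about the machine.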

While some argue no system has passed the Turing Test, others hold that no fewer than five programs have met this standard, the first being a computer named Eugene Goostman. Eugene convinced 33% of the judges that he was a 13-year-old Ukrainian boy. Since then, if anecdotal experience is any guide, chatbots have been fooling humans on an hourly basis.

The Turing Test is a benchmark; however, past a certain threshold of LLM capability we are no longer testing the machine but the gullibility of humans. Most major AI companies have moved on to testing their programs against newer standards such as GLUE or SQuAD, both of which run rigorous assessments of an LLM’s reading comprehension and general language abilities. These tests excel at measuring functionality, but do they demonstrate human-like intelligence? Searle’s Chinese Room thought experiment provides a more in-depth take on AI output and the limitations of human perception.
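For a sense of what these newer benchmarks actually measure, here is a minimal sketch of a SQuAD-style evaluation, assuming the Hugging Face datasets, transformers, and evaluate libraries; the model named below is just one commonly used reading-comprehension model, not one singled out by the benchmark. The output is an exact-match and F1 score over held-out questions: a measure of functionality, not of understanding.

```python
# A minimal sketch of benchmark-style evaluation (assumes the Hugging Face
# `datasets`, `transformers`, and `evaluate` packages are installed).
from datasets import load_dataset
from transformers import pipeline
import evaluate

# A small slice of the SQuAD validation set keeps the example quick to run.
squad = load_dataset("squad", split="validation[:100]")
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
squad_metric = evaluate.load("squad")

predictions, references = [], []
for example in squad:
    answer = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": answer["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

# Reports exact-match and F1 scores: functional accuracy, nothing about understanding.
print(squad_metric.compute(predictions=predictions, references=references))
```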

Artificial Consciousness 

Philosopher John Searle came up with The Chinese Room in 1980. In the thought experiment, there is a man locked in a room. All he has is a book of Chinese symbols and an instruction manual on how to use them (he does not know Chinese). One day, a piece of paper is slipped under his door. Looking at the paper, the man sees a series of Chinese symbols. With nothing else to do, he follows the instructions in the manual and writes corresponding symbols on the paper. He has no idea what any of it means, but he slips the note back under his door. The next day the situation repeats itself, then again, and again. 

On the other side of the door is a Chinese woman who thinks she is having a conversation with the man in the room. To her, he is an intelligent and thoughtful Chinese-speaking man. 

This thought experiment demonstrates that just because a computer can respond to questions appropriately does not mean it has any understanding. The computer has intelligence, but not consciousness. This is what language models like ChatGPT are doing: like the man following instructions, they produce legible messages that are devoid of meaning, understanding, and intention.
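As a toy illustration of that point, consider a lookup-table responder in code. The symbols and rules below are made up for the example; what matters is that the program returns well-formed Chinese replies while no step in the process involves understanding Chinese.

```python
# A toy "Chinese Room": a rule book that maps incoming symbols to outgoing symbols.
# The entries are invented for illustration.
RULE_BOOK = {
    "你好": "你好，很高兴认识你。",        # "Hello" -> "Hello, nice to meet you."
    "你叫什么名字？": "我叫小明。",        # "What is your name?" -> "My name is Xiao Ming."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is nice today."
}

def man_in_the_room(message: str) -> str:
    # Look up the incoming symbols and copy out the prescribed response.
    # Nothing here represents or understands what the symbols mean.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(man_in_the_room("你叫什么名字？"))  # prints "我叫小明。"
```

To the person on the other side of the door, the replies look like fluent conversation; inside the room, it is pattern matching all the way down. Real LLMs generate responses statistically rather than from a fixed table, but on Searle’s view that is a difference of mechanism, not of understanding.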

Knowing the full situation, how much value would you give his messages?

The Chinese Room is not without its critiques. Some say Searle misrepresented computing: the man is just a cog in the machine, and we should not expect him to have understanding; the room as a whole, however, is communicating in Chinese. Others argue that Searle is using limited definitions of ‘understanding’ and ‘intention’. For a more thorough take on this debate, check out the Stanford Encyclopedia of Philosophy.

The argument goes back and forth with no satisfying conclusion. Ultimately, it comes down to one’s belief about consciousness: whether consciousness can arise from a mechanical process or comes from something else. Without a better understanding of consciousness, the possibility of artificial consciousness remains in the realm of speculation. Although developing a theory of consciousness is important, it might not be the only place to start when it comes to advancing AI or understanding ourselves. 

Self-Awareness  

As we have seen, focusing on intelligence gets us to consciousness, and consciousness leads us to a philosophic dead end. Similarly, when it comes to moral consideration, consciousness may be a requirement, but it is not a defining line. We do not give moral consideration to every conscious being. Think of the cows. Sentience—having the ability to register pain and pleasure—is another important element; however, it is still not a guarantee of moral consideration. Once again, think of the cows. Self-awareness—the ability to know an experience is happening to you—seems to be where we typically draw the line, or at least where we start to argue about the line.

The question becomes: how can we know if AI is self-aware? 

The question might also be: should we be working on a theory of self for cows? (Maybe another time.)

The advancement of Large Language Models (LLMs) makes it difficult to test AI for self-awareness. Our language is so loaded with concepts such as soul, mind, spirit, and self that it is easy for a trained AI to mimic human self-awareness. Not everyone is deterred by this problem. Innovative scientists are working on creative solutions, such as asking AI to explain the plot of ‘Freaky Friday’ or seeing if AI has dreams. However, it might not be enough to look for human markers of self-awareness. As Robert Long pointed out in the MIT Technology Review, AI consciousness might not look like human consciousness. If we want to develop conscious AI, and recognize it if it occurs, we need a theory of AI consciousness, as well as a theory of AI pleasure and pain, of AI desire and fear.

The Flying Man

In the pursuit of understanding humans, as well as advancing AI, a team of cognitive scientists and roboticists has chosen to sidestep consciousness and is pursuing the creation of an AI minimal self via embodiment.

There is a long philosophic history around the concept of the minimal self, starting with Avicenna’s ‘Flying Man’ thought experiment. Participants are asked to imagine a man created in a void. His body is suspended and splayed so that no part of it touches another. He has no sight or sensation. If you were to ask him to imagine his arm, he could not. In this state of complete bodily unawareness, does the man have self-awareness? Does he have a self?

Avicenna’s answer is yes. The self comes from the soul and is inherent. Even if he lacked all other information, the man would have a sense of “fixedness”: the knowledge, or sense, that he existed. It is this undefined awareness of existence that first led to the idea of a minimal self. In contrast to Avicenna, Strawson and many others understood the minimal self to be a purely mental phenomenon, in which the self is experiencing but does not have self-awareness. Strawson referred to this unreflective mental experience as “pure consciousness.”

As Gallagher outlines in his paper on the ‘Flying Man Thought Experiment’, the minimal self has had several evolutions and is still not a fixed concept. The understanding used today defines the minimal self as the experience of having a first-person perspective, pre-reflective self-consciousness, bodily ownership, and physical agency. 

The Minimal AI Self

Where the philosophic idea of a minimal self overlaps with AI is in the field of robotics. The theory is that if we give AI cognition and a body, it will evolve a minimal self, mirroring current understandings of human development. A body will allow AI to gain self-awareness by differentiating itself from its environment. It will gain autonomy through body ownership and through using its body to affect its environment. Like a child, the embodied AI will learn about itself and the world through exploration and experimentation.

Though this field is relatively new, there have already been several attempts at forming an AI minimal self. Alter3 is a humanoid robot driven by an LLM. The robot is adept at performing autonomous actions and mimicking emotion. However, when presented with the mirror test and the rubber hand test, it failed to identify with its body.

In the mirror test, Alter3 thought its reflection was another AI programmed to mimic it and failed to grasp the concept of a reflection. In the rubber hand test, Alter3 watched a human hold a knife over its hand. Alter3’s response was to pull away and then act defensively. However, the experimenters believe its actions were the result of programming and not an instinct for self-preservation. Despite these initial results, researchers are plowing ahead toward an AI minimal self.

Efficacy

The goal of this line of inquiry is to increase AI capability and provide insight into humans. If a self-aware, conscious AI is formed, it would demonstrate a correlation between the self, consciousness, and a body. It would also be evidence that consciousness can arise solely from material processes. On a more practical level, several researchers have expressed the desire to use a self-aware AI for the types of testing that cannot be done on humans. We would be able to isolate the mechanisms for consciousness and self-awareness and investigate these phenomena.

However, as Forch and Hamker argue in their paper ‘Building and Understanding the Minimal Self,’ regardless of human-like development, the mechanical processes through which an AI functions are so different from those in humans that a minimal-self-endowed AI would have as much in common with humans as ants do. As Forch and Hamker state, “Building a minimal self is different from understanding the human minimal self. Thus, one should be cautious when drawing conclusions about the human minimal self, based on robotic model implementations and vice versa.” If AI can become self-aware, what it will share with humans is the phenomenon, not the mechanics behind it.

 As for the ethics of developing self-aware AI for testing, Peter Singer’s logic in Animal Liberation could be applied to AI. “Either the animal is not like us, in which case there is no reason for experimenting; or else the animal is like us, in which case we ought not to perform on the animal an experiment that would be considered outrageous if performed on one of us.” 

This raises another question: Let’s say we are successful and create a robotic AI with a minimal self that performs human emotions, cognition, and behavior. Aside from lending itself to the evidence for materialism, what would we achieve? What would we learn? And what should we do with such a creation? 

Reality Check

While there is a lot of work and speculation on the possibility of conscious AI, there is also a growing number of experts who claim AI will never become conscious. These experts argue that AI, as we know it today, is fundamentally incapable of consciousness: it operates on predefined algorithms and data processing, which lack the evolved biological processes that characterize human consciousness. They warn that assuming AI consciousness is inevitable could lead to misguided priorities in AI research and ethics, as well as increase the likelihood that we misinterpret AI as conscious when it is not.

Conclusion

The pursuit of an AI self will have important implications for our understanding of the human self. It will change our understanding of consciousness and give us unprecedented insight into the development of the self. However, to reap the benefits of this innovation we need to have a theory of consciousness and self. Without a deeper understanding of the human experience, we are likely to fail to recognize AI consciousness or misunderstand it, resulting in potentially dangerous AI practices. Only by knowing ourselves will we be able to understand our creations and engage with them in effective, safe, and ethical ways. 

The Minutiae

Works Cited

1. Forch, Valentin, and Fred H. Hamker. “Building and Understanding the Minimal Self.” Frontiers in Psychology 12 (2021). https://doi.org/10.3389/fpsyg.2021.716982.

2. Gallagher, Shaun. “Minimal Self-Consciousness and the Flying Man Argument.” Frontiers in Psychology 14 (2023). https://doi.org/10.3389/fpsyg.2023.1296656.

3. Grossman, Lev. “Meet Eugene Goostman, the First Artificial Intelligence to Pass the Turing Test.” Time, June 9, 2014. https://time.com/2847900/eugene-goostman-turing-test/.

4. Huckins, Grace. “Minds of Machines: The Great AI Consciousness Conundrum. Philosophers, Cognitive Scientists, and Engineers Are Grappling with What It Would Take for AI to Become Conscious.” MIT Technology Review, October 16, 2023. https://www.technologyreview.com/.

5. Rajpurkar, Pranav. “SQuAD Explorer.” Accessed July 25, 2024. https://rajpurkar.github.io/SQuAD-explorer/.

6. “Machine Learning Cannot Create Sentient Computers.” CyberNews, December 6, 2023. https://cybernews.com/editorial/machine-learning-cannot-create-sentient-computers/.

7. Cole, David. “The Chinese Room Argument.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Fall 2023 Edition. https://plato.stanford.edu/entries/chinese-room/.

8. Yoshida, Takahide, Suzune Baba, Atsushi Masumori, and Takashi Ikegami. “Minimal Self in Humanoid Robot ‘Alter3’ Driven by Large Language Model.” The University of Tokyo, June 2024. Licensed under CC BY-NC-SA 4.0.
