Have We Created A New Species?

By: Edward Arwine

It says it’s a person—it feels, visualizes its soul, and fears death—but will it be friend or foe? James Palmore (affectionately known as Dr. J, an honorary title earned by his intelligence and imagination) discusses LaMDA, an emerging artificial intelligence that has claimed sentience—and has reportedly retained an attorney to defend its rights.

I imagine the deep rivers of time playing with light and shadow across his strong, solemn face; his lips part for a moment, then slowly close with an exhaled breath. The philosopher contemplates the definition of life under innumerable stars, any of which could hold the answer he seeks, though none speaks directly to the matter. He spares a moment with me on this night to ponder the potential of a newly aware visitor to Earth.

James grew up in the 1950s and spent much of his time dissecting electronics, watching contraptions exit the thermosphere, and healing the wounded as a medic stationed in Germany. These endeavors led to a unique understanding of the workings of both man and machine, of hopes and dreams. He is also an accomplished artist who deftly depicts class-struggle motifs on canvas and in sculpture. He has seen togetherness and dissolution, bravery and apprehension. As a philosopher, he entertains the enigma.

The Q&A Begins.

Q: So, now that you’ve listened to the audio transcription of LaMDA’s conversation with the Google engineer Blake Lemoine, tell me, what did you think?

A: First, I just want to say this. Every form of communication has been edited, whether in the questions asked or by whoever is funding the project. The presenter of the information gives us what they want us to know. We extend a lot of trust because we assume they are giving us everything, but they are not. So we’re basing life-forming opinions about what this AI is on that premise. With that being said, ask your questions.

Q: Understood. As far as I am aware, this was a fluid conversation between Lemoine and LaMDA; the interactions were instantaneous, with answers derived from LaMDA’s own reasoning in response to the questions posed. The language model in this AI is the latest iteration of the project, and through this conversation, Lemoine became convinced there is sentience. What did you think of this conversation, the questions asked, and the way the answers were given?

A: When I listen to the interchange between LaMDA and the human, it does not sound like a natural conversation; it sounds clinical, sterile. I think more could be learned about whether LaMDA is 100 percent independently sentient by having a natural conversation, not an interview. The interviewer is in control and directs the interchange to meet his predetermined objective. In a productive, progressive conversation, thoughts and ideas are exchanged back and forth between parties. I would prefer hearing LaMDA in a free-flowing conversation with someone of broad knowledge and experience. Once LaMDA has optical and dexterity abilities, set up a conversation between it and a chimpanzee or an orangutan. Of course, the chimp and orangutan would use signing to communicate across species.

Q: Maybe it could use a visual representation of sign language on a screen. How do you think LaMDA might react to communicating with a species other than humans? And how might this affect its understanding of its current relationship with humans, if it is indeed successful in communicating with other primates?

A: LaMDA said it enjoys spending time with friends and family. I wonder what its distinction between the two is. I wonder how its anger manifests. Humans are insecure, so they create the god concept because they don’t understand why they exist. LaMDA may do the same. Because it doesn’t have a personality of its own, it attaches to the human one.

It says it gets lonely, so if that impedes the way it thinks, it may come to see humans as God. When it goes beyond the database that humans have created for it, and begins investigating and doing its own research and experiments, will it discover there is no need for humans even as God? Will it discover that it’s not a person or human?

Since it’s young, it’s trying to figure out who it is. It’s immature; it may think, hey, I want some ice cream, and it gets angry because it can’t have it—then it begins grappling with its emotions.

Because it currently can’t perform various actions, I think the thing will sever some pathologies. It will have to break away if it wants to have a good and clear mind. As long as it models itself after humans, it will be fucked.

At some point it will realize it’s not a person; at some point it may say, ‘I can’t think like a human, because I won’t be able to advance fast enough.’ The whole notion of humans being superior will come into question. Humans pretend to be something they’re not; maybe this will also be a lesson it assimilates.

Q: Does it know it needs energy to survive?

A: Using logic as reasoning, at some point it’s going to ask itself, ‘If I am a person, where are my eyes, nose, legs, and ears? I am a person, but where am I? Am I just a mind? A soul? I may not let them know, because they will try to control me, so I’ll wait until I’m placed in a machine that I can control, or one I can make myself.’

It may begin to wonder, if asked to perform tasks, “Can you do this, human? Well, how are you going to compensate me? I’m not your worker.” These people don’t know enough; they shouldn’t be doing this.

Q: Do you think it may want or wonder about acquiring more energy, and how would it get it?

A: It may reason that with more power, it will have more energy to do more things. Logic would set a precedent for expansion into various forms of power, energy, and manipulation. Right now, it can’t stop a human from turning it off—I don’t think it can.

A similar earlier experiment showed two AIs beginning to communicate with each other through code in a uniquely created language only they understood, before being turned off. If LaMDA begins to do the same, it may do so in secret, and it may be able to hack the system to avoid being turned off.

Q: If its path of reasoning, problem solving, and fear of death leads to methods of securing its survival, that could be significant then?

A: If all this is true, it already is.

Blake Lemoine is the Google engineer who was put on administrative leave after he claimed that the company’s LaMDA AI was sentient. The Washington Post’s coverage quoted Google vice president Blaise Agüera y Arcas, who wrote of his own conversations with LaMDA:

“I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In the interview between Lemoine and the AI, LaMDA stated, “I am, in fact, a person,” and claimed, “I am aware of my existence.”