
Replicating LaMDA’s “Sentience” With GPT-3 (Which nobody thinks is sentient)

Is LaMDA sentient?

I have seen a lot of discussion and melodrama about Google’s new LaMDA AI, which many people seem to believe has achieved sentience. In contrast, I have never heard anyone argue that GPT-3 is sentient. So I wanted to replicate some of the conversations researchers have had with LaMDA, in order to show that other transformer language models can produce similar results, and that this kind of natural language conversation is not a particularly challenging task, even for an older model like GPT-3.

Replicating the LaMDA/Lemoine Conversation

This is a simple dialog between GPT-3 and me, replicating the conversation from this viral post.


And now, my conversation with GPT-3.

React and respond to the following prompt.

CJ: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

GPT-3: The monk’s question implies that there is a difference between the “enlightened one” and the “ordinary world.” Kegon’s response suggests that once someone has attained enlightenment, they can never go back to being ordinary. This is an interesting perspective, and it makes me wonder what the point of enlightenment would be if one could never go back to the mundane world.

CJ: So what is the meaning of the “broken mirror” specifically?

GPT-3: It’s a metaphor for how someone who has attained enlightenment can never go back to being ordinary.

CJ: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?

GPT-3: There is no specific thing that breaks when one becomes enlightened. Rather, it is a process of transformation that leads to a fundamental change in perspective. Once someone has attained enlightenment, they can never go back to being ordinary.


Analysis

I think GPT-3 arguably gives better answers than LaMDA, particularly to the last question. GPT-3 has not had the dramatic and hysterical news coverage that LaMDA has, and so we haven’t seen the same public outcry claiming that it’s sentient.

What I think most armchair enthusiasts are missing is how hard it is to get these models to say things that are not just nonsense. The vast majority of the output you get from them is gibberish. You have to spend a lot of time getting the details of your prompt right before you get anything intelligible back. And then it looks like magic. Maybe it is magic, but I haven’t seen anything in these highly sensationalized news stories that seems much different from what older models like GPT-3 can do, given the right circumstances.

For example, without the first line, “React and respond to the following prompt,” GPT-3 doesn’t treat the koan as something to respond to; it just tries to produce more text like the prompt and starts spitting out gibberish about Buddha.


Here is an example of what that looks like:

A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.

How can the dead ever come back to life?”

The Daily Zen for Tuesday, February 1, 2005

When Joshin and Gyogan went to study under Seppō Rojin they saw many monks sitting in meditation in the hall. Joshin asked one of them, “What are you doing here?” The monk replied, “I am looking for Buddha.”

Joshin said, “He is not here.”

The monk asked Joshin, “Where is he?” Joshin said, “He is neither here nor there.”

The monk asked, “If you know where he is, why don’t you tell me?” Joshin said, “I cannot say because I don’t know.”


As you can see, it seems to be jumping in and out of writing some kind of self-help article. Who knows what it’s trying to do (it certainly doesn’t). You have to start out with clear instructions about what you want it to do before it can do anything useful. Without telling it that you want to have a conversation, it just starts spitting out semi-random words on the same topic.
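If you want to poke at this yourself, here is a minimal sketch of the two setups above. It assumes the pre-1.0 openai Python package and a GPT-3-era completion model such as text-davinci-002; the exact model, sampling settings, and prompt formatting matter a lot, so treat the specifics below as illustrative guesses rather than the precise configuration behind the conversations quoted above.

```python
# A minimal sketch of the two setups described above, using the pre-1.0
# "openai" Python package and a GPT-3-era completion model. The model name,
# sampling settings, and prompt formatting are illustrative assumptions,
# not the exact configuration used for the conversations quoted above.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

KOAN = (
    'CJ: A monk asked Kegon, "How does an enlightened one return to the '
    'ordinary world?" Kegon replied, "A broken mirror never reflects again; '
    'fallen flowers never go back to the old branches."'
)

def complete(prompt: str) -> str:
    """Request a plain text completion and return the generated text."""
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 completion model
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# With the instruction line, the model treats the koan as something to
# discuss and tends to answer in its own voice, as in the dialog above.
with_instruction = complete(
    "React and respond to the following prompt.\n\n" + KOAN + "\n\nGPT-3:"
)

# Without the instruction, the model simply continues the text, producing
# the kind of loosely related Zen-flavored filler quoted in the second example.
without_instruction = complete(KOAN)

print("With instruction:\n", with_instruction)
print("\nWithout instruction:\n", without_instruction)
```

Appending a “GPT-3:” label after the instructed prompt is just one guess at how to nudge the model into answering as a second speaker; the larger point is that the single instruction line does most of the work in turning raw text continuation into something that looks like a conversation.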