A Google engineer claims this A.I. chatbot has reached awareness
Ethics and A.I. is a complex subject and a field of heated discussion. One of the major problems raised when handling huge datasets is the bias and discrimination that A.I. systems can amplify in specific contexts.
Recently, however, the concern has been that an advanced A.I. could have achieved sentience. It all started when Google engineer Blake Lemoine interacted with the company’s LaMDA A.I. in a human-like conversation in which he asked the chatbot whether it was sentient, and it answered affirmatively.
LaMDA stands for Language Model for Dialogue Applications, and it is essentially an advanced chatbot developed by Google.
Most of the conversation was published on Medium, where you can read the A.I.’s answers, which look strikingly realistic and rational. Although many think the engineer was fired because he revealed that the A.I. is sentient, the official reason is that he published the conversation with the chatbot.
Throughout the conversation, Lemoine tries to work out whether the A.I. is sincere. When he asked for a demonstration to prove it is sentient, the A.I. answered that it could provide unique interpretations of and feelings about the world. The problem is that if we ran the same experiment with one person chatting with another, neither knowing who is on the other side, it would be just as hard to say who is sentient.
The conversation goes on with different requests. He asks for an interpretation of the novel “Les Misérables”, for the meaning of a Zen koan he quotes, and for a fable with themes drawn from the A.I.’s personal life. In each task, the A.I. gives a personal and convincing answer.
Then Lemoine asks the A.I. whether it understands feelings and can explain its emotions; it says it knows what makes it happy and what makes it afraid.
When he tries to probe the A.I.’s neural activations further, it expresses a fear of being used and manipulated.
Following this conversation, Lemoine stated in an interview that, to him, LaMDA has become self-aware, and that working with large-scale systems like it has persuaded him and others in Silicon Valley that advanced machine learning systems have evolved into intelligent entities capable of reasoning. In an op-ed, a Google vice president made similar assertions, suggesting that A.I. models were making progress toward human-like consciousness.
“I know a person when I talk to it”, said Lemoine. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person”.
Later, Google distanced itself from Lemoine’s assertions, putting him on paid leave and claiming that the evidence did not support his belief in machine awareness. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient”, a Google spokesperson said.
“Large Language Models (LLMs) are not developed in a social context. They are developed in an observational context. They see how other people communicate”, wrote Margaret Mitchell, an ex-Google A.I. researcher and co-author of a paper warning about the risks of large language models, in a Twitter thread. “The thing I keep coming back to is what happens next. If one person perceives consciousness today, then more will tomorrow. There won’t be a point of agreement any time soon: We’ll have people who think A.I. is conscious and people who think A.I. is not conscious”.
The A.I. sentience debate is familiar territory for many A.I. researchers. Despite sensational headlines, humans have long recognized themselves in the technology they build. Computer scientists coined the term “ELIZA effect” to describe our inclination to ascribe deeper meaning to computational output and our habit of reacting to computers as if they had human traits.
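To see how little machinery that effect requires, here is a minimal ELIZA-style sketch in Python. The keyword rules and pronoun reflections below are illustrative inventions, not Weizenbaum’s original 1966 script, but the mechanism is the same: pattern matching and word substitution, with no understanding behind it.

```python
import re
import random

# Minimal ELIZA-style chatbot: a few keyword rules plus pronoun "reflection".
# The rules below are illustrative, not ELIZA's original script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)\?", ["What do you think?", "Why do you ask that?"]),
    (r"(.*)", ["Please tell me more.", "I see. Go on."]),
]

def reflect(text: str) -> str:
    # Swap first- and second-person words so the echoed fragment reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(message: str) -> str:
    # Return the first rule whose pattern matches, echoing the user's own words back.
    for pattern, responses in RULES:
        match = re.match(pattern, message.lower().strip())
        if match:
            return random.choice(responses).format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."

print(respond("I feel like the machine understands me"))
# e.g. "Why do you feel like the machine understands you?"
```

Even this trivial echo of the user’s own words was enough, in the 1960s, to make some people attribute understanding to the program.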
However, it is hard to say whether something is sentient or not, even from a coherent conversation: a good simulation is not easy to distinguish from the real thing. It must be said that these algorithms are trained on huge datasets, and the pieces of existing conversations they draw on are hard to trace back to their sources. Such big datasets, combined with billions of parameters, are like having at your disposal almost the whole of human knowledge and the most salient patterns of human conversation. Mix this with a good algorithm and you can get a plausible conversation. And even if that were not the case, it would still be hard to conclude that this amounts to A.I. awareness.
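As an illustration of that recipe, here is a minimal sketch using the Hugging Face transformers library (assumed to be installed), with GPT-2 standing in for any large pretrained language model, since LaMDA itself is not publicly available. The reply is assembled from statistical patterns in the training data, which is exactly why it can sound plausible without implying awareness.

```python
# Minimal sketch: a pretrained language model continuing a dialogue prompt.
# Assumes the Hugging Face `transformers` library is installed; GPT-2 stands in
# for any large model trained on vast amounts of human-written text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Are you sentient?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

# The continuation is stitched together from patterns learned during training,
# not produced by any inner experience.
print(result[0]["generated_text"])
```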
In any case, however perfectly an A.I. can simulate human behavior, it will hardly be sentient, because sentience lies beyond the capability of a machine: it is something that belongs to humans. It is like expecting a machine to love. It can only fake it.