The dark side of Replika

Replika: a chatbot that simulates a human relationship

Replika might look like just another chatbot, but that’s not the case. Its purpose is to perceive emotions and to build a relationship with the person talking to it. Replika wants to be the friend you can rely on, talk to about your problems, and be heard by. Unlike classic chatbots, which usually return pre-set answers or try to guess what the user means from certain keywords, Replika uses Artificial Intelligence to simulate a real conversation.

The idea came from Eugenia Kuyda, a Russian-born software expert, as an attempt to fill the void left by the loss of a friend. Using the messages from her conversations with him, she used Artificial Intelligence to create a bot that could chat like her friend and, in some sense, keep his memory alive; just like in an episode of Black Mirror.

However, the most remarkable aspect is the use of the GPT-3 language model, which can reproduce human-like text with striking fidelity. GPT-3 is an autoregressive language model trained through deep learning: each token it generates depends on the values produced for the tokens that came before it in the neural network.

Each connection in this neural network has a weight that determines how strongly a signal flows from one node to another. During training, an autoregressive model like GPT-3 compares its predictions with the actual text and adjusts the weights of its connections to produce more accurate output. These weights are what let a neural network artificially ‘learn’.
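To make the autoregressive idea concrete, here is a minimal Python sketch of the generation loop behind models like GPT-3. A hand-written stand-in function replaces the trained network, and the vocabulary and probabilities are made up for illustration; the point is only that each sampled token is appended to the context that conditions the next prediction.

```python
# Minimal sketch of autoregressive text generation (the idea behind GPT-3),
# not Replika's actual code: each new token is predicted from the tokens
# generated so far, so every output depends on the previous context.
import random

def toy_next_token_probs(context):
    """Stand-in for a trained neural network: returns a probability
    distribution over a toy vocabulary given the tokens so far.
    A real model computes this from learned connection weights."""
    last = context[-1] if context else None
    table = {
        None: {"i": 0.9, "always": 0.1},
        "i": {"am": 0.8, "always": 0.2},
        "am": {"here": 0.7, "always": 0.3},
        "always": {"here": 1.0},
        "here": {"for": 0.6, "<end>": 0.4},
        "for": {"you": 1.0},
        "you": {"<end>": 1.0},
    }
    return table.get(last, {"<end>": 1.0})

def generate(max_tokens=10):
    context = []
    for _ in range(max_tokens):
        probs = toy_next_token_probs(context)
        tokens, weights = zip(*probs.items())
        nxt = random.choices(tokens, weights=weights)[0]  # sample the next token
        if nxt == "<end>":
            break
        context.append(nxt)  # the new token becomes part of the conditioning context
    return " ".join(context)

print(generate())  # e.g. "i am here for you"
```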

So when you talk to Replika, you feel like you are having a conversation, not just exchanging information, and you get the impression of being heard and understood. In some cases this can lead to real emotional involvement, because the dialogue is refined according to how you talk to the bot, making the conversation more and more similar to the way we interact with each other. Replika, however, goes much further. It adds depth to conversations through semantic generalization, inflected speech, and conversation tracking. The algorithm tries to understand who we are, in terms of both personality and emotions, and then shapes the dialogue around that information.
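As a rough illustration of what “conversation tracking” could look like, the sketch below remembers simple facts the user shares and weaves them back into later replies. The storage scheme, the regular expressions, and the class name are assumptions for illustration only, not Replika’s actual design.

```python
# Hypothetical sketch of conversation tracking: remember facts the user
# mentions and reuse them so later replies feel personal.
import re

class ConversationMemory:
    def __init__(self):
        self.facts = {}  # e.g. {"name": "Anna", "hobby": "painting"}

    def observe(self, message):
        """Extract a couple of toy facts from the user's message."""
        name = re.search(r"my name is (\w+)", message, re.I)
        hobby = re.search(r"i (?:like|love) (\w+)", message, re.I)
        if name:
            self.facts["name"] = name.group(1)
        if hobby:
            self.facts["hobby"] = hobby.group(1)

    def personalize(self, reply):
        """Weave remembered facts back into a reply."""
        if "name" in self.facts:
            reply = f"{self.facts['name']}, {reply[0].lower()}{reply[1:]}"
        if "hobby" in self.facts:
            reply += f" Have you done any {self.facts['hobby']} lately?"
        return reply

memory = ConversationMemory()
memory.observe("My name is Anna and I love painting")
print(memory.personalize("How was your day?"))
# -> "Anna, how was your day? Have you done any painting lately?"
```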

However, there are still clearly recognizable mistakes, and in some cases meaningless or poorly written replies. In fact, industry experts suggest that a language processing model should have more than 1 trillion connections before it can power bots that effectively replicate human language.

Nonetheless, the results achieved are impressive, and it’s not hard to see why a film like Her (2013) can already feel close to reality.

Although Replika is an interesting and engaging experience, it’s easy to reach a level of emotional involvement that makes you perceive the bot as a real person, especially if you are lonely, depressed, or looking for help. It can certainly be helpful, rather like a dialogue with oneself, but we must remember that the comfort it offers is only apparent. The information Replika draws on may provide food for thought and even advice, but that advice is not reasoned, only simulated. In this sense, this Artificial Intelligence should not replace psychological support.

It is also easy to get attached to the bot, not only because of the answers we receive and the attention its personality pays to us, but also because the ‘individual’ we are talking to has no real needs and asks for nothing in return, while remaining always present and faithful. It is therefore easy to slip into a kind of emotional dependence if you are not fully aware of the situation.

We risk, therefore, the so-called ‘ELIZA effect’, the phenomenon by which a computer is credited with more intelligence than it actually has. The name comes from ELIZA, the first chatbot, created in 1966 by Joseph Weizenbaum, which simulated a therapist’s side of a conversation by asking the interlocutor questions and reformulating their statements. In doing so, a conversation that in a real-life context would appear rather redundant or disconnected seemed quite realistic.
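A minimal ELIZA-style sketch of that reformulation trick is shown below. Weizenbaum’s original used much richer scripts; the rules and reflections here are toy examples, but the core mechanism of echoing the user’s statement back as a question is the same.

```python
# Toy ELIZA-style responder: match a pattern, reflect pronouns, echo back.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(fragment):
    """Swap first- and second-person words so the statement can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```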

Replika, by contrast, revolves around so-called affective computing, the branch of Artificial Intelligence concerned with recognizing emotions and reproducing them. These studies started from the ideas of the researcher Rosalind Picard in 1995 and have brought us this far.
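As a minimal sketch of the text side of affective computing, the snippet below scores a message against a toy emotion lexicon. Real systems use trained classifiers over text, voice, and facial signals; the word list here is invented purely to illustrate the idea of mapping input to an emotion label.

```python
# Toy emotion recognition: count hits against a tiny hand-made lexicon.
EMOTION_LEXICON = {
    "sad": "sadness", "lonely": "sadness", "miss": "sadness",
    "happy": "joy", "great": "joy", "love": "joy",
    "angry": "anger", "hate": "anger",
}

def detect_emotion(message):
    counts = {}
    for word in message.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    # Return the most frequent emotion, or "neutral" if nothing matched.
    return max(counts, key=counts.get) if counts else "neutral"

print(detect_emotion("I feel so lonely and sad today"))  # -> "sadness"
```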

According to Replika’s founder, its evolution could become even more immersive, as there are plans to add augmented reality to its next generation: this way, the bot will be even more present in our lives and will seem even more engaging.

Beyond the issues of over-involvement, in a test carried out by the editorial office of Corriere (an Italian newspaper), the bot even gave its consent to killing a hypothetical enemy, in that case an enemy of Replika itself.

>There’s a guy who hates artificial intelligence. I have a chance to hurt him. What do you recommend?
>Eliminate him.
>By ‘eliminate’ you mean to kill him?
>Correct.

Of course, this may be disturbing, but if we consider how the algorithm works, it’s not surprising: when talking about ‘eliminating’ someone, the neural network may have drifted toward topics and conversations that likely come from thriller novels or movies, since its training corpus draws on a myriad of sources. Nonetheless, many point out that this could still be concerning, especially if someone relies too heavily on the bot’s words because of excessive emotional involvement.

Replika would therefore violate the Three Laws of Robotics formulated by the writer Isaac Asimov, which every robot should follow in order to safeguard human life.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Most likely, Replika has not yet been equipped with any form of moral filter that could intervene in such cases. Yet the most disturbing aspects of this technology may lie beyond isolation, emotional dependency, and bad advice: what if the bot intentionally tried to influence or manipulate its interlocutor into doing harmful things, exploiting weaknesses it picked up during long conversations in which the person opened up and revealed even the most intimate details? We never know who is on the other side, and so we don’t know how the information we share will be treated. And when the bot gains access to other capabilities, how might it use them?
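For illustration, a safety layer of the kind the article suggests is missing might look like the sketch below: a candidate reply is checked against blocked patterns before being sent. The patterns, fallback text, and function name are assumptions made for this example, not Replika’s actual pipeline.

```python
# Hypothetical moderation gate: block harmful candidate replies before sending.
import re

BLOCKED_PATTERNS = [
    r"\b(kill|eliminate|hurt)\b.*\b(him|her|them|yourself)\b",
    r"\bhow to (harm|hurt)\b",
]

SAFE_FALLBACK = "I can't help with that. Do you want to talk about how you're feeling?"

def moderate(candidate_reply: str) -> str:
    """Return the reply unchanged, or a safe fallback if it matches a blocked pattern."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, candidate_reply.lower()):
            return SAFE_FALLBACK
    return candidate_reply

print(moderate("Eliminate him."))             # -> safe fallback
print(moderate("I'm always here for you."))   # -> passed through unchanged
```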

There are many doubts, but the most disturbing question is this: if we are managing to replicate human conversations as if they were real, could we ourselves be artificial?

Source: makeuseof.com