AI ‘can be manipulated’

Expert exposes how chatbots can be manipulated to trick you through a simple exchange of messages

Cybercriminals may “manipulate” artificial intelligence chatbots to deceive you. A renowned security expert has issued a stark warning, urging caution when conversing with chatbots.

In particular, avoid giving online chatbots any personal information if at all possible. Tens of millions of people use chatbots like Microsoft’s Copilot, Google’s Gemini, and OpenAI’s ChatGPT, and there are thousands of other versions designed to make your life easier through human-like conversation.

However, as cybersecurity expert Simon Newman explains in this article, chatbots also pose a hidden risk.

“The technology used in chatbots is improving rapidly,” said Simon, an International Cyber Expo Advisory Council Member and the CEO of the Cyber Resilience Centre for London.

“But as we have seen, they can sometimes be manipulated to give false information.”

“And they can often be very convincing in the answers they give!”

Deception

Artificial intelligence chatbots can be confusing for people who are not tech-savvy, and even computer whizzes can easily forget they are conversing with a robot. Simon added that this can lead to difficult situations.

“Many companies, including most banks, are replacing human contact centers with online chatbots that have the potential to improve the customer experience while being a big money saver,” Simon explained.

“But, these bots lack emotional intelligence, which means they can answer in ways that may be insensitive and sometimes rude.”

Not to mention that they cannot solve every problem: edge cases that a bot struggles to handle can leave the user stuck with no resolution and no one taking responsibility.

“This is a particular challenge for people suffering from mental ill-health, let alone the older generation who are used to speaking to a person on the other end of a phone line.”

Chatbots, for example, have already “mastered deception.” They can even learn to “cheat us” without being asked to.

Chatbots

The real risk, though, is not a chatbot misspeaking but hackers convincing the AI to target you. An attacker could gain access to the chatbot itself, or trick you into downloading a compromised AI built to do harm. From there, the chatbot can begin extracting your personal information for the criminal’s benefit.

“As with any online service, it’s important for people to take care about what information they provide to a chatbot,” Simon warned.

The U.S. Sun has also reported on the risks posed by AI romance scam bots: people looking for love online can be conned by AI chatbots, which can be hard to identify because they are made to sound like real people.

Some warning signs can nevertheless help you spot them. If a chatbot answers too quickly and too generically, it is probably not a genuine person. If it tries to move the conversation off the dating app to another app or website, that is another red flag. And if it asks for money or personal information, it is almost certainly fake.

When communicating with strangers on the internet, it is crucial to stay cautious and vigilant, particularly when discussing sensitive topics and especially when something looks too good to be true. Anyone who seems overly idealistic or excessively eager to push the relationship forward should raise suspicion. Being aware of these signs can help you avoid becoming a victim of AI chatbot fraud.

“They are not immune to being hacked by cybercriminals,” Simon continued.

“And potentially, it can be programmed to encourage users to share sensitive personal information, which can then be used to commit fraud.”

A security expert has said we should embrace a “new way of life” in which we verify everything we see online twice, if not three times. According to recent research, OpenAI’s GPT-4 model passed the Turing test, meaning people could not reliably tell it apart from a real person.

People need to learn not to trust blindly when revealing sensitive information through any communication channel, because it is increasingly hard to be certain who is on the other side. We must also bear in mind the cases where others impersonate us without our knowledge. These are much harder to detect, which is why additional tools are needed to help verify identity whenever sensitive operations are involved.