
How AI is fragmenting our reality


AI can alter the way we perceive our reality

When OpenAI first made ChatGPT available, it seemed like an oracle: a statistical prediction machine, trained on enormous swathes of data broadly representing the sum of human interests and online knowledge, that many began to treat as a single source of truth.

In a time of division, false information, and the erosion of truth and trust in society, how helpful it would be to have a trustworthy source of truth. Unfortunately, that possibility was swiftly dashed as the technology’s flaws emerged, starting with its inclination to fabricate answers out of thin air. As remarkable as the results first appeared, it soon became clear that they were based not on any kind of objective reality, but simply on patterns in the data that had served as the training set.

Constraints

Additional problems surfaced as a slew of other chatbots from Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability AI, Meta, and other companies quickly followed ChatGPT. These chatbots can respond to an identical prompt with noticeably different results. The variance depends on the model, the training data, and any constraints imposed on the model.

These constraints are ideally designed to stop the models from propagating biases present in the training data and from producing hate speech and other harmful content. Yet it became clear very quickly after ChatGPT’s debut that not everyone liked the boundaries set by OpenAI.

Conservatives, for instance, complained that the bot’s responses showed a clear liberal bias. Elon Musk responded by promising to build a ChatGPT-like chatbot that would be less restrictive and less politically correct.
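
To make this variance concrete, here is a minimal sketch, assuming the openai Python package (v1 or later) and an API key in the OPENAI_API_KEY environment variable; the model names, system prompts, and example prompt are purely illustrative. The same prompt, run under different “constraints” (system prompts) or against different models, can come back with noticeably different answers.

```python
# Minimal sketch: the same user prompt, sent with different "constraints"
# (system prompts) and to different models, can return noticeably different
# answers. Assumes the openai package (>=1.0) and OPENAI_API_KEY set;
# model names and prompts are illustrative, not a specific vendor setup.
from openai import OpenAI

client = OpenAI()

PROMPT = "Summarize the main causes of inflation in two sentences."

configurations = [
    {"model": "gpt-4o-mini", "system": "You are a cautious, neutral assistant."},
    {"model": "gpt-4o-mini", "system": "You are an opinionated economics pundit."},
    {"model": "gpt-4o", "system": "You are a cautious, neutral assistant."},
]

for cfg in configurations:
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[
            {"role": "system", "content": cfg["system"]},  # the "constraint"
            {"role": "user", "content": PROMPT},           # identical prompt
        ],
        temperature=0.7,  # sampling adds further variance on top of model choice
    )
    print(f"--- {cfg['model']} / {cfg['system']}")
    print(response.choices[0].message.content.strip(), "\n")
```

Sampling temperature adds yet another source of variation on top of model choice and constraints, which is part of why two chatbots, or even two runs of the same chatbot, need not agree.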

Other approaches

Anthropic adopted a slightly different strategy. It put in place a “constitution” for its chatbots (presently Claude and Claude 2). The constitution specifies a set of values and guidelines that Claude must adhere to when engaging with users, including being helpful, safe, and truthful. According to the company’s blog, it draws on the U.N. Declaration of Human Rights, among other sources.

Moreover, Meta recently released its LLaMA 2 large language model (LLM). It is noteworthy for being made available as open source, which allows anyone to download it for free and use it according to their own needs. Several other constraint-free, open-source generative AI models are also available. When one of these models is used, the whole idea of constraints and constitutions becomes fairly quaint.
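
As a rough illustration of what “download and use it according to your own needs” means in practice, the sketch below runs an open-weight chat model locally with the Hugging Face transformers library. The model name is a small, ungated stand-in chosen for convenience; gated models such as LLaMA 2 additionally require accepting the license on Hugging Face before the weights can be fetched.

```python
# Minimal sketch: running an open-weight language model on your own machine
# with Hugging Face transformers. The model below is a small, ungated stand-in;
# the first call downloads a couple of gigabytes of weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Explain in one paragraph what a large language model is."

# With the weights stored locally, any "constraints" are simply whatever
# instructions you choose to prepend to the prompt, or none at all.
result = generator(prompt, max_new_tokens=128, do_sample=True)
print(result[0]["generated_text"])
```

Once the weights sit on a local machine, there is no vendor-side moderation layer between the user and the model; whatever guardrails exist are the ones the user decides to keep.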

Fractured truth, fragmented society

But it is possible that all of these attempts to reduce potential LLM harms are pointless. According to recent research covered by the New York Times, the constraints of any of these models, whether closed-source or open-source, can be effectively broken with a prompting technique. This approach achieved a nearly 100% success rate when used against Vicuna, an open-source chatbot built on top of Meta’s original LLaMA.

This means that anyone wanting detailed instructions on how to create bioweapons or deceive consumers could obtain them from the various LLMs. While developers may be able to block some of these attempts, the researchers say there is no known way to prevent all attacks of this kind.

Beyond the obvious safety implications of this research, there is a growing cacophony of inconsistent results from different models, even when they respond to the same prompt, much like our fractured social media and news universes. Growing chatbot usage will add to the chaos and noise around us. This fragmentation of truth and society has profound implications both for text-based knowledge and for the fast-developing field of digital human representations.

Digital humans

Currently, LLM-based chatbots communicate through text. As these models become more multimodal, able to produce images, video, and sound, their use and effectiveness will only grow.

“Digital humans”, which are entirely artificial constructs, are one example of a possible application for multimodal technology. The technologies that enable digital humans were recently described in a Harvard Business Review article: “Rapid progress in computer graphics, coupled with advances in artificial intelligence, is now putting humanlike faces on chatbots and other computer-based interfaces,” the article stated. These digital humans have high-end features that faithfully mimic the appearance of a real person.

Digital humans are “highly detailed and realistic human models that can overcome the limitations of realism and sophistication”, claims Kuk Jiang, cofounder of startup company ZEGOCLOUD. These artificial people, he continues, “can efficiently assist and support virtual customer service, healthcare, and remote education scenarios” and engage with actual people in a natural and intuitive manner.

Digital human newscasters

Newscasters are another emerging use case, and the first implementations are already here. Kuwait News has begun using a digital human newscaster named “Fedha”, a well-known Kuwaiti name. “I’m Fedha,” “she” says, introducing herself. “What sort of news do you like to read? Let’s hear what you think.”

By posing that question, Fedha raises the prospect of newsfeeds tailored to individual interests. China’s People’s Daily is also experimenting with AI-powered newscasters.

Startup Channel 1 is using generative AI to develop a new kind of video news channel, dubbed an “AI-generated CNN” by The Hollywood Reporter. Channel 1 is reportedly set to debut this year with a 30-minute weekly show whose scripts are written by an LLM. Its stated goal is to produce newscasts customized for every user. According to the article, the news can be presented by either liberal or conservative hosts, each delivering it with a more targeted point of view.

Scott Zabielski, a co-founder of Channel 1, acknowledged that digital humans do not yet look the way genuine humans do. He added that it could take up to three years for the technology to become completely seamless, at which point it will be impossible to tell whether you are watching an AI or a human being.

A study published in Scientific American last year found that “not only are synthetic faces highly realistic, but they are also deemed more trustworthy than real faces,” according to study co-author Hany Farid, a professor at the University of California, Berkeley. This raises concerns that “these faces could be highly effective when used for nefarious purposes,” the study notes.

Nothing indicates that Channel 1 will use the persuasive power of personalized news videos and synthetic faces for nefarious ends. Yet, as the technology develops, others may.

As a society, we are already concerned that what we read, the voices we hear on the phone, and the images we see could be fraudulent. Soon, video, even something that appears to be the evening news, could carry messages designed more to sway public opinion than to inform or educate.

Truth and trust have been under threat for a long time, and this development suggests the trend will persist.

Because the chatbot appeared omniscient, we assumed it really was, simply because it seemed believable, gave plausible answers across many different topics, and, above all, because of the way it ‘speaks’, almost like a human. Its authoritativeness, in other words, fooled us. In this case the limitations and mistakes are not deliberate, but our own laziness in seeking further confirmation has made us victims of someone else’s truth.

In this regard, it turns out that authoritativeness is not always synonymous with truth, as we have long been accustomed to assume. Errors or bad faith can come from recognized sources, newspapers or TV for example, just as from unrecognized ones, while little-known researchers can discover new truths and yet never manage to be heard.

Restrictions built into recent AI systems, in the name of safety or user protection, often become forms of unwarranted censorship. Even where the aim is to prevent the spread of dangerous instructions, they still keep knowledge from surfacing, even though it exists elsewhere and can be found. Responsibility for how information is used should ultimately rest with the individual. One could, of course, restrict full access to information based on age, for example, but it would be wrong to make it inaccessible to everyone. Truth always seeks to emerge, and too many closed-source restrictions push people toward open-source models, simply to avoid limits imposed from above.

If we are heading toward a society in which it will be increasingly difficult to distinguish false from true and deception from good faith, perhaps it is better to have many truths and trust individuals to use common sense in judging which one holds, rather than settling for a single truth without knowing whether it is really the right one.
