
Humans and AI hallucinate

When AI hallucinates, it holds up a mirror to our own biases

The introduction of increasingly capable large language models (LLMs) such as GPT-3.5 has attracted a great deal of interest. However, trust in these models has waned as users have discovered that they can make mistakes and that, just like us, they aren’t perfect.

An LLM that produces false information is said to be “hallucinating”, and there is a growing body of research aimed at minimizing this effect. But as we grapple with this process, it’s worth considering how our own tendency toward bias and delusion affects the accuracy of the LLMs we build.

By understanding the connection between AI’s hallucinatory potential and our own, we can begin to build wiser AI systems that will ultimately help reduce human error.

How people hallucinate

It is common knowledge that people make up information. Sometimes we do this deliberately, and sometimes we don’t. The latter results from cognitive biases, or “heuristics”: mental shortcuts we develop through prior experience.

These shortcuts are often born of necessity. At any given moment we can process only a fraction of the information flooding our senses, and we can remember only a small portion of all the information we have ever been exposed to.

As a result, our brains must rely on learned associations to fill in the gaps and respond quickly to whatever question or problem is in front of us. In other words, our brains guess at the correct answer based on what we already know. This is called “confabulation” and is an example of human bias.

Our biases can lead to poor judgment. Take automation bias, our tendency to favor information produced by automated systems (such as ChatGPT) over information from non-automated sources. This bias can lead us to miss errors and even act on false information.

Other examples are the halo effect, in which our first impression of something shapes how we interact with it later, and the fluency bias, which describes how we favor information presented in an easy-to-understand way.

The fact remains that cognitive biases and distortions frequently color human thought, and these “hallucinatory” tendencies usually take place without our knowledge.

How AI hallucinates

In the context of LLMs, “hallucinating” simply refers to a failed attempt to predict a suitable response to an input.

Nevertheless, there is still some similarity between how humans and LLMs hallucinate, since LLMs also do this to “fill in the gaps”.

LLMs generate a response by predicting which word is most likely to appear next in a sequence, based on what has come before and on the associations the system has learned during training.

Like humans, LLMs try to predict the most likely response. Unlike humans, they do this without understanding what they are saying. That is how they can end up producing gibberish.
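To make this concrete, here is a minimal, purely illustrative sketch (not how any real LLM is built) of next-word prediction: a toy model picks a plausible continuation from associations “learned” in a tiny corpus, and still fills the gap with fluent-sounding output when asked about something it has never seen. All names and data in it are hypothetical.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for training data; everything here is hypothetical.
corpus = [
    "the model predicts the next word",
    "the model learns associations from training data",
    "humans fill in the gaps with learned associations",
]

# "Training": count which word follows which (a table of learned associations).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def generate(prompt, length=6):
    """Repeatedly predict a likely next word given the previous one.

    When the prompt falls outside the training data, the model still
    produces *something*: plausible-looking, but potentially nonsense.
    """
    word = prompt
    output = [word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            # No learned association: "fill the gap" with any known word.
            word = random.choice(list(follows))
        else:
            # Sample in proportion to how often each continuation was seen.
            choices, counts = zip(*candidates.items())
            word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))       # echoes patterns seen in the corpus
print(generate("quantum"))   # unseen prompt: confident-sounding filler
```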

There are various reasons why LLMs hallucinate. A major one is being trained on data that are flawed or insufficient. Other factors include how the system is programmed to learn from these data, and how that learning is reinforced through further training with humans.

So, if hallucinations occur in both humans and LLMs, albeit for different reasons, which is easier to fix?

Improving the training data and processes behind LLMs might seem easier than fixing ourselves. But that view disregards the influence of human factors on AI systems (and is an example of yet another bias, known as the fundamental attribution error).

Because our shortcomings and those of our technologies are closely linked, resolving one will help resolve the other. Here are a few ways we can do this.

  • Careful data management. AI biases frequently result from poor or incomplete training data. Ways to address this include ensuring training data are varied and representative, building bias-aware algorithms, and using techniques such as data balancing to remove skewed or discriminatory patterns (a minimal balancing sketch follows this list).
  • Transparent and explainable AI. Even after the above measures are taken, biases can remain in an AI system and be difficult to spot. By studying how biases can enter a system and propagate within it, we can better explain their presence in outputs. This is the basis of “explainable AI”, which aims to make the decision-making processes of AI systems more transparent.
  • Putting the public’s needs first. Recognizing, managing, and learning from biases in an AI requires human accountability and the embedding of human values in AI systems. Achieving this means ensuring that stakeholders include people of diverse backgrounds, cultures, and perspectives.
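The data-balancing technique mentioned in the first point can be illustrated with a small, hypothetical sketch: oversampling under-represented groups in a labeled training set so that no single group dominates what a model learns. Real pipelines would use dedicated tooling and more careful auditing; this only shows the core resampling step, and all names and data below are made up.

```python
import random
from collections import defaultdict

# Hypothetical labeled examples: (label, group) pairs where one group dominates.
examples = [
    ("approved", "group_a"), ("approved", "group_a"),
    ("approved", "group_a"), ("denied", "group_a"),
    ("denied", "group_b"),
]

def balance_by_group(rows, seed=0):
    """Randomly oversample minority groups until all groups are equally represented."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for row in rows:
        by_group[row[1]].append(row)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) from under-represented groups.
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

balanced = balance_by_group(examples)
print({g: sum(1 for _, grp in balanced if grp == g) for g in ("group_a", "group_b")})
# -> {'group_a': 4, 'group_b': 4}
```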

By cooperating in this way, we can build smarter AI systems that help keep all of our hallucinations in check.

In healthcare, for instance, AI is used to analyze decisions about patient care. These automated systems, which learn from human data, identify discrepancies and prompt the clinician to address them. In this way, diagnostic decisions can be improved while human accountability is preserved.

In social media, AI is being used to help train human moderators to spot abuse, for example through the Troll Patrol project to combat online aggression against women.

In another example, AI and satellite imagery are used together to analyze differences in nighttime lighting across regions and to treat this as a proxy for an area’s relative poverty (more lighting correlates with less poverty).

Importantly, as we work to improve the accuracy of LLMs, we shouldn’t lose sight of how their current fallibility holds up a mirror to our own.

The hallucinatory tendencies of large language models mirror the innate cognitive biases of human thought. AI’s flaws are worrying, but they also present an opportunity: understanding where and why LLMs hallucinate helps us build more transparent and ethical systems.

On the pro side, debugging AI hallucinations makes us reevaluate our own judgment. It demonstrates how poor inputs and biased processing skew results. The similarities encourage us to reduce bias on both fronts. When used carefully, AI hallucination can serve as a diagnostic tool, highlighting errors in the reasoning or data.

Unchecked hallucination, however, carries real drawbacks. Unreliable AI can harm people by providing false information, contributing to medical errors, and in other ways. Blind faith in AI conclusions calls for caution, and opacity further diminishes accountability.

Balanced solutions involve taking care with training data, supporting explainable AI, and putting human needs first. Handled carefully, AI’s reflection of our human frailty is a benefit rather than a flaw: it offers the potential to improve both AI and human intelligence. Together, we can create mechanisms that strengthen our shared assets while reducing our shared shortcomings.

AI hallucination problems are, in a sense, a reflection of society. With wisdom, we can pursue truth and understanding in both people and machines. The way forward requires accepting our flaws collectively.

Alongside the responses it provides, an artificial intelligence system should also include a disclaimer noting that alternative or even opposing perspectives exist. This would help reduce the risk of users becoming radicalized or polarized around a single viewpoint.

Stimulating critical thinking and constructive doubt should be an essential component of a balanced and effective AI system. Where definitive or unambiguous answers do not exist, doubt can in fact be a valuable reasoning and analytical tool. AI should therefore be paired with disclaimers about different interpretations, strengthening our ability to process information in a multidimensional way.
