
Recognize AI-generated media

Studies have revealed how to identify them

As technical advances make AI-generated images, video, audio, and text ever harder to distinguish from human-created content, identifying fakes becomes increasingly difficult, leaving us vulnerable to manipulation. You can protect yourself from being duped, however, by learning the current state of the AI technologies used to produce false information, along with the variety of telltale signs that what you are looking at might not be real.

Leaders around the world are worried. An analysis by the World Economic Forum claims that while easier access to AI tools has already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites, misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years.

Both misinformation and disinformation refer to false or inaccurate information; disinformation, however, is deliberately intended to mislead or deceive.

“The issue with AI-powered disinformation is the scale, speed, and ease with which campaigns can be launched,” says Hany Farid at the University of California, Berkeley. “These attacks will no longer take state-sponsored actors or well-financed organizations—a single individual with access to some modest computing power can create massive amounts of fake content.”

He says that generative AI is “polluting the entire information ecosystem, casting everything we read, see, and hear into doubt.” He says his research suggests that, in many cases, AI-generated images and audio are “nearly indistinguishable from reality.”

However, according to a study by Farid and others, there are steps you can take to lessen the likelihood that you will fall for false information on social media or artificial intelligence-generated misinformation.

Spotting fake AI images

With the advent of new tools based on diffusion models, which enable anyone to start producing images from straightforward text prompts, fake AI images have proliferated. Research by Nicholas Dufour and his team at Google found that since early 2023, there has been a rapid rise in the use of AI-generated images to support false or misleading information.

“Nowadays, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five distinct categories of errors in AI-generated images and offered guidance on how individuals can spot them on their own. The good news is that, according to their research, people can currently identify fake AI photos of people with over 70% accuracy. You can evaluate your own detective abilities using their online image test.

5 common errors in AI-generated images:

  1. Sociocultural implausibilities: Is the behavior shown in the scenario uncommon, startling, or unusual for the historical figure or culture depicted?
  2. Anatomical implausibilities: Are hands or other body parts unusually sized or shaped? Do the mouths or eyes appear odd? Are there any merged body parts?
  3. Stylistic artifacts: Does the image appear stylized, artificial, or almost too perfect? Does the background appear strange or as though something is missing? Is the illumination odd or inconsistent?
  4. Functional implausibilities: Are there any items that look strange or don’t seem to work?
  5. Violations of laws of physics: Do shadows point in different directions from one another? Do mirror reflections make sense in the world the picture portrays?

Identifying video deepfakes

Since 2014, generative adversarial networks, an AI technology, have enabled tech-savvy people to produce video deepfakes by digitally altering existing recordings of people to swap in new faces, expressions, and lip-synced spoken audio. This has allowed a growing number of con artists, state-backed hackers, and ordinary internet users to create such videos. As a result, both ordinary people and celebrities may find themselves unwillingly featured in non-consensual deepfake pornography, scams, and political misinformation or disinformation.

The methods for detecting fake AI images described above can also help flag suspicious videos. In addition, researchers at Northwestern University in Illinois and the Massachusetts Institute of Technology have compiled a list of guidelines for identifying deepfakes, though they caution that no single technique is infallible or always effective.

6 tips for spotting AI-generated video:

  1. Mouth and lip movements: Do the audio and video occasionally not sync perfectly?
  2. Anatomical glitches: Does the face or body look weird or move unnaturally?
  3. Face: In addition to facial moles, look for irregularities in the smoothness of the face, such as creases around the cheekbones and forehead.
  4. Lighting: Is the lighting inconsistent? Do shadows behave in ways you would expect? Pay attention to someone’s eyes, brows, and glasses.
  5. Hair: Does facial hair have an odd look or behave strangely?
  6. Blinking: An excessive or insufficient blinking rhythm may indicate a deepfake.

A more recent class of video deepfakes is based on diffusion models, the same AI technology that powers many image generators, and can produce entirely AI-generated video clips from text prompts. Companies have already begun selling AI video generators, which may make it simple for anyone to produce such clips without advanced technical knowledge. So far, the resulting videos have often featured strange body motions or distorted faces.

“These AI-generated videos are probably easier for people to detect than images because there is a lot of movement and there is a lot more opportunity for AI-generated artifacts and impossibilities,” says Kamali.

Identifying AI bots

On numerous social media and messaging platforms, bot-run accounts are now common. Since 2022, a growing number of these bots have adopted generative AI technologies such as large language models, making it simple and inexpensive to produce AI-written content through thousands of bots whose posts are grammatically correct and convincingly tailored to specific situations.

It has become much easier “to customize these large language models for specific audiences with specific messages,” says Paul Brenner at the University of Notre Dame in Indiana.

Brenner and colleagues’ study revealed that, even after being told they might be interacting with bots, volunteers could accurately distinguish AI-powered bots from humans only about 42% of the time. You can test your own bot-detection skills with their online test.

Some strategies can be used to detect less sophisticated AI bots, according to Brenner.

3 ways to determine whether a social media account is an AI bot:

  1. Overuse of symbols: Excessive emojis and hashtags may indicate automated behavior.
  2. Peculiar language patterns: Atypical word choices, phrases, or comparisons could suggest AI-generated content.
  3. Communication structures: AI tends to use repetitive structures and may overemphasize certain colloquialisms.
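The symbol-overuse and repetitive-structure signs above lend themselves to a rough automated check. The following Python sketch is purely illustrative and is not from Brenner’s study; the function name, thresholds, and emoji ranges are all hypothetical, and real bots can easily evade heuristics this simple:

```python
import re
from collections import Counter

# Hypothetical emoji ranges: miscellaneous symbols and common emoji blocks.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def bot_signal_score(text: str) -> int:
    """Return a rough 0-3 score: one point per bot-like signal triggered."""
    score = 0
    words = text.split()

    # Sign 1: overuse of symbols (emojis and hashtags relative to length).
    symbols = len(EMOJI_RE.findall(text)) + sum(w.startswith("#") for w in words)
    if words and symbols / len(words) > 0.25:
        score += 1

    # Sign 3: repetitive communication structures, approximated here by
    # several sentences opening with the same word.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    openers = Counter(s.split()[0].lower() for s in sentences)
    if sentences and openers.most_common(1)[0][1] >= 3:
        score += 1

    # Sign 2: peculiar language patterns, approximated here by heavy
    # repetition of a single non-trivial word.
    word_counts = Counter(w.lower().strip("#.,!?") for w in words if len(w) > 4)
    if word_counts and word_counts.most_common(1)[0][1] >= 4:
        score += 1

    return score
```

A post like “Amazing! Amazing deals today! 🔥🔥🔥 #win #crypto #deals Amazing offers!” trips all three signals, while ordinary conversational text scores zero; the point is only to show how the checklist translates into measurable features, not to stand in for the judgment the researchers recommend.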

Detecting audio cloning and speech deepfakes

Artificial intelligence tools for voice cloning have made it simple to create new voices that can impersonate almost anyone. As a result, there has been an increase in audio deepfake scams that mimic the sounds of politicians, business executives, and family members. Identifying these can be far more challenging than with AI-generated images or videos.

“Voice cloning is particularly challenging to distinguish between real and fake because there aren’t visual components to support our brains in making that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization.

When these AI audio deepfakes are employed in video and phone calls, it can be particularly difficult to detect them. Nonetheless, there are a few sensible actions you may take to tell real people apart from voices produced by artificial intelligence.

4 steps for recognizing if audio has been cloned or faked using AI:

  1. Public figures: If the audio clip features a famous person or elected official, see if what they are saying aligns with what has previously been shared or reported publicly regarding their actions and opinions.
  2. Look for inconsistencies: Verify the audio clip by comparing it to other verified videos or audio files that have the same speaker. Are there any disparities in the way they speak or the tone of their voice?
  3. Awkward silences: Unusually long pauses while someone is speaking on a phone call or voicemail may indicate that the speaker is using AI-powered voice-cloning technology.
  4. Weird and wordy: Robotic speech patterns or exceptionally verbose phrasing could be signs that someone is using a large language model to generate the exact words and voice cloning to impersonate a human voice.

As things stand, it is impossible to consistently distinguish AI-generated content from authentic human-created content. AI models that generate text, images, video, and audio will most likely keep improving, and they can often quickly produce content that looks real and is free of obvious errors or artifacts.

“Be politely paranoid and realize that AI has been manipulating and fabricating pictures, videos, and audio fast—we’re talking completed in 30 seconds or less,” says Tobac. “This makes it easy for malicious individuals who are looking to trick folks to turn around AI-generated disinformation quickly, hitting social media within minutes of breaking news.”

While it is critical to sharpen your eye for AI-generated misinformation and learn to probe deeper into what you read, see, and hear, this alone will not be enough to prevent harm, and individuals cannot bear the entire burden of identifying fakes.

Farid is among the researchers who say that government regulators must hold to account the largest tech companies—along with start-ups backed by prominent Silicon Valley investors—that have developed many of the tools that are flooding the internet with fake AI-generated content.

“Technology is not neutral,” says Farid. “This line that the technology sector has sold us that somehow they don’t have to absorb liability where every other industry does, I simply reject it.”

People could find themselves misled by fake news articles, manipulated photos of public figures, deepfake videos of politicians making inflammatory statements or voice clones used in phishing scams. These AI-generated falsehoods can spread rapidly on social media, influencing public opinion, swaying elections, or causing personal and financial harm.

To protect themselves from these AI-driven deceptions, individuals can:

  • Develop critical thinking skills: Question the source and intent of content, especially if it seems sensational or emotionally charged.
  • Practice digital literacy: Stay informed about the latest AI capabilities and common signs of artificial content.
  • Verify information: Cross-check news and claims with multiple reputable sources before sharing or acting on them.
  • Use AI detection tools: Leverage emerging technologies designed to identify AI-generated content.
  • Be cautious with personal information: Avoid sharing sensitive data that could be used to create convincing deepfakes.
  • Support media literacy education: Advocate for programs that teach people how to navigate the digital landscape responsibly.
  • Encourage responsible AI development: Support initiatives and regulations that promote ethical AI use and hold creators accountable.

By remaining vigilant and informed, we can collectively mitigate the risks posed by AI-generated deceptions and maintain the integrity of our information ecosystem.
