80% of people think deepfakes will impact elections. Here are three ways you can prepare

Misinformation typically spikes during election season as campaigns and partisans try to swing voters for or against candidates and causes. With the emergence of generative AI, creating this type of content is easier than ever, and many people are concerned about how it may affect election integrity. 

On Thursday, Adobe released its inaugural Future of Trust Study, which surveyed 2,000 US consumers about their experiences and concerns with misinformation, especially with the emergence of generative AI. 

Also: The best AI image generators of 2024: Tested and reviewed

As many as 84% of respondents said they are concerned that the content they consume online is at risk of being altered to spread misinformation, and 70% said it’s increasingly difficult to verify whether the content they consume is trustworthy. 

Furthermore, 80% of the respondents said misinformation and harmful deepfakes will impact future elections, with 83% calling on governments and technology companies to work together to protect elections from the influence of AI-generated content. 

So in the era of AI, how can you brace yourself for upcoming elections? 

The good news is that companies are already working on tools, such as Content Credentials, to help people distinguish AI-generated content from reality. To help you navigate the upcoming election season, ZDNET has gathered some tips, tricks, and tools. 

1. View everything with skepticism

The first and most important thing to remember is to view everything skeptically. The ability to create convincing deepfakes is now within anyone's reach, regardless of technical expertise, because capable free or inexpensive generative AI models are readily available.  

These models can generate fake content virtually indistinguishable from real content across different mediums, including text, images, voice, video, and more. Therefore, seeing or hearing something is no longer enough to believe it. 

A great example is the recent fake robocall of President Joe Biden that encouraged voters not to show up at the polls. This call was generated using the ElevenLabs Voice Cloning tool, which is easy to access and use. You only need an ElevenLabs account, a few minutes of voice samples, and a text prompt.

Also: Microsoft reveals plans to protect elections from deepfakes

The best way to protect yourself is to examine the content and confirm whether what you see is real. I am including some tools and sites below to help you do that. 

2. Verify the source of news 

If you encounter content on a site you aren’t familiar with, you should check its legitimacy. There are tools online to help you do this, including the Ad Fontes Media Interactive Media Bias Chart, which evaluates the political bias, news value, and reliability of websites, podcasts, radio shows, and more, as seen in the chart below. 

Ad Fontes Media Interactive Media Bias Chart

If the content you encounter is from social media, tread with extra caution: on most platforms, users can post whatever they'd like with minimal checks or limitations. In those cases, it's good practice to cross-reference the content with a reputable news source. You can use a tool like the chart above to find a news source worth cross-referencing. 

3. Use Content Credentials to verify images 

Content Credentials act as a "nutrition label" for digital content, permanently attaching important information, such as who created an image and what edits were made, through cryptographic metadata and watermarking. Many AI image generators, such as Adobe Firefly, automatically include Content Credentials indicating that the content was generated using AI. 
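For the curious, Content Credentials are built on the C2PA standard, which embeds a cryptographically signed manifest in the file itself; in JPEGs, that data lives in APP11 segments. The sketch below is purely illustrative (it is not an official tool, and the segment-scanning logic is this article's own assumption about a reasonable demo): it scans a JPEG's byte stream for APP11 segments, which hints that provenance metadata may be present. Actually verifying the credentials requires checking the signatures with a proper C2PA library or the Content Credentials website.

```python
import struct

def find_app11_segments(data: bytes) -> list:
    """Collect the payloads of APP11 (0xFFEB) segments in a JPEG byte
    stream. C2PA manifests for Content Credentials are stored there.
    Presence of APP11 data is only a hint, not proof of valid credentials."""
    segments = []
    if not data.startswith(b"\xff\xd8"):  # must begin with the SOI marker
        return segments
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:       # lost sync with the marker structure
            break
        marker = data[i + 1]
        if marker == 0xDA:        # SOS: compressed image data follows
            break
        # Each segment stores a 2-byte big-endian length (includes itself)
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xEB:        # APP11
            segments.append(data[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

Running `find_app11_segments(open("photo.jpg", "rb").read())` on a Firefly-generated image would typically return at least one segment, while many ordinary photos return none; either way, treat the result only as a prompt to inspect the file properly.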

“Recognizing the potential misuse of generative AI and deceptive manipulation of media, Adobe co-founded the Content Authenticity Initiative in 2019 to help increase trust and transparency online with Content Credentials,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe.

Also: What are Content Credentials? Here’s why Adobe’s new AI keeps this metadata front and center

Viewing an image's Content Credentials is a great way to verify how it was made: visit the Content Credentials website and "inspect" the image. If the image doesn't carry the information in its metadata, the website will match it against similar images on the internet and let you know whether those images were AI-generated. 

You can also reverse-search images on Google by dropping the image into Google Search in your browser and reviewing the results. Seeing where else the image has appeared can help you determine its creation date, its source, and whether it has been published by reputable outlets. 
