The voter's guide to AI-generated election misinformation

A humanoid robot standing in front of a newscast podium.
(Image credit: Shutterstock)

We live in a time when AI-driven tech is taking shape in a real, tangible way, and our human cognitive faculties may come in clutch in ways we don’t even immediately realize.

Multiple outlets and digital experts have raised concerns about the upcoming 2024 US election (a traditionally very human affair) and the surge of information - and misinformation - driven by generative AI. Recent elections in many countries have happened in tandem with the formation of rapidly growing pockets of users on social media platforms where misinformation can spread like wildfire.

These groups rapidly share information from dubious sources and questionable figures, false or incorrectly contextualized information from foreign agents or organizations, and misinformation from straight-up bogus news sites. In not-so-distant memory, we’ve witnessed the proliferation of conspiracy theories and efforts to discredit the outcomes of elections based on claims that have been proven false.

The upcoming 2024 US presidential race looks set to follow suit, given how easy content generation has become in our present AI-aided era.

The misinformation sensation

Experts in the field have said as much: AI-generated content that looks and sounds human is already saturating all kinds of content spaces. This adds to the work it takes to sort through and curate the sheer volume of information and data online - work that depends on how much or how little reading and scrutiny a user is willing to do in the first place.

Such a sentiment is expressed by Ben Winters, senior counsel at the Electronic Privacy Information Center, a non-profit privacy research organization. “It will have no positive effects on the information ecosystem,” he says, adding that it will continue to lower users’ trust in content they find online.

Manipulated images and other purpose-made media aren’t a new phenomenon - photoshopped pictures, impersonation emails, and robocalls are a common part of everyday life. One huge issue with these - and other novel forms of misinformation - is how much easier it’s become to make such content.

A mobile device sat on a laptop keyboard with the ChatGPT blog announcement open in a browser window.

ChatGPT has become incredibly easy to access - and abuse. (Image credit: Shutterstock / Tada Images)

The ease of lying

Not only that, but AI has also made it easier to target specific groups and even specific individuals. With the right tools, it’s now possible to generate highly tailored content far more efficiently.

If you’ve been following the development and public debut of AI tools like those from OpenAI, you already know that AI-assisted software can create audio based on pre-existing voice input, put together fairly convincing text in all types of tones and styles, and generate images of nearly anything you ask it to. It’s not difficult to imagine these capabilities being used to make politically motivated content of all kinds.

You need only a little technical literacy to engage with such tools; otherwise, anyone’s targeted propaganda wish is AI’s command. While AI detection tools already exist and continue to be developed, they’ve demonstrated markedly mixed effectiveness.

One extra wrinkle in all this, as Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, points out, is that large language models (LLMs) such as ChatGPT and Google Bard are trained on an immense quantity of online data. As far as is publicly known, there’s no process to pick through and verify the accuracy of each piece of that information, so misinformation and false claims get folded into the training data.

OpenAI logo on wall

OpenAI recently shut down its own AI detection program, AI Classifier - but do companies creating AI tools have a moral responsibility to help separate man from machine? (Image credit: Shutterstock.com / rafapress)

Fighting the bots

Some countries have made reactive efforts to introduce legislation that begins to address issues like these, and the tech companies running these services have put some safeguarding measures in place.

Is it enough, though? I’m probably not alone in hesitating to put my worries to rest, especially considering multiple countries have major elections coming up in the next year.

One particular concern, highlighted by Panditharatne, is swathes of content being generated and used to bombard people in order to discourage them from voting. As I mentioned above, it’s possible to automate large amounts of authentic-sounding material to this end, which could convince someone that they are not able to (or simply shouldn’t) vote.

That said, reacting after the fact may not be all that effective. While it’s better than not addressing the problem at all, our memories and attention are fickle things. Even if we later see more correct or accurate information, once we have formed an initial impression and opinion, it can be hard for our brains to accept it. “The exposure to the initial misinformation is hard to overcome once it happens,” says Chenhao Tan, an assistant professor of computer science at the University of Chicago.

What can we do about it? 

Content that AI tools have spat out has already spread virally on social media platforms, and the American Association of Political Consultants has warned of the “threat to democracy” presented by AI-aided means like deepfaked videos. AI-generated videos and imagery have already been released by the likes of GOP presidential candidate Ron DeSantis and the Republican National Committee.

Darrell West of the Center for Technology Innovation, a think tank based in Washington, D.C., expects to see an increase in AI-created videos, audio, and images designed to paint political opponents in a bad light. He expressed concern that voters might “take such claims at face value” and make voting decisions based on false information.

Trump

A recent 'attack ad' run by Republican presidential hopeful Ron DeSantis featured the voice of Donald Trump - but it was in fact AI-generated. (Image credit: Alex Wong/Getty Images)

So, now that I’ve loaded your plate with doom and gloom (sorry), what are we to do? Well, West recommends making an extra effort to consult a variety of media sources and double-check the veracity of claims, especially bold, decisive statements. His advice: “examine the source and see if it is a credible source of information.”

Heather Kelly of the Washington Post has also written a longer guide on how to critically examine what you’re consuming, especially with respect to political material. She recommends starting with your own judgment: consider whether what you’re consuming is an opportunity for misinformation in the first place and why, take your time to actually process and reflect on what you’re reading, watching, or listening to, and save sources you find helpful and informative to build up a collection you can consult as developments occur.

In the end, it’s as it always has been: the last bastion against misinformation is you, the reader, the voter. Although AI tools have made it easier to manufacture falsehoods, it’s ultimately up to us to verify that what we read is fact, not fiction. Bear that in mind the next time you’re watching a political ad - it only takes a minute to do your own research online.


Kristina is a UK-based Computing Writer, and is interested in all things computing, software, tech, mathematics and science. Previously, she has written articles about popular culture, economics, and miscellaneous other topics.

She has a personal interest in the history of mathematics, science, and technology; in particular, she closely follows AI and philosophically-motivated discussions.