Preventing AI Misuse in the Democratic Process
This month, I volunteered to be a poll watcher at our local Presidential Preference Primary. For many people, it is a privilege to participate in the democratic process. Trust will be tested like never before in upcoming elections across the globe. In 2024, more than 60 countries will hold elections, including the United States, India, the UK, Germany, Indonesia, Mexico, and Taiwan. In no other year in history have so many people—approximately half the world’s population—had an opportunity to vote.
At the same time, advanced AI is creating realistic text, audio, and images that threaten to undermine trust in these democratic elections. In New Hampshire, the attorney general’s office is investigating a robocall that impersonated President Joe Biden and told recipients not to vote in Tuesday’s presidential primary. “Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications,” the attorney general’s office said in a statement.
The Brennan Center for Justice warns that this is the first national campaign season in which such widely accessible AI tools allow users to synthesize audio in anyone’s voice, generate photo-realistic images of anybody doing nearly anything, and power social media bot accounts with near human-level conversational abilities—and do all this on a vast scale and with relatively little money and time. A report issued by the Brennan Center concluded: “Due to the popularization of chatbots and the search engines they are quickly being absorbed into, it will also be the first election season in which large numbers of voters routinely consume information that is not just curated by AI but is produced by AI.”
In response, collaborative efforts have formed to develop technologies that identify misleading AI-generated materials and to create campaigns to teach the public to discern such content. Strategies include watermarking and embedding metadata to certify the origin of digital content. At the recent Munich Security Conference, 20 tech companies—including Anthropic, Google, Meta, Microsoft, and OpenAI—signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” While the accord is voluntary and does not prohibit the use of AI in election campaigns, it includes a robust set of commitments to deploy technology to counter harmful AI-generated content meant to deceive voters.
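To make the metadata idea concrete, here is a minimal sketch in Python (standard library only; the key, origin name, and function names are all hypothetical) of attaching a signed provenance record to a piece of content so that any later edit to the content is detectable:

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key; real provenance systems use
# public-key certificates rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Hash the content, record its origin, and sign the record."""
    record = {
        "origin": origin,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the signature and hash; tampering breaks either check."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...raw image bytes..."
rec = attach_provenance(image, "example-news-org")
print(verify_provenance(image, rec))         # True
print(verify_provenance(image + b"x", rec))  # False: content was altered
```

This is only an illustration of the tamper-evidence principle; production standards such as C2PA embed certificate-backed signatures directly in the media file so anyone, not just holders of a shared key, can verify the content’s origin.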
Individual companies are also taking action to combat abuse. OpenAI’s image generator, DALL·E, has guardrails to decline requests that ask for image generation of real people, including candidates. “Until we know more, we don’t allow people to build applications for political campaigning and lobbying,” OpenAI said. “We don’t allow builders to create chatbots that pretend to be real people or institutions.”
Government regulators are also taking steps to increase trust in elections. The Federal Communications Commission recently proposed that robocalls generated by AI be made illegal. “AI-generated voice cloning and images are already sowing confusion by tricking consumers,” said FCC Chairwoman Jessica Rosenworcel. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”
Local and state officials need to play a more significant role in protecting democratic processes. The Brookings Institution says that it falls to state election officials to lead the current debates around AI’s deployment and oversight. In particular, Brookings warns that state officials must “apply focused oversight on generative AI, especially election-related AI chatbots that can serve to discourage and, in some instances, disenfranchise voters.”
I was happy to play my small part as an election observer. As the lines between reality and AI-generated content are increasingly blurred, safeguarding the integrity of our elections has never been more critical. And as we approach the 2024 presidential elections, we in the tech industry should redouble our efforts to provide tools to discern truth from falsehood, uphold the sanctity of our democratic process, and do all we can to secure faith in the system.
A pleasure working with you at the polls, Elena.