Dean Barber’s Post


The explosion of artificial-intelligence technology makes it easier than ever to deceive people on the internet, and is turning the 2024 U.S. presidential election into an unprecedented test of how to police deceptive content.

An early salvo was fired last month in New Hampshire. Days before the state’s presidential primary, an estimated 5,000 to 25,000 calls went out telling recipients not to bother voting. “Your vote makes a difference in November, not this Tuesday,” the voice said. It sounded like President Biden, but it was created by AI, according to an analysis by security firm Pindrop, which found telltale signs that the call was phony. The message also discouraged independent voters from participating in the Republican primary. Two weeks later, the New Hampshire attorney general’s office said it had identified a Texas-based company, Life Corp., as the source of the calls and issued a cease-and-desist order citing the law against voter suppression.

With recent advances in generative AI, virtually anyone can create increasingly convincing but fake images, audio and video, as well as fictional social-media users and bots that appear human. Around 70 countries, together home to nearly half the world’s population (roughly four billion people), are set to hold national elections this year, according to the International Foundation for Electoral Systems.

OpenAI Chief Executive Sam Altman said at a Bloomberg event in January, during the World Economic Forum’s annual meeting in Davos, Switzerland, that while OpenAI is preparing safeguards, he is still wary of how his company’s tech might be used in elections. “We’re going to have to watch this incredibly closely this year,” Altman said. OpenAI says it is prohibiting the use of its tools for political campaigning; encoding details about the provenance of images generated by its Dall-E tool; and answering questions about how and where to vote in the U.S. with a link to CanIVote.org, operated by the National Association of Secretaries of State.

People who have studied elections debate how much an AI deepfake could actually sway someone’s vote, especially in America, where most people say they have likely already decided whom they will support for president. Yet the very possibility of AI-generated fakes could also muddy the waters in a different way, by leading people to question even real images and recordings.

Social-media giants have been struggling for years with questions around political content. In 2020, they went to aggressive lengths to police political discourse, partly in response to reports of Russian interference in the U.S. election four years earlier. Since his 2022 acquisition of Twitter, Elon Musk has renamed the site X and rolled back many of its previous restrictions in the name of free speech.

New Era of AI Deepfakes Complicates 2024 Elections

wsj.com
