Experts from the artificial intelligence (AI) industry, as well as tech executives and academics, have signed an open letter warning about the dangers of AI deepfakes.
The letter, entitled “Disrupting the Deepfake Supply Chain”, calls for greater regulation of AI deepfakes. It was signed by more than 440 people, including Yoshua Bengio, one of the so-called godfathers of AI, and other academics, as well as former Facebook whistleblower Frances Haugen, a research scientist at Google DeepMind, a researcher from OpenAI, and Skype co-founder Jaan Tallinn.
Dr Geoffrey Hinton, Yoshua Bengio and Yann LeCun are considered by many to be the three godfathers of artificial intelligence (AI), due to their many years of pioneering work on AI and deep learning.
“Today, deepfakes often involve sexual imagery, fraud, or political disinformation,” states the letter. “Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed for the functioning and integrity of our digital infrastructure.”
“Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes,” the letter states.
The letter calls for new laws to curb the creation and spread of harmful deepfakes. If designed wisely, it argues, such laws could nurture socially responsible businesses and would not need to be excessively burdensome.
Deepfakes are realistic but fabricated images, audio and video created by AI algorithms. Recent developments have made them increasingly difficult to distinguish from human-created content.
The warning comes after OpenAI last week unveiled a new tool that can create short-form videos from simple text instructions, a development that could interest content creators but also have a significant impact on the digital entertainment market.
The problem posed by deepfakes has been known for a while now.
In early 2020 Facebook announced it would remove deepfakes and other manipulated videos from its platform, but only if the content met certain criteria.
Then in September 2020, Microsoft released a software tool that could identify deepfake photos and videos in an effort to combat disinformation.
The risks associated with deepfake videos were demonstrated in March 2022, when both Facebook and YouTube removed a deepfake video of Ukrainian President Volodymyr Zelensky, in which he appeared to tell Ukrainians to put down their weapons as the country resisted Russia’s illegal invasion.
Deepfake cases have also involved Western political leaders, with images of former US Presidents Barack Obama and Donald Trump used in various misleading videos.
More recently, in January 2024, US authorities began an investigation after a robocall that seemingly used artificial intelligence to mimic Joe Biden’s voice was sent to a number of voters to discourage them from voting in a primary election in the US.
Also last month, AI-generated explicit images of the singer Taylor Swift were viewed millions of times online.
Last July the Biden administration announced that a number of big-name players in the artificial intelligence market had agreed to voluntary AI safeguards.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI made a number of commitments, one of the most notable being the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be used for fraud and other criminal purposes.