
Meta wants you to disclose when you post AI content, or it might remove it


There’s no avoiding it; AI-generated content is pretty much overrunning the internet. Companies like Google and Adobe are looking to make it easier to tell when content is generated with AI. According to a new report, Meta wants users to disclose when they post AI content. If they don’t, they run the risk of having it removed.

It’s hard to go anywhere on the internet without seeing some sort of AI content. Instagram is rife with AI-generated images, YouTube is loaded with AI-generated videos, and so on. This is why companies are looking to create tools to better identify when content is synthetic.

Google has a watermarking system called SynthID to identify when people make content using its image generator. The company also extended that to audio files. Meta also has a watermarking system that applies to images made with its Imagine AI generator. Some AI-generated media is obviously AI-generated, but a lot of content posted nowadays can fool the masses.

That’s a problem as is. However, 2024 is an election year, and that makes AI a fertile breeding ground for misinformation and deceptive imagery. This is why it’s crucial that these companies make it easy to identify AI content.

Meta wants users to disclose if they post AI content

Meta owns Facebook, Instagram, and Threads. These are three applications that have seen a ton of AI-generated content. It’s to the point where people fear it will overrun these platforms. Meta is working on tools to help people identify when AI was used to make content. They will apply to content made with tools from Google, Adobe, MidJourney, and others. Creating tools that can detect artificially created content is a massive challenge.

Well, the company is also enlisting the people making the posts to help identify AI content. Meta will start requiring folks to disclose if their content is made with AI. If they fail to do that, and Meta determines that it is AI-generated, the creator could face consequences. Nick Clegg, Meta’s president of Global Affairs, said that if a user does not disclose their AI content, then the “range of penalties that will apply will run the full gamut from warnings through to removal.”

This is a step in the right direction

The companies supplying us with the tools to create AI-generated content are now giving us tools to help identify that content. This is definitely a step in the right direction, as there’s no telling what effect AI-generated content will have on the tech community. Sure, social media posts and AI filter selfies are mostly harmless. However, there could be some major consequences if AI-generated content becomes too realistic.

We’ve already seen AI being used to mimic public figures and make them appear to do or say unsavory things. One example is the recent controversy over AI-generated pornography of Taylor Swift.

Being able to properly police this content is one of the most crucial things that tech companies can do. If they drop the ball on this, then we’ll just have to find out the consequences the hard way.
