Google DeepMind Trials Watermarks For AI Images
Google DeepMind develops tool to identify images generated by machines, as tech industry seeks safeguards for AI
Alphabet is making good on its commitment to implement voluntary safeguards against the risks posed by artificial intelligence (AI).
London-based Google DeepMind announced on Tuesday that it is “launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.”
It comes after big-name tech firms agreed with the Biden Administration in July to implement voluntary safeguards against the risks posed by AI.
AI safeguards
The move came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.
In July President Biden met with Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
He secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
The tech giants pledged to:
- ensure products are safe before introducing them to the public;
- build systems that put security first;
- and earn the public’s trust, which includes developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system.
AI watermarking tool
Google DeepMind noted in its blog post that AI-generated images are becoming more widespread every day, but that identifying them can be a challenge, especially when they look so realistic.
To this end, Google DeepMind, in partnership with Google Cloud, has launched a beta version of SynthID, a tool for watermarking and identifying AI-generated images.
“This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” said Google DeepMind. “SynthID is being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that uses input text to create photorealistic images.”
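SynthID’s actual embedding scheme is a learned, unpublished technique, but the general idea of hiding an imperceptible signal in pixel values can be sketched with a classic spread-spectrum watermark. The following Python sketch is purely illustrative, and its function names, key and strength parameters are hypothetical, not DeepMind’s method:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    """Add a faint pseudo-random +/-1 pattern, derived from `key`, to the pixels."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Correlate the pixels with the key's pattern; a high score means watermarked."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    score = np.mean((image.astype(np.float64) - image.mean()) * pattern)
    return score > threshold

# Demo on a random stand-in "photo": at this strength the change is invisible.
photo = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
print(detect_watermark(embed_watermark(photo, key=42), key=42))  # True
print(detect_watermark(photo, key=42))                           # False
```

A fixed pattern like this is fragile; a learned system in the spirit of SynthID instead adapts the embedded signal to the image content, which is what allows it to survive edits.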
This makes Google Cloud the first cloud provider to offer a tool for creating AI-generated images responsibly and identifying them with confidence, the firm said.
SynthID isn’t foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly, the firm stated.
The tool could also evolve alongside other AI models and expand to modalities beyond imagery, such as audio, video and text, it added.
Metadata proof
Traditional watermarks aren’t sufficient for identifying AI-generated images because they’re often applied like a stamp on an image and can easily be edited out.
Google DeepMind therefore designed SynthID so that it doesn’t compromise image quality, and so that the watermark remains detectable even after modifications such as adding filters, changing colours, or saving with lossy compression schemes such as JPEG.
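One way to check that kind of robustness claim is to re-encode an image at several JPEG quality levels and re-run detection each time. The harness below assumes a generic `detect` scoring function (SynthID’s own detector is not public), and the sample image and stand-in detector exist only to make it runnable:

```python
import io
import numpy as np
from PIL import Image

def robustness_check(image: Image.Image, detect, qualities=(95, 75, 50)) -> None:
    """Re-encode `image` at several JPEG quality levels and re-score detection."""
    for q in qualities:
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=q)
        print(f"quality={q}: detection score={detect(Image.open(buf)):.3f}")

# Stand-in image and detector, purely for demonstration; a real detector
# would be a trained model returning a watermark confidence score.
rng = np.random.default_rng(0)
sample = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
robustness_check(sample, detect=lambda im: float(np.asarray(im).mean()) / 255.0)
```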
SynthID uses two deep learning models – for watermarking and identifying – that have been trained together on a diverse set of images. The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content.
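DeepMind has not published the architecture, but the joint-training idea resembles published encoder/decoder watermarking work such as HiDDeN. The PyTorch sketch below, with entirely hypothetical layer sizes, message length and loss weights, shows how a single optimiser step can trade off decoding accuracy against imperceptibility:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hypothetical: maps an image plus a message to a subtly watermarked image."""
    def __init__(self, bits: int = 64):
        super().__init__()
        self.fc = nn.Linear(bits, 16)
        self.conv = nn.Sequential(
            nn.Conv2d(3 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, message):
        b, _, h, w = image.shape
        m = self.fc(message).view(b, 16, 1, 1).expand(b, 16, h, w)
        # A small residual keeps the watermark visually imperceptible.
        return image + 0.01 * self.conv(torch.cat([image, m], dim=1))

class Decoder(nn.Module):
    """Hypothetical: recovers the message bits from a (possibly edited) image."""
    def __init__(self, bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, bits),
        )

    def forward(self, image):
        return self.net(image)

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

# One illustrative step on random data: both models are optimised together.
images = torch.rand(8, 3, 64, 64)
bits = torch.randint(0, 2, (8, 64)).float()
marked = encoder(images, bits)
# Joint objective: decode the bits correctly AND stay visually close to the input.
loss = F.binary_cross_entropy_with_logits(decoder(marked), bits) \
       + 10.0 * F.mse_loss(marked, images)
loss.backward()
opt.step()
```

Training the two models against each other in this way is what lets the watermark stay invisible while remaining machine-readable.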
One of the most widely used methods of identifying content is through metadata, but metadata can easily be removed. Because SynthID’s watermark is embedded in the pixels of the image itself, it is compatible with metadata-based identification approaches and remains detectable even when the metadata is lost.
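A short Pillow sketch makes that contrast concrete; here an EXIF Software tag stands in for provenance metadata, and the tag value is hypothetical:

```python
import io
from PIL import Image

# A tiny image carrying an EXIF "Software" tag as stand-in provenance metadata.
img = Image.new("RGB", (32, 32), color=(200, 120, 40))
exif = Image.Exif()
exif[0x0131] = "SomeImageGenerator"  # 0x0131 is the standard Software tag
original = io.BytesIO()
img.save(original, format="JPEG", exif=exif.tobytes())

# Simply re-saving the image drops the metadata: Pillow, like many tools,
# does not carry EXIF over by default.
resaved = io.BytesIO()
Image.open(original).save(resaved, format="JPEG")

print(dict(Image.open(original).getexif()))  # {305: 'SomeImageGenerator'}
print(dict(Image.open(resaved).getexif()))   # {} -- metadata provenance is gone
# The pixel values survive (up to JPEG loss), so a watermark embedded in them,
# as SynthID's is, can still be checked after the metadata disappears.
```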
The UK government meanwhile recently announced that its international AI Safety Summit is to be held at the iconic Bletchley Park on 1 and 2 November 2023.
The AI Safety Summit is the first major international summit of its kind on the safe use of artificial intelligence, and will host talks “to explore and build consensus on rapid, international action to advance safety at the frontier of AI technology.”