Google Tests Watermark to Identify AI Images

Google is currently testing a digital watermark developed by its AI arm, DeepMind, in an effort to combat disinformation by identifying images created by artificial intelligence (AI). The watermark, called SynthID, works by embedding changes to individual pixels in images that are invisible to the human eye but detectable by computers. However, DeepMind has acknowledged that the technology is not entirely foolproof against extreme manipulation of images.
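DeepMind has not published SynthID's algorithm, but the general idea of an invisible, machine-readable watermark can be illustrated with a deliberately simple technique: hiding a signature in the least significant bit of each pixel value. The sketch below is a toy example, not SynthID's actual method, and the `WATERMARK` pattern is a made-up placeholder.

```python
# Toy illustration of invisible pixel-level watermarking (NOT SynthID's
# actual algorithm): hide a repeating bit pattern in the least significant
# bit of each pixel value. A change of at most 1 in a 0-255 channel is
# imperceptible to the eye but trivially recoverable by software.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite each pixel's least significant bit with a watermark bit."""
    return [(p & ~1) | mark[i % len(mark)] for i, p in enumerate(pixels)]

def detect(pixels, mark=WATERMARK):
    """Return the fraction of pixels carrying the expected watermark bit."""
    hits = sum((p & 1) == mark[i % len(mark)] for i, p in enumerate(pixels))
    return hits / len(pixels)

image = [52, 131, 200, 7, 88, 255, 14, 90, 173, 61, 240, 33]
marked = embed(image)

# The watermarked image differs from the original by at most 1 per pixel.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
print(detect(marked))  # → 1.0 for a freshly watermarked image
```

Unlike this naive scheme, which is destroyed by cropping or recompression, Google says its watermark survives such edits; achieving that robustness is precisely what makes systems like SynthID harder to build.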

As AI technology progresses, it is becoming increasingly difficult to distinguish between real and artificially generated images. AI image generators such as Midjourney have surged in popularity, allowing users to create images in seconds from simple text prompts. This has raised concerns over copyright and ownership.

Google, which has its own image generator called Imagen, will initially use the system only to create and check watermarks on images produced with that tool. Traditional watermarks, such as logos or text overlaid on images, are unsuited to identifying AI-generated content because they can easily be edited or cropped out.

Google’s system creates an effectively invisible watermark that enables instant identification of whether an image is real or machine-generated. The software can even detect the watermark’s presence after the image has been cropped or edited, making it highly robust.

In July, Google and six other leading AI companies signed a voluntary agreement in the US to ensure the safe development and use of AI, which includes implementing watermarks to help people identify computer-generated images. However, there is a need for more coordination between businesses and standardization in this area.

Other tech companies, including Microsoft and Amazon, have also pledged to use watermarks on AI-generated content. Furthermore, Meta, the parent company of Facebook, has published a research paper discussing its plans to add watermarks to generated videos to address transparency concerns over AI-generated works.

China has gone further, banning AI-generated images that lack watermarks; companies such as Alibaba already apply them to creations made with its cloud division's text-to-image tool.

The development of watermarks for AI images is a significant step in combating the spread of disinformation and ensuring the transparency and accountability of AI-generated content.