Google’s new tool can detect AI-generated images, but it’s not that simple


Image: Google DeepMind's tool can detect AI-generated images even after editing, color changes, or added filters. (Credit: Google DeepMind)

Images generated by artificial intelligence tools are becoming harder to distinguish from those created by humans. AI-generated images can spread misinformation on a massive scale, fueling irresponsible uses of AI. To address this, Google unveiled SynthID, a new tool that can differentiate AI-generated images from human-created ones.

The tool, created by the DeepMind team, adds an imperceptible digital watermark, akin to a signature, to AI-generated images. The same tool can later detect this watermark to identify which images were created by AI, even after modifications such as adding filters, compressing, or changing colors.

Also: How Google, UCLA are prompting AI to choose the next action for a better answer

SynthID combines two deep learning models into one tool: one embeds the watermark into the original content in a way that is imperceptible to the naked eye, and the other identifies watermarked images.
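To make that embed-then-verify workflow concrete, here is a minimal Python sketch. SynthID's actual API is not described in this article, so `embed_watermark` and `detect_watermark` below are hypothetical placeholders standing in for the two deep learning models; only the Pillow edits (blur, color shift, JPEG re-compression) are real operations illustrating the kinds of modifications the watermark is said to survive.

```python
# Hypothetical sketch of an invisible-watermark workflow; not Google's API.
import io

from PIL import Image, ImageEnhance, ImageFilter


def embed_watermark(image: Image.Image) -> Image.Image:
    """Placeholder for the embedding model: returns a visually identical
    image carrying an imperceptible watermark."""
    return image.copy()  # a real model would perturb pixels imperceptibly


def detect_watermark(image: Image.Image) -> bool:
    """Placeholder for the detection model: reports whether the
    watermark signal is present."""
    return True  # a real model would score the image for the watermark


# Start from an AI-generated image (a blank stand-in here).
generated = Image.new("RGB", (512, 512), color="gray")
watermarked = embed_watermark(generated)

# Apply the kinds of edits the article mentions: filtering, recoloring,
# and lossy compression.
edited = watermarked.filter(ImageFilter.GaussianBlur(radius=2))
edited = ImageEnhance.Color(edited).enhance(0.5)
buffer = io.BytesIO()
edited.save(buffer, format="JPEG", quality=60)
buffer.seek(0)
edited = Image.open(buffer)

# The claim is that detection still succeeds after these modifications.
print("AI-generated?", detect_watermark(edited))
```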

Currently, SynthID cannot detect all AI-generated images; it is limited to those created with Google's text-to-image tool, Imagen. Still, it points to a promising future for responsible AI, especially if other companies adopt SynthID in their own generative AI tools.

Also: Google’s AI-powered search summary now points you to its online sources

The tool is rolling out gradually to Vertex AI customers who use Imagen and, for now, is available only on that platform. However, Google DeepMind hopes to bring it to other Google products and to third parties soon.


