OpenAI, Google, and Anthropic Commit to ‘AI Watermarking’ to Combat Deepfakes


In a major move to tackle the growing threat of AI-generated misinformation, leading tech firms including OpenAI, Google, and Anthropic have agreed to implement digital “watermarking” techniques to identify content created by artificial intelligence. This collaborative effort comes amid increasing concerns over the rise of deepfakes and AI-generated media that can mislead the public, especially in politically sensitive or high-stakes scenarios.

Announced as part of an international initiative involving the White House and other global partners, this agreement aims to establish a new standard for transparency in AI-generated content. The idea is simple but powerful: all AI-created audio, image, and video content should carry an invisible, traceable signature—a watermark—that makes it easier for platforms, users, and watchdogs to verify its origin.

With major elections looming in several countries and deepfake technology becoming more accessible and realistic, the need for such tools has become undeniable. Experts warn that untraceable AI-generated media could be used to manipulate public opinion, impersonate public figures, or spread false narratives at scale. By embedding watermarks, companies hope to add a layer of accountability to the content their AI models produce.

OpenAI, the maker of ChatGPT and Sora, confirmed it will integrate watermarking techniques across all its media-generating models. Google, through its Gemini models and the DeepMind research division, said it is already testing AI watermarking solutions that are resistant to tampering. Anthropic also joined the pledge, saying it will embed provenance information in its Claude-generated outputs wherever feasible.

The watermarking tech itself is still being refined. While traditional watermarks can be removed or altered, AI watermarking involves sophisticated cryptographic or algorithmic signals that are hard to erase without damaging the content itself. These signals are designed to be detected by verification tools but remain invisible to the human eye or ear.
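To make the embed-and-detect idea concrete, here is a minimal, illustrative sketch in Python of the simplest possible invisible watermark: hiding a provenance tag in the least significant bits of an image's pixel values. This is a toy example only; it is not the scheme used by any of the companies above, and an LSB mark is easily destroyed by compression or resizing, which is exactly why production systems rely on the more robust cryptographic or algorithmic signals described here. The payload string and function names are hypothetical choices for the sketch.

```python
# Toy illustration of invisible watermarking: embed a provenance tag in the
# least significant bit (LSB) of each pixel. Real AI watermarks are far more
# robust; this sketch only demonstrates the embed/verify idea.
import numpy as np

def embed_watermark(image: np.ndarray, payload: str) -> np.ndarray:
    """Hide the payload's bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite only the LSB
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, payload_len: int) -> str:
    """Read back payload_len bytes from the image's least significant bits."""
    flat = image.flatten()
    bits = flat[: payload_len * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in grayscale image
    tag = "ai-generated:model-x"                               # hypothetical provenance tag
    marked = embed_watermark(img, tag)
    print(extract_watermark(marked, len(tag)))                 # prints the recovered tag
```

The change to each pixel is at most one intensity level, so the mark is imperceptible to the eye, but a verification tool that knows where to look can recover the tag. Tamper-resistant schemes spread the signal across the content and tie it to a cryptographic key so it survives edits and cannot be forged as easily.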

Social media giants like Meta and TikTok have also expressed interest in incorporating detection systems that flag or label AI-generated content, especially during election seasons. YouTube has already begun requiring creators to disclose when their videos contain synthetic or altered media.

Still, the initiative isn’t without criticism. Privacy advocates warn that excessive monitoring or watermarking could lead to overreach or misuse by authoritarian regimes. Others question whether these measures will be effective if bad actors simply use open-source or unregulated AI tools that skip watermarking entirely.

Nevertheless, this is a step forward in an evolving conversation about AI ethics, safety, and responsibility. As generative AI becomes more powerful and pervasive, transparency will likely become a foundational pillar in building public trust.
