Artificial intelligence is woven into almost every corner of modern life, from apps and ads to newsfeeds and security systems.
Modern generative models can produce photorealistic images and video, convincing voice clones, pages of coherent prose, and tailored disinformation at scale. That capability has supercharged creative workflows and productivity.
However, this capability comes at a cost: it has also made deception, misinformation, and biased automated decisions far easier and cheaper to produce. Think deepfakes and voice cloning, discriminatory algorithms, propaganda, and financial scams.
As a result, regulators and corporate compliance teams face an urgent question: can policy and corporate governance realistically keep pace with the speed of technical progress?
Let’s take a closer look.
Three reasons they can’t keep up
1) Advancement is rapid & multi-dimensional
Large language models, multimodal systems and generative image/video/audio models have shown rapid improvements across many benchmarks within just a few years. That pace of advancement outstrips the normal cadence of lawmaking and corporate policy updates.
The 2024 Stanford AI Index documents that AI has surpassed human performance on many benchmark tasks and that capability improvements happen quickly across research and industry. This makes it hard for static rules to remain appropriate for long.
2) AI is weaponized faster than rules are written
Real-world scams and disinformation campaigns show how quickly new generative tools are adopted for harm.
In 2024, UK engineering firm Arup lost roughly $25 million to an AI-fuelled scam: a deepfake video call impersonating the company’s CFO convinced a Hong Kong-based employee to authorize the transfers.
High-profile incidents, like what happened to Arup, illustrate that criminals and opportunists adopt generative techniques fast and at scale. Platforms and laws often react after these harms become visible, not before.
3) Detection tools have limits & adversaries adapt
Technical defences such as detection algorithms, metadata provenance and watermarking exist, but they are imperfect and subject to evasion.
NIST’s survey of technical approaches to synthetic content highlights both useful techniques and their limits: watermarks can be stripped or broken, metadata can be forged, and classifiers degrade when content is edited or translated.
Meanwhile, attackers use adversarial techniques to defeat detectors. That persistent “arms race” means detection alone cannot be a silver bullet.
Picking up the pace
1) Aggressive policy frameworks are emerging
Regulatory systems are not entirely absent. The EU’s Artificial Intelligence Act is the first comprehensive law that categorizes AI risk and sets rules for “high-risk” systems, transparency, and prohibited uses.
Published in 2024, the Act creates legal obligations for developers and providers, giving regulators enforcement mechanisms that, if mirrored elsewhere, could raise the compliance bar substantially.
That’s a structural advantage lawmakers can use.
2) Industry standards & technical provenance can scale
Major tech firms and standards coalitions are building interoperable provenance and watermarking systems — like the Coalition for Content Provenance and Authenticity (C2PA) and other content-credential efforts.
Companies such as OpenAI, Adobe and Google have joined or support these standards, and platforms such as TikTok and Meta have piloted labelling or content credentials.
This shows a practical, scalable pathway to flag or trace synthetic content in the distribution chain — reducing the ability of bad actors to anonymously amplify sophisticated fakes.
These are imperfect today, but they provide operational levers for platforms and regulators to require transparency.
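To make this concrete, here is a minimal sketch of how a platform’s ingest step might check an upload for content credentials before deciding how to label or route it. It assumes the C2PA reference CLI c2patool is installed and prints the manifest store as JSON when given a file path (confirm the exact invocation and output against the tool’s documentation); the triage policy shown is purely illustrative.

```python
# Minimal sketch: checking an upload for C2PA content credentials at ingest.
# Assumption: the C2PA reference CLI `c2patool` is installed and prints the
# manifest store as JSON when invoked with a file path; verify against its docs.
import json
import subprocess
from pathlib import Path


def read_content_credentials(path: Path) -> dict | None:
    """Return the parsed C2PA manifest for `path`, or None if absent or unreadable."""
    try:
        result = subprocess.run(
            ["c2patool", str(path)],
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        # Tool missing, no manifest embedded, or manifest could not be parsed.
        return None


def triage(path: Path) -> str:
    """Illustrative policy: files without verifiable provenance get extra review."""
    if read_content_credentials(path) is None:
        return "no-credentials: route to detection / human review"
    return "credentials-present: surface a provenance label to users"


if __name__ == "__main__":
    print(triage(Path("upload.jpg")))
```

Note that the absence of credentials is not proof of manipulation; it simply means the file falls back to detection or human review, which is one reason provenance and detection work best in combination.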
3) Enhanced technical mitigation & detection
Research labs, standards bodies, university groups and specialized companies such as Sensity and Reality Defender are producing practical detection tools and operational guidance.
NIST has published technical overviews and standards work on watermarking, detection, and provenance.
Google has open-sourced a text watermarking approach (SynthID), and many vendors now offer detection services that can be integrated into corporate security workflows or platform moderation pipelines.
Combining legal requirements, such as mandatory provenance, with technically sound detection and corporate compliance programs can materially reduce harms even if it doesn’t eliminate them.
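As a rough illustration of what that combination can look like in practice, the sketch below sends an uploaded file to a synthetic-media detection service and writes the decision to an append-only log that a compliance team could audit. The endpoint URL, the bearer-token header, the synthetic_score response field and the 0.7 threshold are hypothetical placeholders, not any specific vendor’s API.

```python
# Rough sketch: scoring an upload with a (hypothetical) detection service and
# recording the decision in an append-only audit log for compliance review.
import json
import time
from pathlib import Path

import requests

DETECTION_ENDPOINT = "https://detector.example.com/v1/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"   # placeholder credential
REVIEW_THRESHOLD = 0.7     # illustrative cut-off for routing to human review


def score_media(path: Path) -> float:
    """Send a media file to the hypothetical detector and return a 0-1 score."""
    with path.open("rb") as fh:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": fh},
            timeout=60,
        )
    response.raise_for_status()
    return float(response.json()["synthetic_score"])  # hypothetical field name


def handle_upload(path: Path, audit_log: Path) -> str:
    """Score an upload, log the outcome, and return the routing decision."""
    score = score_media(path)
    decision = "human-review" if score >= REVIEW_THRESHOLD else "auto-allow"
    record = {"file": path.name, "score": score, "decision": decision, "ts": time.time()}
    with audit_log.open("a") as log:
        log.write(json.dumps(record) + "\n")  # one JSON line per decision
    return decision
```

The specific calls matter less than the pattern: automated scoring routes ambiguous cases to people, and every decision leaves an auditable trail that internal compliance teams or regulators can inspect.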
It’s a marathon, not a sprint
AI capabilities are improving at a lightning-fast pace, and malicious actors are standing by, ready to catch that lightning in a bottle for nefarious use.
Lawmakers and corporations have shown a pattern of playing catch-up and reacting to damage as it appears.
However, there are clear, concrete steps and tools that can help prevent digital harm and protect us when it does occur.
Ultimately, keeping up with AI won’t come from a single law, a single detector, or a single corporate policy.
It requires a layered strategy:
- Up-to-date regulation that focuses on risk and accountability
- Industry standards and technical provenance
- Operational detection
- Security practices and training
- Public awareness and timely enforcement
Vigilance is needed. Unfortunately, when it comes to deepfake content especially, we can no longer fully believe what we see with our own eyes. That is a tough pill to swallow. However, we can adapt and sharpen our senses to see what lies beneath.
Technology evolves, adversaries adapt, and policy cycles are often slower than research cycles. But with coordinated public-private action and ongoing technical investment, it’s possible to reduce risks significantly.