Beyond Denmark: A Global Take on AI‑Generated Video Regulation
Denmark’s new law is making headlines by giving individuals the right to control the use of their own likenesses. But where else in the world are governments cracking down on deepfakes and AI‑generated content? Below is a quick tour of countries that have already enacted—or are drafting—rules around this tech, all presented with a bit of wit and real‑world flavor.
1. United States
- California: The Deepfake Disclosure Act (2023) requires that any video with synthetic or altered content be clearly labeled. Think of it as the “video truth label” you’d see on a soda can.
- Texas: The Fake Video Accountability Act penalizes malicious productions used for defamation or political manipulation. The state even gave a nickname to its law—“Truth in Video Act” (TIVA).
- Florida: The state’s Deepfake Defense Law protects victims from defamatory content and requires producers to obtain clearances before using a person’s likeness.
2. United Kingdom
The UK’s Deceptive Practice Prevention Act focuses on commercial usage. It says, “No one can sell you a product with a deepfake that misleads the consumer.” For the government sector, the AI Control Framework is in place to monitor political influence.
3. Germany
Germany’s “Model Person Rights Act” protects a person’s rights in their own image and requires that AI‑generated avatars be used only with explicit permission, in both professional contexts and popular media.
4. France
- France’s Anti‑Deepfake Code tackles political persuasion. The law protects against the use of altered media that could sway elections.
- One humorous twist: the law is nicknamed the “Zuzu Act,” after a popular French courtroom drama that highlighted the dangers of fake videos.
5. Australia
Australia entered the club with its Legal Framework for Artificial Intelligence, Deepfakes, and Misrepresentation Act 2023. It centers on consent: an artificial representation of a person may only be created or used with that person’s permission.
6. Canada
Ontario’s AI Video Regulations demand full disclosure if a human is replaced. The law also prohibits the use of AI‑generated content to defame or mislead other people.
7. China
In China, the AI Paint Regulation Act keeps oversight strict. Authorities have emphasized that any artificial content that could confuse the public is tightly regulated. Think of it as a “no‑fake‑flick” policy for official media.
Emotional Quirks of the Law
While these laws are serious (no pun intended), they also carry a lighthearted undertone. “We’re basically giving you the keys to your own digital paparazzi,” jokes one expert. In a world where anyone’s face can be convincingly faked, lawmakers are aiming for a delicate balance between innovation, privacy, and truth.
Why It Matters
Deepfakes have become a double‑edged sword: on one side, they can democratize content creation; on the other, they pose a real threat. By putting control back in the hands of the people whose faces are used, these laws aim to keep us safe from viral “what‑if” scenarios. From Denmark’s copyright‑style protection for faces to the patchwork of US deepfake legislation, each country is taking its own stab at a complex issue.
Bottom line: the world is moving fast. If you can’t be on the front line of AI‑driven identity control, at least keep an eye on the laws in place to stop the biggest of the deepfake villains. Because, in the end, it’s not just about the image—it’s about the truth behind it.
Denmark’s New Deepfake Defense: Copyright Your Own Face
Get ready to put the brakes on the AI‑powered paparazzi. Denmark is rolling out a law that puts a legal shield over your chuckles, your chin, and the way you whine about laundry. Because lately, it seems anyone with a laptop can remix you into a viral cat video or a political campaign ad—without asking.
What’s the Big Deal?
- All the big parties are on board; no political baggage.
- No one can post a deepfake or digital “imitation” of you without permission.
- The statute is geared to stop misinformation and to protect rights over your body and voice.
The Voice Behind the Vision
Jakob Engel‑Schmidt, Denmark’s Culture Minister, told the press: “We’re giving everyone a clear sign that your body, voice, and face are yours to own. This isn’t just about selfies; this is about identity.”
What Are Deepfakes, Anyway?
In plain English, a deepfake is an AI‑generated clip that tweaks a person’s likeness: think of it as digital Photoshop for whole videos. It can be used to spread rumors, stage pranks, or put words in a person’s mouth in an unflattering context.
Technology’s Speed‑Up Challenge
Engel‑Schmidt warned: “Tech is moving at warp speed. Eventually, reality will be hard to tell from fiction. Our new law is a safety net against misinformation, and it tells the tech giants: ‘Hey, stop the prank war.’”
Europe’s Wider Move
Denmark isn’t alone. Other European nations are also scripting new legal frameworks to keep deepfakes in check, ensuring your selfies stay your own, not the next viral meme.
European Union
EU’s AI Act: Deepfakes and the New “Watch Your Step” Rules
What the EU’s Four‑Tier Risk System Looks Like
Under the AI Act, AI systems are sorted into four risk buckets:
- Minimal risk – the fine‑print stuff that probably won’t bother anyone.
- Limited risk – this is where deepfakes land.
- High risk – safety‑critical tech (think autonomous cars).
- Unacceptable risk – the flat‑out “no” from regulators: these uses are banned outright.
Why Deepfakes Aren’t Banned, but They’re Still Under a Watchful Eye
Deepfakes have slipped into the limited‑risk slot. That means:
- You get to keep making them – no absolute ban.
- But you must add a visible label or watermark so viewers know it’s AI‑made.
- Companies also have to publish summaries of the training data that fed their models.
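For platform engineers, the transparency duty above ultimately boils down to a labeling step somewhere in the publishing pipeline. Here is a minimal sketch of what that might look like; the function name and tag wording are hypothetical, since the Act mandates disclosure but does not prescribe any particular format:

```python
def label_ai_content(caption: str, disclosure: str = "[AI-generated]") -> str:
    """Prefix a caption with an AI disclosure tag unless one is already present.

    The tag text is a placeholder; any real implementation would follow
    whatever disclosure format the platform or regulator settles on.
    """
    if caption.startswith(disclosure):
        return caption  # already labeled, don't stack tags
    return f"{disclosure} {caption}"


print(label_ai_content("Candidate gives surprise speech"))
# [AI-generated] Candidate gives surprise speech
```

The idempotency check matters in practice: content often passes through several services, and you don’t want a caption that reads “[AI-generated] [AI-generated] …” by the time it reaches viewers.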
Consequences When You Screw Up the Transparency Rules
If a firm flouts the transparency checklist, the penalties are swift:
- Up to €15 million or 3 % of last year’s global turnover, whichever is higher.
- If it’s a banned practice, that climbs to €35 million or 7 % of turnover.
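Reading the two tiers as “fixed cap or share of turnover, whichever is higher” (the usual interpretation of such clauses, though the bullets above just say “or”), the exposure can be sketched like this; the function name is illustrative:

```python
def max_ai_act_fine(turnover_eur: float, banned_practice: bool = False) -> float:
    """Upper bound on an AI Act fine: the greater of a fixed sum or a
    percentage of last year's global turnover (illustrative sketch)."""
    fixed, share = (35_000_000, 0.07) if banned_practice else (15_000_000, 0.03)
    return max(fixed, share * turnover_eur)


# Transparency breach at a firm with EUR 2 billion turnover:
print(max_ai_act_fine(2_000_000_000))  # 60000000.0 -> 3% beats the EUR 15M cap
```

The “whichever is higher” structure means the fixed sum acts as a floor for large companies and a ceiling only for small ones: below €500 million in turnover, the €15 million cap dominates; above it, the percentage does.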
That’s a Sharp Whammy on Your Bottom Line
But Wait—There’s More You Should Know
The Act also takes aim at “manipulative AI” that messes with people’s heads through subliminal tricks or outright deception. Systems that do this face a full ban.
Sex‑Related Deepfakes: Legal Red Flags in the EU
On the topic of adult content, the new EU directive on violence against women criminalises non‑consensual deepfake creation and manipulation. The key take‑away:
- No clear penalty structure is outlined – member states decide what’s fair.
- Implementation deadline? June 2027.
Bottom Line: Take Care, or Face Fast‑Mounting Fines
So, whether you’re a startup trying to push the envelope on AI videos or a big platform looking to stay compliant, the message is crystal clear: label it, disclose the data, and stay out of the “unacceptable” tier. Otherwise, you’re looking at a fine big enough to put a serious dent in your balance sheet.
France’s digital spaces law
France Tightens the Net on AI‑Made Deepfakes
In 2024, the French legislature introduced new penalties for anyone who redistributes AI‑generated visual or audio content without the subject’s permission. The goal? Keep folks from becoming the unwitting stars of strangers’ feeds.
What’s the rulebook now?
- Share a deepfake? Get permission first.
- If you do share, your post must shout out that it’s AI‑made, so no “who made this?” mystery.
- Distributors can face real consequences: up to 1 year behind bars and a €15,000 fine.
- Share it via an online platform? The stakes rise to 2 years and a €45,000 fine.
- Pornographic deepfakes are off the table entirely, even if they’re labeled “fake.”
Harder to Do Than You Think
Got hold of a deepfake and want to share it? Think again. If you push it into the public eye without a clear “AI‑generated” flag, you face a prison sentence of up to 3 years and a €75,000 fine.
Arcom’s New Badge of Power
France’s audiovisual watchdog, Arcom, now has the authority to compel platforms to take down illegal content and honor stronger reporting protocols. In short, any platform with a big social‑media stage has to add at least one more thing: a big, clear signal that says “This was made by AI.”
Why It Matters
- Prevents the spread of misleading content.
- Protects privacy and the dignity of every individual.
- Encourages platforms to maintain robust moderation.
Bottom line: in France, if you think you can casually share AI‑made content without consent, you’ll likely end up in a cell and paying a hefty fine. The takeaway? Treat deepfakes like you treat your grandma’s privacy: respectfully, and with a firm “Yes, I have consent.”
Two-year sentence in the UK for deepfake porn
Britain Tightens Rules on Deepfake Porn
From Unthinkable Fantasies to Real‑World Consequences
The United Kingdom is sharpening its legal defenses against a new breed of digital wickedness—deepfake pornography. Since the passage of the Data (Use and Access) Bill, lawmakers have stepped in to make it crystal‑clear that anyone who manipulates someone’s image for sexual gratification or to cause distress is looking at a serious legal backlash.
- Unlimited fines can be slapped on anyone who creates this heinous content.
- Under the Sexual Offences Act, an offender could face up to two years in prison for producing sexual deepfakes.
- The Online Safety Act bans the sharing—or even threatening to share—non‑consensual sexual images on social media, and gives platforms the duty to proactively remove this material before it shows up.
If a platform flouts the law, Ofcom can impose fines of up to 10 % of its global revenue. That’s a big hit on the big players, but the real hammer is held over the creation side.
However, a reality check comes from Julia Hörnle, a professor at Queen Mary University’s School of Law. She points out that the Online Safety Act does not outright ban the creation of deepfake images, meaning victims can still suffer harm even if the content never gets shared publicly. She calls for a new approach that criminalises the entire ecosystem: development, distribution, and promotion of AI tools that make deepfakes possible.
What Does It All Mean for Ordinary Citizens?
In plain English, if you’re caught spreading deepfake porn on platforms, you could walk away with a hefty fine or even a prison term. And if tech companies don’t act swiftly to pull those images off their servers, they’re also liable. The real trick is that if you’re the one creating them behind closed doors, you’re still in hot water because authorities are now stepping up to curb the tools and methods that make the whole business model feasible.

