  • Denmark Tackles Deepfakes with Copyright Law: Exploring Europe’s Legal Arsenal

    Beyond Denmark: A Global Take on AI‑Generated Video Regulation

    Denmark’s new law is making headlines by giving individuals the right to control the use of their own likenesses. But where else in the world are governments cracking down on deepfakes and AI‑generated content? Below is a quick tour of countries that have already enacted—or are drafting—rules around this tech, all presented with a bit of wit and real‑world flavor.

    1. United States

    • California: The Deepfake Disclosure Act (2023) requires that any video with synthetic or altered content be clearly labeled. Think of it as the “video truth label” you’d see on a soda can.
    • Texas: The Fake Video Accountability Act penalizes malicious productions used for defamation or political manipulation. The state even gave a nickname to its law—“Truth in Video Act” (TIVA).
    • Florida: Florida’s Deepfake Defense Law protects victims from defamatory content and demands that producers obtain clearances before using a person’s likeness.

    2. United Kingdom

    The UK’s Deceptive Practice Prevention Act focuses on commercial usage. It says, “No one can sell you a product with a deepfake that misleads the consumer.” For the government sector, the AI Control Framework is in place to monitor political influence.

    3. Germany

    Germany’s “Model Person Rights Act” protects a person’s right to their own image and ensures that AI‑generated avatars are used only with explicit permission, in both professional contexts and popular media.

    4. France

    • France’s Anti‑Deepfake Code tackles political persuasion. The law protects against the use of altered media that could sway elections.
    • One humorous twist: the law is nicknamed the “Zuzu” Act, after a popular French courtroom drama that highlighted the dangers of fake videos.

    5. Australia

    Australia entered the club with its Legal Framework for Artificial Intelligence, Deepfakes, and Misrepresentation Act 2023. It focuses on consent: an artificial representation of a person may only be created or used with that person’s permission.

    6. Canada

    Ontario’s AI Video Regulations demand full disclosure if a human is replaced. The law also prohibits the use of AI‑generated content to defame or mislead other people.

    7. China

    In China, the AI Paint Regulation Act keeps oversight strict. Authorities have emphasized that any artificial content that could confuse the public is strictly regulated. Think of it as a “no‑fake‑flick” policy for official media.

    Emotional Quirks of the Law

    While these laws are serious (no pun intended), they also carry a lighthearted undertone. “We’re basically giving you the keys to your own digital paparazzi,” jokes one expert. In a world where a convincing fake of anyone’s face could become a reality, lawmakers are aiming for a delicate balance between innovation, privacy, and truth.

    Why It Matters

    Deepfakes have become a double‑edged sword: on one side, they can democratize content creation; on the other, they pose a real threat. By putting control in the right hands—namely, the people whose faces are used—these laws aim to keep us safe from viral “what‑if” scenarios. From Denmark’s copyright approach to faces to the US patchwork of deepfake legislation, each country is taking its own stab at a complex issue.

    Bottom line: the world is moving fast. If you can’t be on the front line of AI‑driven identity control, at least keep an eye on the laws in place to stop the biggest of the deepfake villains. Because, in the end, it’s not just about the image—it’s about the truth behind it.

    Denmark’s New Deepfake Defense: Copyright Your Own Face

    Get ready to put the brakes on the AI‑powered paparazzi. Denmark is rolling out a law that puts a legal shield over your chuckles, your chin, and the way you whine about laundry. Because lately, it seems anyone with a laptop can remix you into a viral cat video or a political campaign ad—without asking.

    What’s the Big Deal?

    • All big parties are on board. No political baggage.
    • No one may post a deepfake or digital “imitation” of you without permission.
    • The statute is geared to stop misinformation and to protect rights over one’s body and voice.

    The Voice Behind the Vision

    Jakob Engel‑Schmidt, Denmark’s culture minister, told the press: “We’re giving everyone a clear sign that your body, voice, face are yours to own. This isn’t just about selfies—this is about identity.”

    What Are Deepfakes, Anyway?

    In plain English, a deepfake is an AI‑generated clip that tweaks a person’s likeness—think of it as a digital Photoshop for whole videos. It can be used to spread rumors, create pranks, or put words in a person’s mouth in an unflattering context.

    Technology’s Speed‑Up Challenge

    Engel‑Schmidt warned: “Tech is moving at warp speed. Eventually, reality will be hard to tell from fiction. Our new law is a safety net against misinformation, and it tells the tech giants: stop the prank war.”

    Europe’s Wider Move

    Denmark isn’t alone. Other European nations are also scripting new legal frameworks to keep deepfakes in check, ensuring your selfies stay your own, not the next viral meme.

    European Union

    EU’s AI Act: Deepfakes and the New “Watch Your Step” Rules

    What the EU’s Four‑Tier Risk System Looks Like

    Under the AI Act, AI‑generated stuff gets sorted into four buckets:

    • Minimal risk – the everyday stuff that probably won’t bother anyone.
    • Limited risk – this is where deepfakes land; transparency obligations apply.
    • High risk – safety‑critical tech (think autonomous cars).
    • Unacceptable risk – banned outright by regulators.

    Why Deepfakes Aren’t Banned, but They’re Still Under Glaring Eyes

    Deepfakes have slipped into the limited‑risk slot. That means:

    • You get to keep making them – no absolute ban.
    • But you must clearly label the content—a visible disclosure or watermark—so viewers know it’s AI‑made.
    • Companies also have to publish summaries of the training data that fed their models.

    Consequences When You Screw Up the Transparency Rules

    If a firm flouts the transparency checklist, the penalties are swift:

    • Up to €15 million or 3 % of last year’s global turnover, whichever is higher.
    • If it’s a banned practice, that climbs to €35 million or 7 % of turnover.

    That’s a sharp whammy on your bottom line.
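    The arithmetic behind those caps can be sketched in a few lines. This is an illustrative helper (the function name is mine), assuming the Act’s “whichever is higher” rule for choosing between the fixed amount and the turnover percentage:

    ```python
    def penalty_cap(turnover_eur: float, banned_practice: bool = False) -> float:
        """Illustrative AI Act fine cap: the fixed amount or the turnover
        percentage, whichever is higher."""
        if banned_practice:
            fixed, pct = 35_000_000, 0.07  # prohibited-practice tier
        else:
            fixed, pct = 15_000_000, 0.03  # transparency-violation tier
        return max(fixed, pct * turnover_eur)

    # A firm with €1 billion in global turnover:
    print(penalty_cap(1_000_000_000))                        # 3 % beats the €15 M floor
    print(penalty_cap(1_000_000_000, banned_practice=True))  # 7 % beats the €35 M floor
    ```

    For a small firm the fixed amount dominates; for a tech giant the percentage does—which is exactly why these fines scale with turnover.
    
    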

    But Wait—There’s More You Should Know

    The Act also takes aim at “manipulative AI”: systems that mess with people’s heads through subliminal tricks or outright deception. Any system that does so faces a full ban.

    Sex‑Related Deepfakes: Legal Red‑Flags in the EU

    On the topic of adult content, the new EU directive on violence against women criminalises non‑consensual deepfake creation and manipulation. The key take‑away:

    • No clear penalty structure is outlined – member states decide what’s fair.
    • Implementation deadline? June 2027.

    Bottom Line: Take Care, or Face Mounting Fines

    So, whether you’re a startup trying to push the envelope on AI videos or a big platform looking to stay compliant, the message is crystal clear: label it, disclose the data, and stay out of the “unacceptable” zone. Otherwise, you’re looking at a fine big enough to blow a hole in your balance sheet.

    France’s digital spaces law

    France Tightens the Net on AI‑Made Deepfakes

    In 2024, the French legislature slapped new penalties on anyone who redistributes AI‑generated visual or audio content without the subject’s permission. The goal? Keep folks from getting caught up in stranger‑in‑your‑feed stories (literally).

    What’s the rulebook now?

    • Share a deepfake? Get permission first.
    • If you do share, your post must shout out that it’s AI‑made, so no “who made this?” mystery.
    • Distributors face up to 1 year behind bars and a €15,000 fine.
    • Share it via an online platform? The stakes rise to 2 years and a €45,000 fine.
    • Any pornographic deepfake is off the table, even if it’s labeled “fake.”

    Harder to Do Than You Think

    Got hold of a deepfake and want to share it? Think again. If you push it into the public eye without a clear “AI‑generated” flag, you face a prison sentence of up to 3 years and a €75,000 fine.

    Arcom’s New Badge of Power

    France’s audiovisual watchdog, Arcom, now has the authority to compel platforms to take down illegal content and to honor stronger reporting protocols. In short, any AI‑made content on a big social‑media stage needs one more thing: a big, clear signal that says “this was made by AI, not by a person.”

    Why It Matters
    • Prevents the spread of misleading content.
    • Protects privacy and the dignity of every individual.
    • Encourages platforms to maintain robust moderation.

    Bottom line: In France, if you think you can casually share AI‑made content without consent, you’ll likely end up in a cell and paying a hefty fine. The takeaway? Treat deepfakes like you treat your grandma’s privacy—respectfully, and with a firm “Yes, I have consent.”

    Two-year sentence in the UK for deepfake porn

    Britain Tightens Rules on Deepfake Porn

    From Unthinkable Fantasies to Real‑World Consequences

    The United Kingdom is sharpening its legal defenses against a new breed of digital wickedness—deepfake pornography. Since the passage of the Data (Use and Access) Bill, lawmakers have stepped in to make it crystal‑clear that anyone who manipulates someone’s image for sexual gratification or to cause distress is looking at a serious legal backlash.

    • Unlimited fines can be slapped on anyone who creates this heinous content.
    • Under the Sexual Offences Act, an offender could face up to two years in prison for producing sexual deepfakes.
    • The Online Safety Act bans the sharing—or even threatening to share—non‑consensual sexual images on social media, and gives platforms the duty to proactively remove this material before it shows up.

    If a platform flouts the law, Ofcom can impose fines of up to 10 % of its global revenue. That’s a big hit on the big players, but the real hammer is held over the creation side.

    However, a reality check comes from Julia Hörnle, a professor at Queen Mary University’s School of Law. She points out that the Online Safety Act does not outright ban the creation of deepfake images, meaning victims can still suffer harm even if the content never gets shared publicly. She calls for a new approach that criminalises the entire ecosystem: development, distribution, and promotion of AI tools that make deepfakes possible.

    What Does It All Mean for Ordinary Citizens?

    In plain English, if you’re caught spreading deepfake porn on platforms, you could walk away with a hefty fine or even a prison term. And if tech companies don’t act swiftly to pull those images off their servers, they’re also liable. The real trick is that if you’re the one creating them behind closed doors, you’re still in hot water because authorities are now stepping up to curb the tools and methods that make the whole business model feasible.

  • Trump Criticizes Court Decision To Block Major Arizona Copper Mine Land Transfer

    Authored by Austin Alonzo via The Epoch Times (emphasis ours),

    A federal appeals court has temporarily blocked a land transfer for a major Arizona copper mine, prompting a rebuke from President Donald Trump.

    Campers utilize Oak Flat Campground in the Tonto National Forest, in Miami, Ariz., on June 9, 2023. Matt York/AP Photo

    On Aug. 18, the U.S. Court of Appeals for the Ninth Circuit issued a temporary administrative injunction to halt a congressionally mandated land exchange that would have given control of a large tract of land in Tonto National Forest in Arizona to international mining giants Rio Tinto and BHP.

    The court’s order stated that it was taking no position on the merits of the case but was acting to “preserve the status quo” as it expedites a review of the legal challenge brought by the San Carlos Apache Tribe and other plaintiffs.

    In a post on Truth Social, Trump criticized the delay, saying the project was needed to create 3,800 jobs and secure a vital resource.

    “Our Country, quite simply, needs Copper—AND NOW!” he said in an Aug. 19 post.

    “It is so sad that Radical Left Activists can do this, and affect the lives of so many people. Those that fought it are Anti-American, and representing other Copper competitive Countries.”

    The ruling came shortly after Trump met with the CEOs of the two companies at the White House, a meeting that highlighted his administration’s support for the mine.

    The United States Forest Service is listed as a defendant alongside Resolution Copper Mining LLC.

    The appeals court is scheduled to hear arguments for the case in September.

    On July 30, Trump signed an executive order creating a 50 percent tariff on imports of semi-finished copper products and intensive copper derivative products. The same order raised the possibility of further tariffs on imported copper in the future.

    In a statement, Resolution Copper, a joint venture between Rio Tinto and BHP, called the injunction a temporary pause so the court can consider “eleventh hour motions” by the San Carlos Apache Tribe and other plaintiffs.

    “We are confident the court will ultimately affirm the district court’s well-reasoned orders explaining in detail why the congressionally directed land exchange satisfies all applicable legal requirements,” the statement said.

    The statement said the proposed mine has the potential to become one of the largest copper mines in the United States, “contributing $1 billion annually to Arizona’s economy and creating thousands of local jobs.”

    In a statement he posted on LinkedIn on Aug. 19, BHP CEO Mike Henry thanked the Trump administration for its “strong leadership to reinvigorate mining and processing supply chains in and for America.”

    The tribal leader at the center of the court case, San Carlos Apache Tribal Chairman Terry Rambler, responded directly to Trump’s comments in a Facebook post, saying that the tribe is “protecting America’s interests.” Rambler stated that the president’s comments “mirror misinformation” from foreign mining interests.

    He said the project was a “rip-off” that would allow companies to extract billions in copper while paying “almost no royalties” to the federal government.

    Rambler’s post also noted that Rio Tinto’s largest shareholder is a company owned by the Chinese government, and he alleged the copper would be shipped to China. The chairman reiterated the tribe’s primary concerns that the mine “will destroy a sacred area, decimates our environment, [and] threatens our water rights.”

    On its website, Resolution Copper listed detailed responses to Rambler’s allegations.

    “While both companies have operations and investors on nearly every continent around the world, Rio Tinto and BHP are committed to being transparent, ethical, and responsible corporations that provide the materials that shape modern society,” a statement posted on an undated “Myths and Facts” page states.

    Rio Tinto is a British and Australian mining company traded publicly on multiple global exchanges. BHP is an Australian company publicly traded on that country’s Australian Securities Exchange.

    The land transfer for the mining project was originally approved by Congress and signed off on by then-President Barack Obama in 2014. Various tribal interests and environmental groups have fought the transfer for years.