
  • The EU AI Act aims to create a level playing field for AI innovation: Here's what it is

    The European Union’s Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as “the world’s first comprehensive AI law.” After years in the making, it is progressively becoming a part of reality for the 450 million people living in the 27 countries that comprise the EU.

    The EU AI Act, however, is more than a European affair. It applies to companies both local and foreign, and it can affect both providers and deployers of AI systems; the European Commission cites examples of how it would apply to a developer of a CV screening tool and to a bank that buys that tool. Now all of these parties have a legal framework that sets the stage for their use of AI.

    Why does the EU AI Act exist?

    As usual with EU legislation, the EU AI Act exists to make sure there is a uniform legal framework applying to a certain topic across EU countries — the topic this time being AI. Now that the regulation is in place, it should “ensure the free movement, cross-border, of AI-based goods and services” without diverging local restrictions.

    With timely regulation, the EU seeks to create a level playing field across the region and foster trust, which could also create opportunities for emerging companies. However, the common framework that it has adopted is not exactly permissive: Despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn’t do for society more broadly.

    What is the purpose of the EU AI Act?

    According to European lawmakers, the framework’s main goal is to “promote the uptake of human centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.” 

    Yes, that’s quite a mouthful, but it’s worth parsing carefully. First, because a lot will depend on how you define “human centric” and “trustworthy” AI. And second, because it gives a good sense of the precarious balance to maintain between diverging goals: innovation vs. harm prevention, as well as uptake of AI vs. environmental protection. As usual with EU legislation, again, the devil will be in the details.

    How does the EU AI Act balance its different goals?

    To balance harm prevention against the potential benefits of AI, the EU AI Act adopted a risk-based approach: banning a handful of “unacceptable risk” use cases; flagging a set of “high-risk” uses calling for tight regulation; and applying lighter obligations to “limited risk” scenarios.
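The tiered approach above can be pictured as a simple lookup from risk level to regulatory consequence. The sketch below is illustrative only: the tier names are paraphrased from the Act, and the one-line obligations are heavily simplified summaries, not legal definitions.

```python
# Illustrative sketch of the EU AI Act's risk-based approach.
# Tier names paraphrased; obligations drastically simplified.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "tightly regulated (strict obligations before and after deployment)",
    "limited": "lighter obligations (mainly transparency duties)",
}

def obligations_for(risk_level: str) -> str:
    """Look up the simplified regulatory consequence for a risk tier."""
    return RISK_TIERS.get(risk_level, "unknown tier")

print(obligations_for("high"))
```

The point of the structure is that obligations scale with deemed risk rather than applying uniformly to every AI system.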


    Has the EU AI Act come into effect?

    Yes and no. The EU AI Act entered into force on August 1, 2024, but its provisions don’t all apply at once: they take effect through a series of staggered deadlines.

    Broadly speaking, compliance will be required sooner from new entrants than from companies that already offer AI products and services in the EU.

    The first deadline, on February 2, 2025, enforced bans on a small number of prohibited uses of AI, such as untargeted scraping of the internet or CCTV footage to create or expand facial recognition databases.

    Many more provisions will follow; unless the schedule changes, the EU AI Act is expected to apply in full to most companies operating in the EU by mid-2026.

    What changed on August 2, 2025?

    Since August 2, 2025, the EU AI Act applies to “general-purpose AI models with systemic risk.” 

    GPAI (general-purpose AI) models are AI models trained with a large amount of data, and that can be used for a wide range of tasks. That’s where the risk element comes in. According to the EU AI Act, GPAI models can come with systemic risks — “for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous [GPAI] models.”

    Ahead of the deadline, the EU published guidelines for providers of GPAI models, which include both European companies and non-European players such as Anthropic, Google, Meta, and OpenAI. But since these companies already have models on the market, they will also have until August 2, 2027, to comply, unlike new entrants.

    Does the EU AI Act have teeth?

    The EU AI Act comes with penalties that lawmakers wanted to be simultaneously “effective, proportionate and dissuasive” — even for large global players.

    Details will be laid down by EU countries, but the regulation sets out the overall spirit — that penalties will vary depending on the deemed risk level — as well as thresholds for each level. Infringement on prohibited AI applications leads to the highest penalty of “up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher).”

    The European Commission can also inflict fines of up to €15 million or 3% of annual turnover on providers of GPAI models. 
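The “whichever is higher” rule means the effective ceiling scales with company size. As a rough illustration only (the turnover figure below is hypothetical, and real fines depend on how EU countries implement the penalty regime), the cap for each tier can be computed like this:

```python
def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the fine ceiling: the higher of a fixed amount
    or a percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)

# Hypothetical company with €2 billion in worldwide annual turnover.

# Prohibited-use tier: up to €35M or 7% of turnover, whichever is higher.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 — the 7% cap applies

# GPAI-provider tier: up to €15M or 3% of turnover.
print(max_fine(2_000_000_000, 15_000_000, 0.03))  # 60000000.0
```

For a small company, the fixed amount dominates; for a large one, the percentage does — which is how the lawmakers’ goal of penalties that remain “dissuasive” for global players is meant to work in practice.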

    How fast do existing players intend to comply?

    The voluntary GPAI code of practice, including commitments such as not training models on pirated content, is a good indicator of how companies may engage with the framework law until forced to do so.

    In July 2025, Meta announced it wouldn’t sign the voluntary GPAI code of practice meant to help such providers comply with the EU AI Act. However, Google soon after confirmed it would sign, despite reservations.

    Signatories so far include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI, among others. But as we have seen with Google’s example, signing does not equal a full-on endorsement.

    Why have (some) tech companies been fighting these rules? 

    While stating in a blog post that Google would sign the voluntary GPAI code of practice, its president of global affairs, Kent Walker, still had reservations. “We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” he wrote.

    Meta was more radical, with its chief global affairs officer Joel Kaplan stating in a post on LinkedIn that “Europe is heading down the wrong path on AI.” Calling the EU’s implementation of the AI Act “overreach,” he stated that the code of practice “introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

    European companies have expressed concerns as well. Arthur Mensch, the CEO of French AI champion Mistral AI, was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to “stop the clock” for two years before key obligations of the EU AI Act came into force.

    Will the schedule change?

    In early July 2025, the European Union responded negatively to lobbying efforts calling for a pause, saying it would still stick to its timeline for implementing the EU AI Act. It went ahead with the August 2, 2025, deadline as planned, and we will update this story if anything changes.