
Delays in the EU AI Act’s High-Risk Rules Are Creating Uncertainty
The EU AI Act’s high-risk rules are facing delays, leaving companies, regulators, and entire industries uncertain about compliance timelines.
The EU AI Act, the world’s most ambitious attempt at regulating artificial intelligence, is facing delays in rolling out its high-risk rules. These are the rules that cover sensitive AI use cases like healthcare, finance, hiring, law enforcement, and anything that could seriously impact people’s rights. And right now, companies, regulators, and even EU member states are frustrated because the timelines keep shifting.
Why the High-Risk Rules Matter
The high-risk classification is the core of the EU AI Act. Systems that fall under it must meet strict requirements: risk controls, transparency, human oversight, detailed logging, data governance checks, and third-party conformity assessments. These are the ‘heavy-duty’ rules, and they’re the ones most likely to affect real businesses and public-sector deployments.
So when timelines slip, entire industries freeze. Companies can’t plan compliance budgets. Public institutions can’t deploy new AI tools. And startups don’t know whether to invest in costly audits now or wait until the dust settles.
What’s Causing the Delay?
- Member states disagree on how strict enforcement should be for biometric systems and predictive policing.
- Regulators still need to set up the infrastructure for conformity assessments: the official bodies that certify AI systems.
- Some requirements rely on future ‘harmonised standards’ that haven’t been written yet.
- Businesses are lobbying for phased timelines, arguing current deadlines aren’t realistic.
- There’s confusion over whether certain AI agents or foundation models count as high-risk once they’re integrated into apps.
Industries Most Affected
The sectors that rely heavily on automated decision-making are feeling the pressure. Hiring platforms, fintech scoring tools, health diagnostics, education scoring systems, and law-enforcement tech vendors all fall under high-risk categories. Some companies have paused EU rollouts entirely until the compliance roadmap stabilises.
The Regulatory Bottleneck
The biggest challenge is capacity. The EU needs a network of notified bodies to audit and certify high-risk AI systems. But very few exist today, and accrediting and funding a new one takes months. Without these bodies, even companies that are ready to comply can’t complete the certification process.
What This Means for Companies Right Now
- Compliance timelines may shift deeper into 2026, depending on how quickly the EU finalises the missing harmonised standards.
- Startups might face competitive disadvantages if large companies adapt faster.
- Public-sector agencies may delay AI deployments (especially in policing and social services).
- Vendors may start creating ‘EU-only versions’ of their AI systems to meet future rules.
The Takeaway
The EU AI Act remains a landmark piece of regulation, but the high-risk rollout delays show how complex governing AI really is. Until the frameworks, standards, and auditing bodies are fully in place, the high-risk category will remain a limbo zone, and everyone from startups to governments is stuck waiting for clarity.
Published November 25, 2025 • Updated December 31, 2025