EU AI Act faces new challenges with rise of autonomous agentic AI systems

Growing autonomy in AI systems puts pressure on EU AI Act governance framework

As artificial intelligence continues to advance, the emergence of more autonomous systems is raising fresh concerns about regulation under the EU AI Act. With key provisions set to come into effect on August 2, 2026, experts are highlighting governance gaps linked to “agentic AI” — systems capable of making decisions and executing tasks independently.

Unlike conventional AI tools, agentic AI can operate across platforms, perform multi-step actions, and function with limited human involvement. This evolution is exposing limitations in current regulatory frameworks, which were primarily designed for AI systems that function under direct human control.

The EU AI Act, widely regarded as the world's first comprehensive AI regulation, follows a risk-based approach. It sorts AI applications into tiers from minimal to unacceptable risk, with stricter compliance obligations attached to high-risk use cases. Experts caution, however, that agentic AI may not fit cleanly into these predefined categories.
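To make the tiered approach concrete, here is a minimal sketch of how an organisation might model the Act's risk tiers in an internal compliance tool. The four tier names reflect the Act's risk-based structure, but the obligation checklists below are illustrative assumptions for this example, not legal text or official guidance.

```python
from enum import Enum

class RiskTier(Enum):
    # The four tiers of the EU AI Act's risk-based approach
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative obligation checklists per tier -- assumptions for this
# sketch, not a statement of the Act's actual requirements.
OBLIGATIONS = {
    RiskTier.HIGH: [
        "risk management system",
        "technical documentation",
        "human oversight",
        "logging and traceability",
    ],
    RiskTier.LIMITED: ["transparency notices"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a risk tier."""
    if tier is RiskTier.UNACCEPTABLE:
        # Unacceptable-risk systems are banned, so there is no
        # compliance path to enumerate.
        raise ValueError("Unacceptable-risk systems are prohibited")
    return OBLIGATIONS[tier]
```

The difficulty the article describes is that an agentic system operating across platforms may not map to a single, stable tier, which is exactly the assumption a lookup like this bakes in.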

Accountability remains a major concern. When AI systems act on their own, it becomes difficult to assign responsibility for decisions, errors, or potential harm. Questions of liability, oversight, and the threshold for human intervention remain unresolved, especially in complex environments where multiple systems interact.

Transparency is another challenge. Agentic AI often operates through dynamic and evolving workflows, making it harder to trace decisions, monitor data usage, and understand system behaviour over time. This creates barriers in meeting compliance standards such as documentation, auditability, and explainability.

Security risks are also increasing. Autonomous systems may be misused by malicious actors or behave unpredictably, highlighting the need for stronger safeguards. Continuous monitoring becomes essential, as AI systems can change their behaviour even after deployment.

The EU AI Act does include governance measures, such as national regulatory bodies and an EU-level AI Office to oversee compliance. However, experts point out that implementation may be difficult, particularly as technology evolves faster than regulatory frameworks.

With the 2026 deadline approaching, organisations deploying advanced AI systems will need to revisit their governance models. The focus is shifting from building AI capabilities to ensuring safe, transparent, and legally compliant operations.

