As artificial intelligence tools become more embedded in daily business operations, new security concerns are coming into focus. A recent cybersecurity report has drawn attention to the growing risks linked to AI agents, especially those with high-level access and weak safeguards.
Microsoft, in its latest Cyber Pulse Report, has warned about the rise of what it calls “AI double agents.” These are AI agents that hold excessive privileges without adequate security controls, making them vulnerable to prompt engineering attacks. If exploited, such agents can unintentionally work against the organisations that deploy them, exposing sensitive data and systems.
The report is based on Microsoft’s first-party telemetry and internal research. It highlights the rapid global adoption of human-agent teams across enterprises. “Recent Microsoft data indicates that these human-agent teams are growing and becoming widely adopted globally,” Microsoft said in a blog post.
According to the findings, more than 80 percent of Fortune 500 companies are now deploying AI agents created using low-code or no-code tools. Microsoft cautioned that this approach is risky, as agents built through vibe coding may lack essential enterprise-grade security protocols.
The report stresses the need to secure AI agents through stronger observability, governance, and Zero Trust-based security frameworks. Zero Trust follows the principle of “never trust, always verify,” treating every user and device as untrusted by default, whether inside or outside the network.
One of the key concerns outlined is the excessive access granted to many AI agents. Microsoft warned that “Bad actors might exploit agents’ access and privileges, turning them into unintended ‘double agents.’ Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability.”
Researchers cited in the report documented how AI agents can be misled by deceptive interface elements, including harmful instructions embedded in otherwise normal content. Another identified risk involves redirecting agents through manipulated task framing, which can alter their behaviour without detection.
The report also referenced a multinational survey of over 1,700 data security professionals conducted by Hypothesis Groups. It found that 29 percent of employees are using AI agents for work-related tasks that are not approved by IT teams.
“This is the heart of a cyber risk dilemma. AI agents are bringing new opportunities to the workplace and are becoming woven into internal operations. But an agent’s risky behaviour can amplify threats from within and create new failure modes for organisations unprepared to manage them,” the report said.
Also read: Viksit Workforce for a Viksit Bharat
Do Follow: The Mainstream formerly known as CIO News LinkedIn Account | The Mainstream formerly known as CIO News Facebook | The Mainstream formerly known as CIO News Youtube | The Mainstream formerly known as CIO News Twitter
About us:
The Mainstream is a premier platform delivering the latest updates and informed perspectives across the technology business and cyber landscape. Built on research-driven, thought leadership and original intellectual property, The Mainstream also curates summits & conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK and Australia, The Mainstream carries a vision to bring the latest happenings and insights to 8.2 billion people and to place technology at the centre of conversation for leaders navigating the future.


