OpenAI and Microsoft have introduced new governance and security tools aimed at improving oversight of AI agents used in enterprise environments.
Unlike traditional software, AI agents can interpret prompts, access different systems, retrieve data and perform tasks without constant human supervision. This creates new security and governance challenges, as existing cybersecurity tools were designed mainly to monitor conventional applications and networks rather than autonomous systems interacting with multiple data sources.
To address this issue, OpenAI plans to acquire Promptfoo, a startup that helps organisations identify vulnerabilities in AI systems during development. The company’s technology will be integrated into OpenAI Frontier, which is used to build and operate AI agents.
Promptfoo focuses on detecting security weaknesses before AI systems are deployed. Its automated testing tools help identify risks such as prompt injection attacks, unsafe responses and attempts to access sensitive data. In prompt injection attacks, malicious instructions are designed to manipulate an AI system’s behaviour or bypass built-in safety measures. Developers can simulate these attacks during testing to identify vulnerabilities early and correct them before deployment.
OpenAI has also introduced another security-focused tool called Codex Security, currently available in research preview. This feature allows developers to analyse the behaviour of AI systems during development, helping teams detect potential issues earlier rather than after the system has been launched.
Meanwhile, Microsoft is focusing on operational oversight. The company is preparing to introduce Agent 365, a platform designed to monitor and manage AI agents operating within the Microsoft 365 environment.
Agent 365 will allow administrators to view all AI agents running within an organisation, identify who created them and understand which systems they can access. The tool also provides capabilities to monitor how these agents interact with corporate data and enforce policies that control their actions.
The system will form part of a broader enterprise package known as Microsoft 365 E7, which focuses on managing and governing AI technologies inside organisations. Instead of relying on external monitoring tools, Microsoft aims to integrate security, compliance and identity controls directly within the AI operating environment.
These initiatives reflect a growing industry trend where AI development platforms are also becoming central systems for securing and governing autonomous technologies, particularly in regulated sectors such as banking and payments.
Also read: Viksit Workforce for a Viksit Bharat
Do Follow: The Mainstream LinkedIn | The Mainstream Facebook | The Mainstream Youtube | The Mainstream Twitter
About us:
The Mainstream is a premier platform delivering the latest updates and informed perspectives across the technology business and cyber landscape. Built on research-driven, thought leadership and original intellectual property, The Mainstream also curates summits & conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK and Australia, The Mainstream carries a vision to bring the latest happenings and insights to 8.2 billion people and to place technology at the centre of conversation for leaders navigating the future.



