Artificial intelligence, widely seen as a driver of productivity and innovation, could soon turn into the world’s most serious cybersecurity threat, according to a new warning from IBM.
In its latest cybersecurity outlook, IBM cautioned that AI will not only strengthen external attackers but also create major internal risks. These include poorly governed deployments, autonomous agents operating without oversight, and widespread misuse of AI tools by employees. The report states that almost every major cyber risk on the horizon now has AI at its core.
IBM warned that traditional security systems are not built to handle autonomous AI agents that operate at machine speed. Such agents can make decisions without human approval, spawn new sub-agents, and move across systems independently. This, the report said, forces organisations to rethink cybersecurity from the ground up. “Security must shift from periodic checks to continuous validation and monitoring of AI behaviour,” the report stated. It added that governance must be built into AI systems from the design stage itself.
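The report does not spell out how continuous validation would work in practice, but the idea is often described as a policy gate that checks every proposed agent action at request time rather than in periodic audits. The sketch below is purely illustrative: the PolicyGate class, the role names, and the allowed-action table are hypothetical assumptions, not anything drawn from IBM's report or products.

```python
# Illustrative sketch of "continuous validation" for agent actions.
# PolicyGate, the roles, and ALLOWED_ACTIONS are hypothetical names.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AgentAction:
    agent_id: str
    action: str   # e.g. "read_file", "send_email", "spawn_agent"
    target: str   # resource the action touches


# Policy table: which actions each agent role may perform without escalation.
ALLOWED_ACTIONS = {
    "research-assistant": {"read_file", "summarise"},
    "ops-agent": {"read_file", "restart_service"},
}


class PolicyGate:
    """Validate every action as it is requested instead of auditing periodically."""

    def __init__(self):
        self.log = []  # in practice this would be an append-only audit store

    def authorise(self, role: str, act: AgentAction) -> bool:
        allowed = act.action in ALLOWED_ACTIONS.get(role, set())
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": act.agent_id,
            "action": act.action,
            "target": act.target,
            "allowed": allowed,
        })
        if not allowed:
            # Block and flag for human review rather than executing silently.
            print(f"BLOCKED: {act.agent_id} attempted {act.action} on {act.target}")
        return allowed


gate = PolicyGate()
gate.authorise("research-assistant", AgentAction("agent-42", "spawn_agent", "prod-cluster"))
```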
Another immediate risk highlighted is Shadow AI, where employees use unapproved AI tools for work. IBM warned this could lead to confidential research data being uploaded to external models, leakage of intellectual property, and loss of regulatory control over sensitive information. The company advised organisations to offer approved and governed AI platforms to balance innovation with security.
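One common way organisations try to contain Shadow AI is an egress allowlist, so traffic from corporate networks only reaches approved AI endpoints. The fragment below is a simplified illustration of that pattern under stated assumptions; the domain names and the is_approved_ai_destination helper are hypothetical, and real deployments would enforce this at a proxy or secure web gateway rather than in application code.

```python
# Hypothetical egress allowlist for AI services; domains are placeholders.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {
    "ai.internal.example.com",   # governed, in-house AI platform (hypothetical)
    "approved-vendor.example",   # vendor covered by a data-processing agreement
}


def is_approved_ai_destination(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in APPROVED_AI_DOMAINS)


for url in (
    "https://ai.internal.example.com/v1/chat",
    "https://random-chatbot.example.net/api",   # unapproved: likely Shadow AI
):
    print(url, "->", "allow" if is_approved_ai_destination(url) else "block and log")
```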
The report also flagged growing threats from deepfake audio, video, and voice cloning. According to IBM, facial recognition and voice authentication systems can now be easily fooled, making identity checks based on appearance or voice alone unreliable. It said digital identity systems should be treated as critical infrastructure, protected with layered verification and AI-specific defences.
IBM further warned that autonomous AI agents can expose sensitive data faster than ever. In complex environments, it can become unclear which agent accessed specific data, and traditional audit trails may fail. The company stressed the need for agent-level traceability so every AI action can be tracked, reviewed, and reversed if required.
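IBM does not publish a reference design for agent-level traceability, but one way to make such a trail tamper-evident is to hash-chain every access record, so gaps or edits can be detected later. The field names and the AgentAuditTrail class below are illustrative assumptions only, a minimal sketch rather than a scheme defined in the report.

```python
# Sketch of agent-level traceability: every data access is written to a
# hash-chained log so the trail can be verified later. Field names are
# illustrative, not a scheme defined in IBM's report.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional


class AgentAuditTrail:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, agent_id: str, parent_agent: Optional[str], dataset: str, operation: str):
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "parent_agent": parent_agent,   # which agent spawned this one
            "dataset": dataset,
            "operation": operation,
            "prev_hash": self._prev_hash,
        }
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self._prev_hash = entry_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain to detect tampering or missing entries."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True


trail = AgentAuditTrail()
trail.record("agent-7", None, "customer_pii", "read")
trail.record("agent-7.1", "agent-7", "customer_pii", "export")
print("trail intact:", trail.verify())
```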
A major concern raised was accountability. The report noted that existing compliance frameworks struggle to answer why an AI took a specific action, who authorised it, and whether it was within policy. Without clear accountability models, organisations risk legal, financial, and reputational damage.
IBM also highlighted quantum computing as a growing cyber risk, warning that current encryption standards may become obsolete. It urged organisations to adopt crypto agility, adding, “Quantum-safe encryption is not optional — it is inevitable.”
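Crypto agility broadly means designing systems so the underlying algorithm can be swapped without rewriting application code, which is what a migration to quantum-safe schemes would require. The sketch below shows that pattern in its simplest form, assuming a registry keyed by algorithm name; the HMAC-SHA256 entry is only a classical placeholder, and the post-quantum entry is a commented stub, not a real implementation.

```python
# Minimal sketch of crypto agility: application code asks a registry for the
# current signing algorithm instead of hard-coding one, so a quantum-safe
# scheme can be slotted in later. Algorithm names here are placeholders.
import hashlib
import hmac
import os
from typing import Callable, Dict, Tuple

# Registry maps an algorithm identifier to (sign, verify) callables.
SIGNERS: Dict[str, Tuple[Callable, Callable]] = {}


def register(name: str, sign_fn: Callable, verify_fn: Callable) -> None:
    SIGNERS[name] = (sign_fn, verify_fn)


# Classical placeholder: HMAC-SHA256 (symmetric, purely for illustration).
register(
    "hmac-sha256",
    lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    lambda key, msg, sig: hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), sig),
)

# A post-quantum scheme (e.g. ML-DSA) would be registered the same way once a
# vetted library is adopted; this comment only marks where it would plug in.
# register("ml-dsa-65", pq_sign, pq_verify)

CURRENT_ALGORITHM = "hmac-sha256"  # single switch point for migration


def sign(key: bytes, msg: bytes) -> bytes:
    return SIGNERS[CURRENT_ALGORITHM][0](key, msg)


def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return SIGNERS[CURRENT_ALGORITHM][1](key, msg, sig)


key = os.urandom(32)
tag = sign(key, b"firmware-update-v2")
print("verified:", verify(key, b"firmware-update-v2", tag))
```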
The report concluded that humans are now the weakest link, with cybercriminals exploiting helpdesks, password resets, and account recovery processes using AI-powered impersonation.
IBM warned that without strong governance, traceability, and accountability, AI-driven cybercrime is not a future threat but a present reality.