Despite growing concerns around AI misuse, new research shows that cybercriminals are still struggling to effectively use artificial intelligence in their operations.
A research team from the University of Edinburgh, University of Strathclyde, and University of Cambridge analysed over 100 million posts from the CrimeBB database, which collects data from dark web and underground cybercrime forums. Using machine learning and manual analysis, they examined discussions from November 2022 onwards, around the time ChatGPT was launched.
The study found that while cybercriminals are experimenting with AI, most lack the skills and resources needed to use these tools effectively. AI has not significantly lowered the barrier to entry for cybercrime. Instead, AI coding tools are proving more useful for individuals who already have advanced technical skills.
“Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work,” said Dr Ben Collier.
The research found AI being used with more success in a few specific areas: running social media bots for misogynistic harassment, conducting fraud, and masking patterns that cybersecurity systems could otherwise detect.
Importantly, the study notes that safety guardrails built into major AI chatbots are helping reduce misuse and potential harm.
However, the researchers warn that the bigger risk lies elsewhere. “The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill,” Dr Collier added.
“Our message to industry is: don’t panic yet.”
The report also found that many individuals in cybercrime communities fear losing their IT-related jobs due to AI’s impact on the software industry, a concern that could push some towards increased cybercriminal activity.
The researchers further warned about risks linked to poorly secured agentic AI systems, which can act autonomously and make decisions, and raised concerns about insecure “vibecoded” products, where software is developed with AI tools but without proper safeguards.
The findings have been peer-reviewed and are set to be presented at the Workshop on the Economics of Information Security in Berkeley, US, in June.