AI yet to significantly boost cybercrime activities, study finds

Study highlights limited impact of AI on cybercriminal capabilities so far

A recent study analysing around 100 million posts from underground and dark web forums suggests that cybercriminals are struggling to use artificial intelligence effectively in their operations.

The research, conducted by teams from the universities of Edinburgh, Strathclyde and Cambridge, examined discussions from the CrimeBB database, which contains over 100 million posts from cybercrime communities. Using a mix of machine learning tools and manual analysis, researchers tracked how cybercriminals began experimenting with AI technologies from November 2022, when ChatGPT was released.

The findings show that most cybercriminals lack the skills and resources needed to use AI effectively. Instead of lowering the barrier to entry, AI tools such as coding assistants are mainly helping individuals who already have strong technical expertise.

The study found that AI is currently most effective for cybercriminals in two areas: running social media bots that carry out harassment and fraud, and masking activity patterns that cybersecurity systems usually detect.

Researchers also highlighted that safety guardrails in major AI chatbots are playing a key role in limiting harmful use. Dr Ben Collier from the University of Edinburgh said, “Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work.”

He added, “Our message to industry is: don’t panic yet. The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill.”

The study also noted that many individuals in cybercrime communities are worried about losing their IT jobs due to AI, which could push some towards illegal activities.

Researchers warned that the bigger risks lie in poorly secured agentic AI systems that can act independently, as well as insecure “vibecoded” software created using AI by legitimate organisations.

The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security in Berkeley, US, in June.


About us:

The Mainstream is a premier platform delivering the latest updates and informed perspectives across the technology, business and cyber landscape. Built on research-driven thought leadership and original intellectual property, The Mainstream also curates summits and conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK and Australia, The Mainstream aims to bring the latest happenings and insights to 8.2 billion people and to place technology at the centre of conversation for leaders navigating the future.