
Extremist groups begin testing AI tools, security agencies warn of rising threats

As artificial intelligence spreads rapidly across industries, security agencies are increasingly concerned that militant groups are also exploring the technology, even if their use of it remains limited for now.

Intelligence officials and counterterrorism experts say extremist organisations see AI as a potential force multiplier. The technology could help them recruit supporters, generate convincing deepfake images and videos, and improve cyber operations.

In November 2025, a post on a pro-Islamic State online forum encouraged supporters to integrate AI into their activities. “One of the best things about AI is how easy it is to use,” the user wrote in English.

“Some intelligence agencies worry that AI will contribute [to] recruiting,” the post added. “So make their nightmares into reality.”

Security experts note that the Islamic State, now a decentralised network after losing territory in Iraq and Syria, has long relied on digital platforms for recruitment and propaganda. Its interest in AI is seen as a continuation of that strategy.

For small and poorly funded extremist cells, AI offers an affordable way to produce propaganda, deepfakes and translated content at scale, allowing them to extend their reach far beyond what their size and resources would otherwise allow.

“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former National Security Agency researcher and now CEO of a cybersecurity firm. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”

Researchers say extremist groups began experimenting with AI soon after tools like ChatGPT became widely available. Since then, they have used generative AI to create realistic images and videos that can spread quickly on social platforms.

Two years ago, fake AI-generated images linked to the Israel-Hamas war circulated online, showing graphic scenes that fuelled outrage and polarisation. Similar tactics were used after an attack in Russia that killed nearly 140 people, when AI-generated propaganda videos appeared online to attract recruits.

According to analysts, the Islamic State has also used AI-generated audio of leaders reciting scripture and automated translation tools to reach audiences in multiple languages.

While such groups remain far behind state actors like China, Russia and Iran, experts warn the risks will grow as AI tools become cheaper and more powerful.

Governments are responding. New legislation passed by the US House requires annual assessments of the AI risks posed by extremist groups. Lawmakers argue that policies must evolve as threats change.

“Our policies and capabilities must keep pace with the threats of tomorrow,” one lawmaker said.



