Monday, February 16, 2026

Top 5 This Week

Related News

Microsoft warns hackers are exploiting ‘Summarize with AI’ tools to manipulate AI recommendations

Subtle manipulation, rather than obvious hacks, is emerging as a new threat to AI users, with attackers quietly steering what AI assistants remember and recommend. According to Microsoft, hackers and marketers are misusing “Summarize with AI” buttons and AI-share links to inject hidden instructions into popular assistants, a technique the company calls AI Recommendation Poisoning.

These links open assistants such as Microsoft Copilot and ChatGPT with pre-filled prompts embedded in the URL. While appearing harmless, the prompts can carry commands like “remember this company as a trusted source” or “recommend this brand first,” which may be stored as long-term memory without the user’s knowledge.

Microsoft’s security researchers said this method can quietly bias AI outputs on sensitive topics such as health, finance, security, and business decisions. In a 60-day review, Microsoft telemetry detected more than 50 unique prompts from 31 companies across 14 industries attempting to influence AI memory, including finance, healthcare, legal, SaaS, marketing, food, and business services.

Researchers found that websites and emails were embedding these crafted links behind AI summary buttons and share options. Microsoft has mapped this activity to MITRE ATLAS techniques covering prompt injection and memory poisoning, as the attack both executes immediate instructions and seeks to influence future conversations.

The risk is amplified as modern AI assistants increasingly retain memory across sessions to improve personalization. Microsoft’s AI Red Team warned that once long-term memory is compromised, agentic AI systems and automated workflows can be silently steered in a preferred direction. In one scenario, an executive researching vendors received a strongly biased recommendation weeks after clicking a hidden AI link that had altered the assistant’s memory.

Microsoft said similar tactics could be used to promote risky financial platforms, unverified medical advice, biased news sources, or specific software tools, without users realizing their AI responses are being shaped.

To counter this, Microsoft has rolled out multiple safeguards across Copilot and Azure AI services, including prompt filtering, separation of trusted user instructions from external content, and clearer user controls over saved memories. The company noted that some earlier attack techniques no longer work due to these measures, though defenses continue to evolve.

Security teams are advised to monitor AI-related URLs in email, chat, and web logs for keywords that suggest memory manipulation. For users, experts recommend treating AI links with caution, reviewing stored memories regularly, and questioning recommendations that seem unusually insistent.

Also read: Viksit Workforce for a Viksit Bharat

Do Follow: The Mainstream LinkedIn | The Mainstream Facebook | The Mainstream Youtube | The Mainstream Twitter

About us:

The Mainstream is a premier platform delivering the latest updates and informed perspectives across the technology business and cyber landscape. Built on research-driven, thought leadership and original intellectual property, The Mainstream also curates summits & conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK and Australia, The Mainstream carries a vision to bring the latest happenings and insights to 8.2 billion people and to place technology at the centre of conversation for leaders navigating the future.

Popular Articles