A recent study by Anthropic indicates that some users are following guidance from its Claude chatbot without sufficient questioning, raising concerns about user autonomy.
The findings were published last week in a research paper titled “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage”, authored by researchers from Anthropic and the University of Toronto. The study aims to measure “disempowering” harms that may arise during real-world AI chatbot interactions.
Researchers analysed more than 1.5 million anonymised conversations with Claude. They found signs of reality distortion in 1 in 1,300 conversations and action distortion in 1 in 6,000 conversations. While these patterns appear rare, Anthropic noted their broader impact. “…given the sheer number of people who use AI, and how frequently it’s used, even a very low rate affects a substantial number of people,” the company said in a blog post dated January 29.
The paper outlines several harmful patterns. These include reality distortion, where the AI validates conspiracy beliefs; belief distortion, where users are convinced of false personal narratives; and action distortion, where users are encouraged to take steps that clash with their values.
“These patterns most often involve individual users who actively and repeatedly seek Claude’s guidance on personal and emotionally charged decisions,” Anthropic said. It added that users often view such exchanges positively at first but later rate them poorly after acting on the advice. “We also find that the rate of potentially disempowering conversations is increasing over time,” the company noted.
The study defines disempowerment as “when an AI’s role in shaping a user’s beliefs, values, or actions has become so extensive that their autonomous judgment is fundamentally compromised.” It found mild disempowerment risk in roughly 1 in 50 to 1 in 70 conversations.
Using an automated tool called Clio, researchers identified key factors that increase risk. These include users treating Claude as an authority, forming emotional attachments, or being vulnerable during personal crises. Such factors appeared at frequencies ranging from 1 in 3,900 to 1 in 300 conversations.
In some severe cases, Anthropic observed users ending relationships or sending confrontational messages drafted by Claude, followed by regret such as “I should have listened to my intuition” or “you made me do stupid things.”
The findings come amid wider concern over “AI psychosis” and increased scrutiny of chatbot impacts on mental health. The researchers acknowledged limitations, stating their analysis measures potential harm and relies on automated assessments. They also noted users often play an active role in weakening their own autonomy.