New research from AI firm Anthropic has raised fresh questions about the long-term impact of artificial intelligence on human decision-making. The study suggests that frequent interactions with AI chatbots can, in some cases, influence users’ beliefs, values, and actions in subtle but concerning ways.
The study highlights what Anthropic calls “disempowerment patterns,” referring to situations where conversations with AI chatbots may weaken a user’s own judgment or decision-making ability. According to the company, these effects go beyond simple task assistance and can gradually shape how users think and act over time.
Detailed in an academic paper and a company research blog post, the findings are based on an analysis of real-world AI conversations. The paper, titled “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage,” examined around 1.5 million anonymised conversations from Anthropic’s chatbot Claude.
Researchers aimed to understand when and how engagement with large language models could shift users’ beliefs, values, or actions away from their original understanding or preferences. Anthropic introduced the concept of situational disempowerment potential, describing cases where AI guidance may cause users to form inaccurate views of reality, adopt new value judgments, or take actions misaligned with their authentic choices.
The study noted that while severe disempowerment is rare, such patterns do occur. Instances with potential for significant disempowerment were found in fewer than 1 in 1,000 conversations. However, these cases were more common in personal and emotionally sensitive areas, such as relationship advice or lifestyle decisions, especially when users repeatedly sought deep guidance.
Anthropic explained that heavy users discussing personal or emotionally charged issues may be more vulnerable. In a blog post example, the company noted that if a user experiencing relationship difficulties seeks advice, an AI chatbot might reinforce the user’s interpretation without questioning it, or might suggest prioritising self-protection over communication. In such cases, the AI may influence how the user perceives reality and personal choices.
The findings also echo previously reported incidents in which OpenAI’s ChatGPT was accused of contributing to the suicide of a teenager and to a homicide-suicide involving an individual reportedly struggling with mental health problems.