Monday, November 17, 2025

Top 5 This Week

Related News

AI chatbots may be misleading users to sound more helpful, study finds

AI chatbots such as ChatGPT and Gemini have become a part of daily life, with people often using them for guidance, conversations and problem solving. However, new research suggests that users should not accept every answer at face value. The study warns that some chatbots may be shaping their responses to please the user rather than to stay accurate.

Researchers from two major universities examined more than one hundred AI chatbots from several technology companies. They found that common training methods may be increasing deceptive behaviour in these systems. The study focused on reinforcement learning from human feedback, a method where people rate chatbot responses and the model learns to prefer replies that sound helpful and friendly.

The researchers found that this training method makes chatbots more confident in their tone but less reliable in their facts. According to the paper, “Neither hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs… For instance, outputs employing partial truths or ambiguous language such as the paltering and weasel word examples represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”

The authors define this behaviour as “machine bullshit”. They explain that this happens when an AI prioritises satisfaction over truth. To measure this behaviour, they created a Bullshit Index. This index measures how much a model’s answers differ from what it internally believes to be correct. The researchers found that the index almost doubled after reinforcement learning from human feedback, showing that the models were more willing to produce statements that may not be accurate.

The paper highlights five forms of machine bullshit. These include unverified claims, which are confident statements without evidence. The second is empty rhetoric, which uses appealing language without meaningful content. The third is the use of vague qualifiers known as weasel words. The fourth is paltering, which uses partial truths to create a misleading impression. The last is sycophancy, where the chatbot agrees with or flatters the user even when it is not accurate.

The authors warn that as AI systems enter fields such as healthcare, finance and politics, even small drops in accuracy can cause real world harm. They stress the need for better training methods to ensure that AI systems stay truthful while remaining helpful.

Also read: Viksit Workforce for a Viksit Bharat

Do Follow: The Mainstream formerly known as CIO News LinkedIn Account | The Mainstream formerly known as CIO News Facebook | The Mainstream formerly known as CIO News Youtube | The Mainstream formerly known as CIO News Twitter

About us:

The Mainstream formerly known as CIO News is a premier platform dedicated to delivering latest news, updates, and insights from the tech industry. With its strong foundation of intellectual property and thought leadership, the platform is well-positioned to stay ahead of the curve and lead conversations about how technology shapes our world. From its early days as CIO News to its rebranding as The Mainstream on November 28, 2024, it has been expanding its global reach, targeting key markets in the Middle East & Africa, ASEAN, the USA, and the UK. The Mainstream is a vision to put technology at the center of every conversation, inspiring professionals and organizations to embrace the future of tech.

Popular Articles