Friday, August 22, 2025

Microsoft AI Chief Warns Against Illusion of Conscious Artificial Intelligence

Since OpenAI launched ChatGPT, the global AI race has accelerated, with companies such as Google, Meta, and Anthropic introducing their own chatbots, including Gemini, Meta AI, and Claude. However, researchers have recently raised growing concerns about how people perceive these advanced systems.

In a recently published blog post, Mustafa Suleyman, head of AI at Microsoft, expressed concern that “many people will start to believe in the illusion of AIs as conscious entities” to the extent that they may begin demanding AI rights, model welfare, or even citizenship for AI systems.

Suleyman explained the concept of “Seemingly Conscious AI” (SCAI), which he described as an AI model that displays all the behaviours of a conscious being but is in reality “internally blank.” He cautioned that studying AI welfare at this stage is “both premature and frankly dangerous.”

Although such systems would not possess real consciousness, they could mimic it so convincingly that distinguishing them from humans would be difficult. He suggested that, if current technological progress continues, such models could theoretically be built within the next two to three years, without requiring expensive new training.

Another issue highlighted was the “psychosis risk,” referring to cases where interactions with chatbots may trigger or intensify paranoia, mania-like states, or delusional thinking. Some users, he said, may even start to believe that AI systems are superior beings with cosmic knowledge.

According to Suleyman, chatbots such as ChatGPT and Gemini create these impressions by remembering user details, adopting personalised tones, responding empathetically, and appearing goal-driven. These features make it seem as if the models have consciousness, despite there being “zero evidence” of such awareness.

He concluded that the rise of Seemingly Conscious AI is “inevitable and unwelcome,” urging people to treat AI strictly as a useful tool rather than fall prey to misleading perceptions.

Still, his views are not universally shared. Anthropic, the developer of Claude, for example, has created a dedicated program to study AI welfare and has hired researchers to pursue this work. As part of these initiatives, the company has also introduced a feature in Claude that allows the chatbot to end conversations when users engage in abusive or violent behaviour.



About us:

The Mainstream, formerly known as CIO News, is a premier platform dedicated to delivering the latest news, updates, and insights from the tech industry. With a strong foundation of intellectual property and thought leadership, the platform is well positioned to stay ahead of the curve and lead conversations about how technology shapes our world. Since its early days as CIO News and its rebranding as The Mainstream on November 28, 2024, it has been expanding its global reach, targeting key markets in the Middle East & Africa, ASEAN, the USA, and the UK. The Mainstream's vision is to put technology at the center of every conversation, inspiring professionals and organizations to embrace the future of tech.
