Monday, January 19, 2026


AI health tools debut amid safety concerns after chatbot-linked injury in India

As artificial intelligence companies move rapidly into healthcare, new tools from leading AI developers are promising to reshape how doctors and patients access medical information. At the same time, a recent case in India has reignited concerns about the risks of relying on chatbots for medical advice.

OpenAI introduced ChatGPT Health on January 7, creating a dedicated space within its chatbot for health-related conversations. The feature offers enhanced privacy protections and allows users to connect medical records and wellness applications. A day later, the company announced ChatGPT for Healthcare, an enterprise-focused offering that is already being used by institutions such as Boston Children’s Hospital, Memorial Sloan Kettering Cancer Center, and Cedars-Sinai Medical Center.

Soon after, Anthropic unveiled Claude for Healthcare at the J.P. Morgan Healthcare Conference. The platform includes HIPAA-compliant tools and can connect to clinical references such as ICD-10 diagnostic codes and the Centers for Medicare and Medicaid Services coverage database.

However, the rapid rollout has drawn scrutiny following a medical incident in India. Dr. Uma Kumar, head of the Rheumatology Department at All India Institute of Medical Sciences in New Delhi, issued a warning after treating a patient who relied on ChatGPT to manage persistent back pain.

According to Dr. Kumar, the chatbot suggested commonly used painkillers. The patient bought non-steroidal anti-inflammatory drugs (NSAIDs) and took them without consulting a doctor, leading to severe internal bleeding.

“All ailments are diagnosed by exclusion, and we advise medicines according to the investigation,” Dr. Kumar told reporters. “Do not use AI for self-diagnosis or self-treatment.”

The incident has highlighted the growing tension between AI’s ability to broaden access to health information and the dangers of treating chatbots as substitutes for medical professionals. Experts caution that AI systems can produce “hallucinations,” delivering confident but incorrect responses. In healthcare settings, such errors can be especially risky because chatbots cannot fully assess medical history, ongoing conditions, or physical symptoms.

While developers stress that these tools are designed to support, not replace, clinicians, the case has renewed calls for clearer safeguards and stronger public awareness as AI becomes more deeply embedded in healthcare decision-making.

