Positioning safe and reliable artificial intelligence as a basic right for young users, OpenAI has unveiled a dedicated safety framework for Indian teenagers, calling it essential for the first generation growing up in the “Intelligence Age.”
The company said that when it comes to minors, safety will take priority over privacy and freedom. As a result, ChatGPT’s interactions with a 15-year-old will be fundamentally different from those with an adult user.
The announcement comes days ahead of the India AI Impact Summit 2026, scheduled in the national capital from February 16–20. The event is expected to see participation from several global technology leaders, including OpenAI CEO Sam Altman.
Titled the “Teen Safety Blueprint for India,” the framework underlines the need for a country-specific approach due to India’s unique digital environment. OpenAI cited the RATI Foundation’s Ideal Internet Report 2024–25, which found that 62% of Indian teenagers use shared devices. This makes many traditional safety tools, built around personal devices, less effective.
To address this, OpenAI is shifting towards built-in, age-appropriate safeguards that align with the collective role of families, schools, and communities in shaping a teen’s digital experience.
As part of the rollout, OpenAI is introducing parental controls that allow parents to link their accounts with their teen’s ChatGPT profile through an email invitation. Once linked, parents can manage privacy and data settings, turn off memory and chat history, and set “blackout hours” to limit screen time. A key safety feature will also notify parents if a teen’s activity signals possible self-harm intent.
The company plans to use privacy-protective, risk-based age estimation tools to distinguish adult users from those under 18.
“These tools should minimise the collection of sensitive personal data, while still effectively distinguishing users under the age of 18 (U18). Where possible, these methods might also rely on operating systems or app stores to determine a user’s age,” OpenAI said.
“Age estimation will help AI companies ensure that they are applying the right protections to the right users… When there is not enough information to predict a user’s age, we will default to protective safeguards,” it added.
Under the updated safety policies, graphic or immersive violent content will be restricted for users under 18, including content linked to self-harm or dangerous stunts.
OpenAI also said it will work closely with Indian teachers, researchers, and policymakers to make AI literacy a core future skill. The company is proposing advisory councils with external experts in mental health, well-being, and child development.
“Going forward, we aim to ensure that all teens using AI receive age-appropriate protections by default,” OpenAI noted.