Thursday, February 12, 2026


India clarifies exemptions for good-faith AI use under new synthetic content rules

As India tightens its regulations on artificial intelligence-generated content, the government has clarified that good-faith and educational use of AI will not require synthetic content labelling.

The clarification came through a set of FAQs issued by the Ministry of Electronics and Information Technology (MeitY), a day after stricter rules were announced for social media platforms such as YouTube and X. The amended framework mandates removal of unlawful content within three hours and requires clear labelling of AI-generated or synthetic material.

Explaining what qualifies as synthetically generated information (SGI), the ministry said, “Not every AI-assisted creation or editing qualifies as SGI. Content is treated as SGI only when it is artificially or algorithmically created or altered in a way that it appears real or authentic or true and is likely to be indistinguishable from a real person or real-world event.”

Routine or good-faith actions such as editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription or compression will not be treated as SGI, as long as they do not distort or misrepresent the original meaning.

The government clarified that AI use in education and training materials, presentations, reducing file size, publishing notices, removing background noise, stabilising videos, correcting colour balance, and preparing documents or research outputs will remain exempt. Similarly, adding subtitles, translating speeches without altering content, generating summaries or tags for search optimisation, and improving accessibility for visually impaired users will not require labelling, provided the core content is not manipulated.

However, the ministry warned that AI-generated fake certificates, forged IDs, fake official letters or fabricated electronic records will not fall under these exclusions and may be treated as unlawful SGI.

The amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, will come into force on February 20, 2026. The changes follow rising concerns over AI-generated deepfakes, non-consensual intimate imagery and misleading impersonation videos spreading online.

Under the revised rules, platforms must ensure faster takedowns, mandatory labelling of AI-generated content, embedding of permanent metadata or identifiers, and quicker grievance redressal timelines. The framework places responsibility on both social media platforms and AI tools to prevent the spread of unlawful synthetic material.



About us:

The Mainstream is a platform delivering the latest updates and informed perspectives across the technology, business and cyber landscape. Built on research-driven thought leadership and original intellectual property, The Mainstream also curates summits and conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK and Australia, The Mainstream aims to bring the latest happenings and insights to a global audience and to place technology at the centre of the conversation for leaders navigating the future.
