India proposes techno-legal framework to guide responsible AI growth

In a move aimed at shaping trusted artificial intelligence development, India has outlined a new approach to AI governance that seeks to balance innovation with risk management. The Office of the Principal Scientific Adviser has released a white paper proposing a “techno-legal” framework that combines legal safeguards, technical controls, and institutional oversight to support responsible AI adoption.

Titled Strengthening AI Governance Through Techno-Legal Framework, the white paper presents a structured mechanism to operationalise India’s AI governance ecosystem. It underlines that the effectiveness of any policy depends on strong and consistent implementation. The framework is designed to strengthen collaboration across industry, academia, government bodies, AI developers, deployers, and users.

At the centre of the proposal is the formation of the AI Governance Group (AIGG), to be chaired by the Principal Scientific Adviser. The group will work with ministries, regulators, and policy advisory bodies to address “the current fragmentation in governance and operational processes”. Its role includes setting uniform standards for responsible AI, identifying regulatory gaps, and recommending legal amendments. The AIGG will also focus on “promoting responsible AI innovation and the beneficial deployment of AI in key sectors”.

To support this effort, a Technology and Policy Expert Committee (TPEC) will be set up within the Ministry of Electronics and Information Technology. This committee will bring together experts in law, public policy, machine learning, AI safety, and cybersecurity. As outlined in the paper, the TPEC will advise the AIGG on issues of national importance, including global AI policy trends and emerging AI capabilities.

The framework also proposes the creation of an AI Safety Institute (AISI). The institute will act as the main centre for “evaluating, testing, and ensuring the safety of AI systems deployed across sectors”. It will support the IndiaAI mission by developing tools to address content authentication, bias, and cybersecurity. The AISI will publish risk reports and compliance reviews, and collaborate with global safety institutes and standards bodies.

To track risks after deployment, a national AI Incident Database will be established. It will record and analyse safety failures, biased outcomes, and security breaches. While inspired by global models such as the OECD AI Incident Monitor, the system will be “adapted to fit India’s sectoral realities and governance structures”. Reports will be submitted by public bodies, private entities, researchers, and civil society groups.
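For illustration only, here is a minimal sketch of what a structured entry in such an incident database might look like. The white paper does not publish a schema; every field name, category, and identifier below is an assumption drawn from the paper's description of what the database will record and who may report.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Incident categories taken from the white paper's description of
# what the database will record (assumed labels, not an official taxonomy).
class IncidentCategory(Enum):
    SAFETY_FAILURE = "safety_failure"
    BIASED_OUTCOME = "biased_outcome"
    SECURITY_BREACH = "security_breach"

# Reporter types the paper says may submit reports (assumed labels).
class ReporterType(Enum):
    PUBLIC_BODY = "public_body"
    PRIVATE_ENTITY = "private_entity"
    RESEARCHER = "researcher"
    CIVIL_SOCIETY = "civil_society"

@dataclass
class IncidentReport:
    """Hypothetical record for one national AI Incident Database entry."""
    incident_id: str            # hypothetical identifier format
    category: IncidentCategory
    reporter: ReporterType
    sector: str                 # e.g. "healthcare", "finance"
    summary: str
    reported_on: date
    systems_affected: list[str] = field(default_factory=list)

# Example submission under the assumptions above.
report = IncidentReport(
    incident_id="IN-2026-0001",
    category=IncidentCategory.BIASED_OUTCOME,
    reporter=ReporterType.RESEARCHER,
    sector="recruitment",
    summary="Screening model disproportionately rejected candidates from one region.",
    reported_on=date(2026, 1, 24),
)

A structured record of this kind is what would let the database support the analysis role the paper assigns it, since free-text reports alone are hard to aggregate across sectors.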

The white paper also encourages voluntary industry commitments and self-regulation. Practices such as transparency reporting and red-teaming are highlighted as critical. The government plans to offer financial, technical, and regulatory incentives to organisations leading in responsible AI, with a focus on “consistency, continuous learning and innovation”.

