February 17, 2026
India's Seven Sutras for AI Governance: What They Actually Mean for Enterprises
As the India AI Impact Summit 2026 kicks off at Bharat Mandapam this week, one document deserves more attention than it’s getting: the India AI Governance Guidelines, released by MeitY on November 5, 2025.
This isn’t another whitepaper full of aspirational language. It’s the first concrete signal of how India intends to govern AI - and it’s built on a framework that’s already been tested in the financial sector.
The approach is anchored in seven guiding principles - called “sutras” - that together form the most coherent AI governance framework to emerge from any major economy. Not because they’re the most restrictive (they’re not), but because they do something the EU AI Act and the US executive orders haven’t managed: they balance innovation with accountability in a way that can actually be implemented.
Let me break down what these sutras mean, where they came from, and what enterprises operating in India need to do about them.
The Origin Story: From RBI to National Framework
The seven sutras didn’t appear from nowhere. They trace directly back to the RBI’s FREE-AI Committee - the Framework for Responsible and Ethical Enablement of Artificial Intelligence - which published its report on August 13, 2025.
That committee, set up in December 2024, surveyed banks, NBFCs, fintechs, and global regulators before producing a framework built around 7 Sutras, 2 Sub-frameworks, and 6 Pillars that together shaped 26 recommendations. The six strategic pillars - Infrastructure, Policy, Capacity, Governance, Protection, and Assurance - gave the financial sector a concrete governance blueprint.
MeitY took those same seven principles, adapted them for cross-sector application, and aligned them with national priorities. The result is a framework that’s been road-tested in one of India’s most regulated sectors before being applied nationally.
This is exactly the “extend-don’t-replace” approach we’ve advocated for. Rather than inventing governance from scratch, India built on institutional knowledge from a regulator that’s been managing algorithmic risk for over a decade.
flowchart TB
subgraph Origin["ORIGIN (Dec 2024 - Aug 2025)"]
RBI["RBI FREE-AI Committee<br/>Dec 2024"]
Survey["Survey: Banks, NBFCs,<br/>Fintechs, Global Regulators"]
FREEAI["FREE-AI Report<br/>Aug 13, 2025"]
end
subgraph National["NATIONAL ADAPTATION (2025)"]
Draft["Draft Report<br/>Jan 6, 2025"]
Consult["Public Consultation<br/>2,500+ Submissions"]
Committee["Drafting Committee<br/>Prof. Balaraman Ravindran"]
Final["Final Guidelines<br/>Nov 5, 2025"]
end
subgraph Summit["GLOBAL STAGE (Feb 2026)"]
Summit2026["India AI Impact Summit<br/>Feb 16-20, Bharat Mandapam"]
Global["30+ Nations in Talks<br/>on AI Regulation"]
end
RBI --> Survey --> FREEAI
FREEAI --> Draft --> Consult --> Committee --> Final
Final --> Summit2026 --> Global
style Origin fill:#1e293b,stroke:#475569,color:#e2e8f0
style National fill:#1e3a5f,stroke:#3b82f6,color:#e2e8f0
style Summit fill:#064e3b,stroke:#10b981,color:#e2e8f0
The Seven Sutras, Decoded
Let me go through each sutra and explain what it actually requires - not in aspirational terms, but in practical implementation terms.
1. Trust
This is the meta-principle. AI systems must earn and maintain the trust of users, affected populations, and regulators. Trust isn’t a checkbox - it’s an outcome of getting the other six sutras right.
What this means in practice: You need to be able to demonstrate - not just claim - that your AI systems are reliable, safe, and operating as intended. This requires continuous monitoring, not one-time certification.
2. People First
AI governance must centre human welfare. Technology serves humanity, not the other way around. This sutra explicitly positions AI as a tool for societal benefit, with human oversight as a non-negotiable requirement.
What this means in practice: Every AI system needs a human-in-the-loop or human-on-the-loop mechanism, especially for high-stakes decisions. If your lending model denies a loan, there must be a human who can review and override that decision. If your diagnostic AI flags a condition, a qualified professional must validate it.
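To make the human-on-the-loop requirement concrete, here’s a minimal sketch of a review-and-override gate for a lending model. All names, thresholds, and fields are illustrative assumptions, not anything prescribed by the guidelines:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class LoanDecision:
    applicant_id: str
    model_score: float               # model's estimated probability of default
    model_outcome: str               # "approve" or "deny" suggested by the model
    reviewed_by: Optional[str] = None
    final_outcome: Optional[str] = None
    review_notes: str = ""
    decided_at: Optional[datetime] = None

def requires_human_review(decision: LoanDecision, deny_threshold: float = 0.5) -> bool:
    """Route every adverse or borderline outcome to a human reviewer."""
    return decision.model_outcome == "deny" or abs(decision.model_score - deny_threshold) < 0.05

def record_human_decision(decision: LoanDecision, reviewer: str,
                          final_outcome: str, notes: str) -> LoanDecision:
    """The reviewer can confirm or override the model; either way, an audit trail is kept."""
    decision.reviewed_by = reviewer
    decision.final_outcome = final_outcome
    decision.review_notes = notes
    decision.decided_at = datetime.now(timezone.utc)
    return decision
```

The point isn’t the specific code - it’s that the override path and the audit trail exist before the system goes live.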
3. Innovation Over Restraint
This is where India’s approach diverges most sharply from the EU. The guidelines explicitly prioritise enabling innovation over imposing restrictions. IT Secretary S. Krishnan explained at the launch: “India has consciously chosen not to lead with regulation but to encourage innovation while studying global approaches. Wherever possible, we will rely on existing laws and frameworks rather than rush into new legislation.”
What this means in practice: India is not going to mandate pre-deployment AI audits or create an approval bureaucracy for AI models. Instead, it’s using a principles-based approach with graded accountability. Low-risk AI systems face minimal requirements; high-risk ones face more scrutiny. But the default posture is freedom to innovate, not a requirement to seek permission.
4. Fairness and Equity
AI systems must not discriminate unfairly. This goes beyond Western bias-testing frameworks - it must account for India’s specific diversity dimensions: caste, language, region, religion, economic status, and digital literacy.
What this means in practice: Standard fairness benchmarks designed for Western populations won’t cut it. You need to test for bias across Indian-specific protected characteristics. A lending model that’s fair across gender and race but discriminatory across caste or region isn’t compliant with this sutra.
India-Specific Fairness Dimensions

Western bias testing (4 primary dimensions):
- Gender
- Race / Ethnicity
- Age
- Disability

Indian fairness testing (8+ intersecting dimensions):
- Gender
- Caste / Tribe
- Religion
- Language / Script
- Region (State / District)
- Economic Status (BPL / APL)
- Urban vs Rural
- Digital Literacy Level
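Here’s a minimal sketch of what group-wise disparity testing across these dimensions might look like with pandas. The column names, the input file, and the 80% ratio threshold are illustrative assumptions, not requirements from the guidelines:

```python
import pandas as pd

def approval_rate_ratio(df: pd.DataFrame, group_col: str,
                        outcome_col: str = "approved") -> pd.Series:
    """Approval rate per group, expressed as a ratio to the best-served group
    (a disparate-impact style check)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Illustrative: run the same check across India-specific dimensions, not just gender and age.
decisions = pd.read_csv("lending_decisions.csv")   # assumed columns listed below
for dim in ["gender", "caste_category", "religion", "home_language",
            "state", "bpl_flag", "urban_rural", "digital_literacy_band"]:
    ratios = approval_rate_ratio(decisions, dim)
    flagged = ratios[ratios < 0.8]                 # 80% rule of thumb, not a legal threshold
    if not flagged.empty:
        print(f"Potential disparity on '{dim}':\n{flagged}\n")
```

Intersectional checks (for example, caste × rural × language) matter too, since groups that look fine in isolation can be disadvantaged in combination.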
5. Accountability
The guidelines implement a graded accountability and liability regime across the AI value chain. This is critical: accountability is distributed among developers, deployers, and end-users based on their role and the degree of control they exercise.
What this means in practice: If you’re deploying a third-party AI model, you’re not absolved of responsibility just because you didn’t build it. You’re accountable for how you deploy it, what data you feed it, and what decisions you make based on its outputs. The guidelines explicitly recognise that the AI value chain has many actors and apportion accountability accordingly.
6. Understandable by Design
AI systems must be interpretable to the extent necessary for their risk level. This isn’t blanket explainability - it’s proportionate to the stakes involved.
What this means in practice: A content recommendation algorithm doesn’t need the same level of explainability as a criminal risk assessment tool. But for high-stakes applications - lending, healthcare, law enforcement - you need to be able to explain individual decisions in terms that affected parties can understand. “The model said so” isn’t sufficient.
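For a linear credit model, one way to move past “the model said so” is to generate per-applicant reason codes from the model’s own coefficients. This is a sketch assuming a scikit-learn logistic regression where class 1 means “deny”; the feature names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reason_codes(model: LogisticRegression, feature_names: list[str],
                 x: np.ndarray, top_k: int = 3) -> list[str]:
    """Rank features by how strongly they pushed this applicant towards denial."""
    contributions = model.coef_[0] * x          # per-feature contribution to the log-odds
    order = np.argsort(contributions)[::-1]     # largest push towards class 1 ("deny") first
    return [f"{feature_names[i]} (contribution {contributions[i]:+.2f})"
            for i in order[:top_k]]

# Illustrative usage: explain one specific denial in terms an applicant can act on.
# codes = reason_codes(model, ["credit_utilisation", "missed_payments", "tenure_months"], x_applicant)
```

For non-linear models you’d reach for attribution methods such as SHAP instead, but the obligation is the same: an individual, human-readable explanation for high-stakes decisions.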
7. Safety, Resilience, and Sustainability
AI systems must be secure, robust against adversarial attacks, and environmentally sustainable. This sutra covers everything from cybersecurity to energy consumption.
What this means in practice: You need adversarial testing, not just accuracy benchmarks. You need fallback mechanisms when models fail. And increasingly, you need to account for the environmental cost of your AI infrastructure - something Indian regulators are starting to pay attention to.
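As a sketch of the resilience half of this sutra, here’s one fallback pattern: prefer the model, but degrade to a deterministic rule set when it fails or returns something implausible, and log the event either way. The sanity check and the fallback are placeholders:

```python
import logging

logger = logging.getLogger("ai_resilience")

def score_with_fallback(primary_model, features: dict, rule_based_fallback) -> dict:
    """Prefer the model, degrade gracefully to rules, and record the incident."""
    try:
        score = primary_model.predict(features)
        if not 0.0 <= score <= 1.0:                          # sanity check on the output range
            raise ValueError(f"implausible score {score}")
        return {"score": score, "source": "model"}
    except Exception as exc:
        logger.warning("model unavailable or invalid, using fallback: %s", exc)
        return {"score": rule_based_fallback(features), "source": "fallback"}
```

Adversarial robustness testing is a separate exercise, but the fallback path is what keeps a model failure from becoming an outage.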
The Risk Taxonomy: Four Tiers
The guidelines define a four-tier risk classification system that’s deliberately calibrated for India’s context.
flowchart LR
subgraph Tiers["AI RISK CLASSIFICATION"]
direction TB
P["PROHIBITED<br/>Social scoring,<br/>emotion inference<br/>in employment"]
H["HIGH RISK<br/>Safety, rights, livelihoods.<br/>Red-teaming, impact<br/>assessments, human oversight"]
M["MEDIUM RISK<br/>Moderate safeguards,<br/>documentation,<br/>monitoring"]
L["LOW RISK<br/>Regulatory-lite,<br/>voluntary codes,<br/>self-certification"]
end
P ~~~ H ~~~ M ~~~ L
style P fill:#7f1d1d,stroke:#ef4444,color:#fecaca
style H fill:#78350f,stroke:#f59e0b,color:#fef3c7
style M fill:#1e3a5f,stroke:#3b82f6,color:#bfdbfe
style L fill:#064e3b,stroke:#10b981,color:#a7f3d0
What makes this distinct from the EU’s four-tier system is the set of India-specific risk focus areas: national security considerations, deepfakes targeting women, child safety, language bias in multilingual contexts, caste discrimination, and harms to digitally underserved populations. The risk assessment is context-specific rather than based on generic global risk grids.
The guidelines also explicitly state that “a separate AI law is not needed at this stage.” Instead, the framework adapts existing legal structures - the IT Act, DPDP Act, Consumer Protection Act, and sectoral regulations. This is a deliberate choice: as EY India put it, it’s a “conscious bet on innovation” that avoids heavy pre-approval mechanisms while emphasising human accountability.
NASSCOM described the approach as a “balanced and innovation-centred blueprint that prioritises coordination over control” - an “agile, principle-based approach that supports technological advancement while managing risks through evidence-driven, pragmatic tools.”
The Institutional Architecture
The guidelines don’t just state principles - they propose institutional structures to enforce them. Three new bodies are proposed:
flowchart TB
subgraph Governance["PROPOSED AI GOVERNANCE ARCHITECTURE"]
AIGG["AI Governance Group<br/>(AIGG)"]
TPEC["Technology & Policy<br/>Expert Committee (TPEC)"]
AISI["AI Safety Institute<br/>(AISI)"]
end
AIGG --> |"Policy coordination<br/>across ministries"| Ministries["Sectoral<br/>Regulators"]
TPEC --> |"Specialist technical<br/>and policy advice"| AIGG
AISI --> |"Testing standards,<br/>safety research,<br/>risk assessment"| TPEC
Ministries --> RBI["RBI<br/>(Financial)"]
Ministries --> SEBI["SEBI<br/>(Markets)"]
Ministries --> NMC["NMC<br/>(Healthcare)"]
Ministries --> Others["Other<br/>Regulators"]
style Governance fill:#1e293b,stroke:#475569,color:#e2e8f0
The AI Governance Group (AIGG) coordinates policy across ministries. This is the body intended to prevent the kind of coordination failures plaguing EU AI Act implementation. Instead of each regulator independently interpreting AI rules, AIGG ensures consistency.
The Technology and Policy Expert Committee (TPEC) provides specialist advice. This is where technical expertise meets policy design - a gap that has undermined AI governance efforts in nearly every other jurisdiction.
The AI Safety Institute (AISI) focuses on testing standards, safety research, and risk assessment. Think of it as India’s version of the UK AISI, but embedded within a governance framework that actually connects it to regulatory enforcement.
The National AI Incident Database
One proposal that deserves more attention: India plans to operationalise a National AI Incidents Database as part of the medium-term implementation.
This database will record, classify, and analyse safety failures, biased outcomes, and security breaches nationwide. Reports will come from public bodies, private entities, researchers, and civil society organisations. Crucially, the system is designed as non-punitive - focused on learning from actual incidents rather than punishing hypothetical harms.
Think of it as India’s version of aviation incident reporting, applied to AI. When a lending model exhibits caste-based bias in production, or a diagnostic AI misclassifies a condition, or a deepfake causes financial fraud - these incidents get recorded, analysed, and fed back into governance standards.
The database draws on the OECD AI Incident Monitor but is adapted for India’s sectoral realities. This is exactly the kind of infrastructure that makes principles actionable - without it, the sutras remain aspirational. With it, governance becomes evidence-based.
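The reporting schema hasn’t been published, but an internal incident record designed to feed such a database might look like the sketch below. Every field name here is an assumption, not part of the guidelines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    # Illustrative fields only; the official reporting format has not been published.
    system_name: str                  # e.g. "retail-lending-scorer-v3"
    sector: str                       # e.g. "financial services"
    incident_type: str                # "safety failure" | "biased outcome" | "security breach"
    description: str
    affected_groups: list[str] = field(default_factory=list)   # e.g. ["caste_category", "rural users"]
    severity: str = "medium"          # internal triage label, not a regulatory grading
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    remediation: str = ""             # what was done - this is the non-punitive learning loop
```

Capturing incidents in a structured, machine-readable form internally is what makes external reporting cheap once the database goes live.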
What This Means for Enterprises: A Practical Checklist
If you’re deploying AI systems in India, here’s what you should be doing now - not when regulation becomes mandatory, but now.
Immediate Actions (Next 90 Days)
1. Map your AI systems to risk tiers. The guidelines use a graded approach. Identify which of your systems are high-risk (lending decisions, healthcare diagnostics, hiring) vs. low-risk (content recommendations, internal analytics). A first-pass mapping sketch follows this list.
2. Establish an AI governance committee. The guidelines reference setting up a responsible AI officer or committee within organisations. If you don’t have one, create one. If you have one that only meets quarterly, make it monthly.
3. Document your AI value chain. For each AI system, document who developed it, who deployed it, what data it uses, and who’s accountable for its outputs. The graded liability regime requires you to know your supply chain.
4. Begin India-specific fairness testing. Don’t rely on off-the-shelf Western bias benchmarks. Build test sets that reflect Indian diversity dimensions - caste, language, region, economic status, urban/rural divide.
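Expanding on item 1: a rough first-pass mapping of an AI inventory onto the four tiers might start like the sketch below. The keyword lists are illustrative, not the guidelines’ official taxonomy, and a real classification still needs human judgement per system:

```python
def classify_risk_tier(system: dict) -> str:
    """Rough first-pass mapping onto the four tiers; keywords are illustrative only."""
    use_case = system["use_case"].lower()
    if any(k in use_case for k in ("social scoring", "emotion inference in employment")):
        return "prohibited"
    if any(k in use_case for k in ("lending", "credit", "diagnosis", "hiring", "law enforcement")):
        return "high"          # touches safety, rights, or livelihoods
    if system.get("customer_facing", False):
        return "medium"
    return "low"               # internal analytics, recommendations, etc.

inventory = [
    {"name": "loan-approval-model", "use_case": "retail lending decisions", "customer_facing": True},
    {"name": "ticket-triage-bot", "use_case": "internal IT support routing", "customer_facing": False},
]
for s in inventory:
    print(s["name"], "->", classify_risk_tier(s))
```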
Medium-Term Actions (6-12 Months)
5. Implement human oversight mechanisms. For high-risk systems, ensure there’s a documented process for human review, override, and escalation. This isn’t optional under the People First sutra.
6. Build explainability for high-stakes decisions. For lending, insurance, hiring, and healthcare AI, implement explanation mechanisms that can articulate why a specific decision was made for a specific individual.
7. Set up continuous monitoring. Point-in-time audits won’t satisfy the Trust sutra. Implement drift detection, fairness monitoring, and performance tracking that runs continuously (a drift-check sketch follows this list).
8. Prepare for the AI Safety Institute. When AISI establishes testing standards, you’ll need to comply. Start building internal testing capabilities now - adversarial testing, red-teaming, robustness evaluation.
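On item 7, one common building block for continuous monitoring is a population stability index check between your validation baseline and live production scores. A minimal sketch - the file names and the 0.2 alert threshold are assumptions, not a regulatory standard:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference score distribution and a live window;
    values above roughly 0.2 are commonly treated as meaningful drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative weekly job: compare this week's production scores against the validation baseline.
baseline_scores = np.load("baseline_scores.npy")   # assumed artefact saved at validation time
live_scores = np.load("scores_week_07.npy")        # assumed export from production logging
psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:
    print(f"Score drift detected (PSI={psi:.3f}) - trigger a review under the Trust sutra")
```

The same loop should also recompute the fairness ratios from earlier on each window, so drift in who the model serves is caught as quickly as drift in how well it scores.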
Enterprise Readiness Framework
- Map: risk-tier your AI systems
- Govern: establish an AI governance committee
- Test: India-specific fairness testing
- Monitor: continuous oversight
How India Compares: The Global Governance Landscape
The seven sutras framework sits in a unique position globally. Let me map it against the other major approaches.
The EU AI Act is prescriptive and risk-based. It classifies AI systems into risk categories and imposes specific requirements at each level. The approach is comprehensive but complex - and implementation is already struggling with coordination problems across 27 member states and multiple regulatory bodies.
The US approach remains fragmented. Executive orders come and go with administrations. There’s no federal AI legislation, and the state-level patchwork creates compliance nightmares for enterprises operating nationally.
China’s approach is sector-specific and enforcement-heavy. They’ve moved fast on deepfake regulation, algorithmic transparency, and generative AI rules. But the approach is top-down and leaves little room for industry input.
India’s approach is principle-based and adaptive. Rather than prescribing specific requirements, the sutras provide a framework that can be interpreted and applied across sectors. This is both the strength and the risk: it gives flexibility but requires sectoral regulators to translate principles into concrete rules.
The Economic Context
This isn’t just about governance for governance’s sake. PwC India estimates that AI has the potential to contribute USD 550 billion to five priority sectors - agriculture, education, energy, healthcare, and manufacturing - by 2035. Getting governance right isn’t about slowing innovation down. It’s about making sure that half-trillion-dollar opportunity actually materialises without creating systemic risks.
At the AI Impact Summit, PM Modi declared: “We are at the dawn of the AI age that will shape the course of humanity.” IT Minister Ashwini Vaishnaw put it more pointedly: “Innovation without trust is a liability.”
That’s the core tension the seven sutras are trying to resolve. India needs AI to leapfrog its development challenges. But it also needs AI that doesn’t discriminate against 600 million rural citizens, doesn’t amplify existing inequalities, and doesn’t create systemic risks in financial services that serve 1.4 billion people.
India is now in talks with over 30 nations on AI regulation frameworks. The seven sutras aren’t just a domestic document - they’re a template that India is positioning for global adoption, particularly among Global South nations that face similar challenges of deploying AI at population scale with limited institutional infrastructure.
The Honest Assessment
Let me be direct about both the strengths and weaknesses of this framework.
What India got right:
The principle-based approach avoids the EU’s mistake of trying to regulate technology that changes faster than legislation can be updated. The graded liability regime acknowledges that AI governance isn’t one-size-fits-all. The institutional architecture (AIGG, TPEC, AISI) provides coordination mechanisms that the US and EU are still struggling to build. And adapting from the RBI’s tested framework means these principles have already survived contact with reality.
What could go wrong:
Principles without enforcement are suggestions. The guidelines are currently non-binding - they’re governance guidelines, not law. If sectoral regulators don’t translate these sutras into concrete, enforceable rules with real penalties, the framework risks becoming another beautifully written document that changes nothing.
The 2,500+ public consultation submissions show strong engagement. But engagement doesn’t equal compliance. The real test comes when a major enterprise violates a sutra and the question becomes: what happens next?
The other risk is capacity. Indian regulators have domain expertise but often lack AI technical capacity. The TPEC and AISI are designed to fill this gap, but they don’t exist yet. Until they’re staffed and operational, there’s a governance gap between principles and practice.
What Happens Next
The AI Impact Summit this week (February 16-20, 2026) is the platform where these guidelines go global. With PM Modi delivering the inaugural address, Sam Altman and Sundar Pichai in attendance, and over 600 startups participating, the sutras are getting unprecedented visibility.
But visibility isn’t implementation. Here’s what to watch for in the next 12 months:
- Sectoral translation. Will regulators like RBI, SEBI, and IRDAI issue sector-specific guidance based on the seven sutras? The RBI is already ahead with FREE-AI, but others need to follow.
- AISI establishment. When does the AI Safety Institute become operational? This is the body that will define testing standards and make the Safety sutra enforceable.
- Enforcement precedent. The first significant enforcement action under these guidelines will set the tone. Will it be a slap on the wrist or a meaningful penalty?
- International adoption. Will other Global South nations adopt similar frameworks? India is actively positioning the sutras as a template.
For enterprises, the message is clear: don’t wait for these guidelines to become mandatory. The direction is set. The institutions are being built. The international pressure is mounting. Start implementing now, and you’ll be ahead of the curve when principles become regulations.
The seven sutras aren’t just governance principles. They’re India’s bid to prove that you can have both rapid AI innovation and responsible deployment - not as competing objectives, but as reinforcing ones.
Whether India pulls it off will determine not just the country’s AI future, but potentially the governance model for billions of people across the Global South who are watching this experiment closely.