On February 10, 2026, MeitY notified The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. These rules take effect on February 20, 2026 - ten days from notification to enforcement.

That’s not a typo. Ten days.

India just became the first major economy to establish an enforceable legal framework specifically targeting deepfakes and synthetic media. Not voluntary guidelines. Not industry codes of practice. Enforceable rules with criminal penalties, three-hour takedown windows, and mandatory AI content labeling across every major social media platform operating in the country.

If you’re a platform, an enterprise using AI-generated content, or a content creator who uses AI tools, this affects you directly. And you have very little time to comply.

Let me break down exactly what these rules require, what they mean, and what you should be doing right now.


What Changed: The Key Provisions

The amendment modifies the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Here are the provisions that matter.

Rule 2(1)(wa) introduces the legal definition of synthetically generated information (SGI): audio, visual, or audio-visual content that a computer resource artificially or algorithmically creates or alters in a way that “appears to be real, authentic or true” and depicts an individual or event in a manner likely to be perceived as “indistinguishable” from a natural person or real-world event.

This is a deliberately broad definition. It covers deepfake videos, AI-generated images, voice clones, synthetic text that impersonates real people, and manipulated media of any kind. The threshold is perception - if it could be mistaken for real, it’s SGI.

```mermaid
flowchart TB
    subgraph SGI["WHAT COUNTS AS SGI?"]
        direction TB
        V["Video<br/>Deepfake faces, body swaps,<br/>synthetic scenes"]
        A["Audio<br/>Voice clones, synthetic speech,<br/>manipulated recordings"]
        I["Images<br/>AI-generated photos,<br/>face swaps, altered images"]
        T["Text<br/>AI impersonation,<br/>synthetic profiles"]
    end

    subgraph Test["THE LEGAL TEST"]
        Q1["Artificially or algorithmically<br/>created or altered?"]
        Q2["Appears real, authentic,<br/>or true?"]
        Q3["Could be perceived as<br/>indistinguishable from reality?"]
    end

    SGI --> Q1 --> Q2 --> Q3
    Q3 --> |"All three: Yes"| Regulated["REGULATED<br/>under IT Rules 2026"]
    Q3 --> |"Any one: No"| NotSGI["NOT SGI<br/>(but other laws may apply)"]

    style SGI fill:#1e293b,stroke:#475569,color:#e2e8f0
    style Test fill:#1e3a5f,stroke:#3b82f6,color:#e2e8f0
    style Regulated fill:#7f1d1d,stroke:#ef4444,color:#e2e8f0
    style NotSGI fill:#064e3b,stroke:#10b981,color:#e2e8f0
```

Compressed Takedown Timelines

The compliance windows have been slashed across the board:

New Takedown Timelines

| Content category | Previous window | New window |
|---|---|---|
| General content removal (court/govt order) | 36 hours | 3 hours |
| Non-consensual intimate imagery (incl. deepfake nudity) | 24 hours | 2 hours |
| Grievance Appellate Committee orders | 24 hours | 2 hours |

For context, the US TAKE IT DOWN Act requires 48 hours for non-consensual intimate content. India’s two-hour window is the most aggressive takedown timeline in any major jurisdiction globally.

This is operationally brutal. Platforms like Meta, X, YouTube, WhatsApp, and Telegram need 24/7 content moderation teams with the authority to act within minutes of receiving a takedown order. Automated detection systems aren’t optional anymore - they’re the only way to meet these timelines at scale.
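To make the operational impact concrete, here is a minimal sketch of how a takedown queue might compute its compliance deadline from the order category and the time the order was received. The windows mirror the table above; the category labels and everything else are assumptions for illustration, not terms from the rules.

```python
from datetime import datetime, timedelta, timezone

# Takedown windows under the amended rules (see table above).
# The category keys are illustrative labels, not terms from the rules.
TAKEDOWN_WINDOWS = {
    "general_order": timedelta(hours=3),     # court/government orders
    "intimate_imagery": timedelta(hours=2),  # incl. deepfake nudity
    "gac_order": timedelta(hours=2),         # Grievance Appellate Committee orders
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the content must be removed."""
    return received_at + TAKEDOWN_WINDOWS[category]

if __name__ == "__main__":
    received = datetime(2026, 2, 21, 14, 0, tzinfo=timezone.utc)
    # Order received at 14:00 UTC -> removal due by 16:00 UTC
    print(takedown_deadline("intimate_imagery", received))
```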

SSMI-Specific Obligations: Rule 4(1A)(a)

Significant Social Media Intermediaries (SSMIs) - platforms with over 5 million registered users in India - face additional requirements. Users uploading content to SSMIs must declare whether it constitutes SGI. But crucially, SSMIs cannot simply rely on user declarations. They must deploy their own technical measures to verify those declarations independently.

This is a meaningful distinction. A user claiming “this is not AI-generated” doesn’t absolve the platform. The platform must have independent detection capability.

Mandatory Labeling and Metadata: Rule 3(3)(a)

This is the core compliance framework. Intermediaries that offer tools enabling the creation or distribution of SGI must:

  1. Prominently label the content in a manner that is easily noticeable and adequately perceivable. For audio content, this means a prominently prefixed audio disclosure.

  2. Embed permanent metadata or other appropriate technical provenance mechanisms, including a unique identifier that identifies the intermediary’s computer resource used to create the content.
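The rules don't prescribe a metadata format (C2PA, IPTC, or anything else), so the following is only a sketch of the general idea: stamping a generated image with an SGI flag, a unique identifier linking back to the generating tool, and a content hash, here written into PNG text chunks with Pillow. The field names and the provider ID are hypothetical.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, out_path: str, provider_id: str) -> dict:
    """Attach an illustrative provenance record to a PNG as text metadata."""
    img = Image.open(src_path)
    digest = hashlib.sha256(img.tobytes()).hexdigest()

    record = {
        "sgi": True,                              # content is synthetically generated
        "provider": provider_id,                  # identifies the generating tool/intermediary
        "asset_id": str(uuid.uuid4()),            # unique identifier for this asset
        "sha256": digest,                         # hash of the pixel data
        "created": datetime.now(timezone.utc).isoformat(),
    }

    meta = PngInfo()
    meta.add_text("sgi_provenance", json.dumps(record))
    img.save(out_path, pnginfo=meta)
    return record

# record = embed_provenance("avatar.png", "avatar_labeled.png", "example-ai-tool-v1")
```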

Anti-Tampering: Rule 3(3)(b)

Platforms must not enable the modification, suppression, or removal of labels or provenance markers. The MeitY FAQs specifically call out features like removing watermarks or exporting without metadata as examples of what platforms should not offer.

This is significant. Instagram filters that strip metadata, video editors that remove watermarks, export features that lose provenance - all of these become compliance violations. A minimal check of the kind a platform might run on its own export paths is sketched below.
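This is only an illustration under the same assumptions as the earlier embedding sketch (the `sgi_provenance` key is made up): after any export or transformation step, confirm the provenance record is still present and intact.

```python
import json

from PIL import Image

def provenance_intact(exported_path: str) -> bool:
    """Return True if the exported PNG still carries the provenance record."""
    img = Image.open(exported_path)
    # Merge text chunks stored before and after the image data.
    chunks = {**img.info, **(getattr(img, "text", {}) or {})}
    raw = chunks.get("sgi_provenance")
    if raw is None:
        return False
    record = json.loads(raw)
    return record.get("sgi") is True and "provider" in record

# assert provenance_intact("avatar_labeled_exported.png"), "export stripped provenance metadata"
```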

User Awareness: Rule 3(1)(c) and Rule 3(1)(ca)

Platforms must inform users at least once every three months about compliance requirements and the consequences of misusing AI tools. For platforms offering AI creation tools specifically, Rule 3(1)(ca) adds an obligation to warn users that misusing these resources to create unlawful SGI may attract criminal penalties under the Bharatiya Nyaya Sanhita, 2023, or the POCSO Act.

Platform Detection Obligations: Rule 3(3)

Here’s the provision that’s going to cost platforms real money. Intermediaries must deploy “reasonable and appropriate technical measures,” including automated tools, to prevent users from creating or sharing SGI that violates any law.

The user’s declaration alone won’t be relied upon. Platforms must use their own technology - AI detection tools - to verify that user-provided information about content provenance is accurate.

This means platforms can’t just add a checkbox asking “Is this AI-generated?” They need independent detection capability.
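What "don't rely on the declaration alone" might look like in practice, as a rough sketch: the user's self-declaration is recorded, but an independent detector score drives the labeling decision and escalations. The detector score and thresholds here are placeholders; real deployments would call a dedicated deepfake-detection model or service.

```python
from dataclasses import dataclass

@dataclass
class UploadReview:
    user_declared_sgi: bool
    detector_score: float        # 0.0 = looks authentic, 1.0 = looks synthetic
    label_as_sgi: bool
    flag_for_human_review: bool

# Threshold values are illustrative, not prescribed by the rules.
SGI_THRESHOLD = 0.80
DISAGREEMENT_THRESHOLD = 0.50

def review_upload(user_declared_sgi: bool, detector_score: float) -> UploadReview:
    """Combine the user's declaration with an independent detection score."""
    label = user_declared_sgi or detector_score >= SGI_THRESHOLD
    # If the user says "not AI" but the detector disagrees, escalate to humans.
    flag = (not user_declared_sgi) and detector_score >= DISAGREEMENT_THRESHOLD
    return UploadReview(user_declared_sgi, detector_score, label, flag)

# review_upload(False, 0.93) -> labeled as SGI and flagged for human review
```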


The Penalty Landscape

The IT Rules 2026 themselves don’t create new penalties - they create new obligations. But violations of these obligations trigger consequences under multiple existing laws.

Penalty Framework for Deepfake Violations

BNS Section 353: Misinformation / Public Mischief

Up to 3 years imprisonment + fine. Covers statements causing fear, alarm, or inciting enmity using synthetic content.

BNS Section 336: Digital Forgery / Impersonation

Up to 2 years imprisonment or fine or both. Covers AI-generated impersonation of real individuals.

BNS Section 356: Criminal Defamation

Covers criminal defamation committed online via deepfake content.

BNS Section 111: Organised Cyber Crime

Covers coordinated deepfake campaigns. Enhanced penalties for organised operations.

DPDP Act 2023: Data Protection Violations

Non-consensual deepfake use classified as breach. Fines up to Rs 250 crore.

Safe harbour clarification: Rule 2(1B) clarifies that intermediaries don’t lose safe harbour protection under Section 79 of the IT Act when they remove or disable access to SGI content in compliance with the rules. This gives platforms legal cover for proactive takedowns - and removes the excuse of “we can’t act without a court order.”


Why Now: The Deepfake Crisis in India

These rules didn’t emerge in a vacuum. India has experienced a surge of deepfake-related incidents that made regulatory action politically inevitable.

The Scale of the Problem

The numbers tell the story: deepfake-related cybercrime cases in India increased by 550% since 2019, with projected deepfake-related fraud losses reaching Rs 70,000 crore. Businesses lost an average of nearly $500,000 per deepfake-related incident in 2024, with large enterprises averaging $680,000. Financial losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone.

And it’s not just enterprises. McAfee found that 90% of Indians have encountered fake or AI-generated celebrity endorsements, with victims losing an average of Rs 34,500 to such scams.

Financial Fraud: Specific Cases

Pune startup fraud (2024): A Pune-based company lost Rs 1.8 crore after a finance team member received what appeared to be a video call from the company’s UK-based founder. The “founder” spoke fluently and requested an emergency payment. The call lasted three minutes. The funds were transferred in five. It was a deepfake.

Deutsche Bank VP fraud: A Vice President at Deutsche Bank India was conned into transferring Rs 1.08 crore after receiving a deepfake video call impersonating the bank’s global CEO. Pune Police cyber cell registered a case investigating whether the deepfake originated domestically or through international syndicates.

Arup Engineering ($25M): Globally, the UK-based Arup lost $25 million to a deepfake video conference attack where fraudsters impersonated multiple senior executives simultaneously on a single call.

Voice Cloning

Voice cloning has become particularly dangerous in India. In January 2025, a play school owner in Indore lost her entire savings of Rs 97,500 after receiving a call from what sounded exactly like her cousin - a UP police officer. The cloned voice claimed a friend needed emergency cardiac surgery and provided a QR code for payment. It was Madhya Pradesh’s first recorded AI voice cloning fraud.

McAfee’s research paints a grim picture: 47% of Indian adults have experienced or know someone who experienced an AI voice-cloning scam - nearly double the global average of 25%. 69% of Indians say they can’t distinguish a cloned voice from a real one. And 83% of Indian victims suffered monetary loss.

Celebrity and Political Deepfakes

In November 2023, a viral deepfake video superimposed actress Rashmika Mandanna’s face onto another person - she was subsequently appointed national ambassador for cyber safety. In January 2024, a deepfake of Sachin Tendulkar promoted a fake gaming app. In April 2024, both Aamir Khan and Ranveer Singh filed police complaints after deepfake videos showed them endorsing a political party. In 2025, Aishwarya Rai filed a personality rights lawsuit against tech giants for profiting from deepfake content.

The courts have responded. The Delhi High Court issued India’s first blanket personality rights order protecting Amitabh Bachchan (2022) and granted an omnibus injunction for Anil Kapoor (2023) restraining 16 entities from using his likeness via AI or face morphing.

IT Minister Ashwini Vaishnaw highlighted the stakes at the AI Impact Summit 2026: India is now in talks with ministers from over 30 countries on technical and legal solutions for deepfake misuse. The IT Rules 2026 Amendment positions India as a first mover on enforceable deepfake regulation.


What Platforms Must Do

Let me get specific about compliance requirements for platforms operating in India.

BEFORE FEB 20, 2026

  • 36-hour takedown window
  • No SGI definition in law
  • No mandatory AI labeling
  • No metadata requirements
  • No detection obligations
  • Quarterly user warnings (general)

AFTER FEB 20, 2026

  • 3-hour takedown window
  • Legal definition of SGI
  • Mandatory prominent labels
  • Permanent metadata + provenance
  • Automated detection required
  • Criminal penalty warnings for AI tools
  • Anti-tampering safeguards

Detection Technology

Platforms must deploy “reasonable and appropriate technical measures, including automated tools.” This is deliberately technology-neutral - MeitY isn’t prescribing specific watermarking technologies or standards. But the requirement is clear: you need detection capability, and “we rely on user self-reporting” isn’t sufficient.

The challenge here is that deepfake detection is an arms race. Current detection tools achieve 90-95% accuracy on known generation methods but struggle with novel techniques. The rules acknowledge this by using “to the extent technically feasible” language for metadata requirements. But the obligation to deploy detection tools is not qualified - it’s mandatory.

Content Provenance

Every piece of SGI must carry:

  • A visible label (text overlay, badge, or audio disclosure)
  • Embedded metadata that identifies provenance
  • A unique identifier linking to the platform or tool that created it

And these markers must not be strippable. Platforms must ensure their tools don’t offer features that remove watermarks or export without metadata.

This has implications for platform features. Video download buttons, screenshot tools, export functions - all need to preserve provenance markers. That’s a significant engineering challenge for platforms that have built features specifically designed to make sharing frictionless.


What Enterprises Must Do

If you’re an enterprise using AI-generated content in India - marketing materials, customer communications, internal training, product demos - these rules apply to you too.

If You Create AI Content

Any AI-generated content that could be perceived as depicting real people or events needs labeling. This includes:

  • Marketing videos using AI avatars or synthetic voices
  • Product demos with AI-generated imagery
  • Customer service bots that use voice synthesis
  • Internal training materials with AI-generated scenarios

The key test is whether the content “appears to be real, authentic or true” and could be perceived as “indistinguishable” from reality. If your AI avatar looks like a real person, it needs a label.
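For teams auditing their own content pipelines, that three-part test can be reduced to a simple triage checklist. The sketch below mirrors the definition quoted above; the field names are made up, and this is an internal screening aid, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    artificially_created_or_altered: bool  # generated or modified by a computer resource
    appears_real_authentic_or_true: bool   # could a viewer take it as genuine?
    indistinguishable_from_reality: bool   # depicts a person/event as if real

    def needs_sgi_labeling(self) -> bool:
        """All three prongs of the definition must hold for content to be SGI."""
        return (
            self.artificially_created_or_altered
            and self.appears_real_authentic_or_true
            and self.indistinguishable_from_reality
        )

# An AI avatar that looks like a real spokesperson:
#   ContentAssessment(True, True, True).needs_sgi_labeling()  -> True
# An obviously stylised cartoon explainer:
#   ContentAssessment(True, False, False).needs_sgi_labeling() -> False
```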

If You Use AI Detection

If your enterprise uses AI to detect fraud, verify identity, or moderate content, you need to ensure your detection capabilities can identify SGI. The rules create an implicit expectation that organisations deploy reasonable technical measures against synthetic content.

Data Protection Implications

Under the DPDP Act 2023, non-consensual deepfake use of personal data - including synthetic generation of someone’s likeness - constitutes a data protection breach with fines up to Rs 250 crore. If your AI system generates content using personal data without consent, you face penalties under both the IT Rules and the DPDP Act.


What Content Creators Must Do

If you use AI tools to create content - and let’s be honest, most content creators do now - here’s what changes.

Declare AI usage. When posting content, you must clearly state whether the video, image, or text was created with AI assistance. This isn’t optional.

Understand criminal exposure. Rule 3(1)(ca) explicitly warns that misusing AI creation tools to produce unlawful SGI may attract criminal penalties under the Bharatiya Nyaya Sanhita (up to 3 years imprisonment) or the POCSO Act (for content involving minors).

Don’t strip metadata. If your AI tool embeds provenance markers, don’t use third-party tools to remove them before posting. That violates the anti-tampering provisions.

Satire and parody are grey areas. The rules don’t explicitly carve out exceptions for satire, parody, or artistic expression. If your satirical deepfake of a politician could be “perceived as indistinguishable” from reality, it may fall under SGI regulation. This is one of the most contentious aspects of the rules, and legal challenges are likely.


The Criticism: What’s Wrong with These Rules

These rules aren’t without problems, and it’s worth being honest about the concerns.

The Three-Hour Window Is Operationally Unrealistic

For smaller platforms and messaging services, a three-hour takedown window is extremely challenging. Large platforms like Meta and Google have 24/7 moderation teams. But smaller Indian platforms, regional language services, and encrypted messaging apps face genuine operational difficulties.

WhatsApp is particularly problematic. End-to-end encryption means the platform can’t scan content. How do you comply with detection obligations when you architecturally cannot see the content? This tension between encryption and content moderation remains unresolved.

Detection Technology Isn’t Ready

Current deepfake detection tools have significant limitations:

  • They work best on known generation methods and struggle with novel techniques
  • Detection accuracy drops significantly for non-English content and Indian faces
  • Real-time detection at scale remains computationally expensive
  • Adversarial techniques specifically designed to evade detection are improving faster than detection itself

Mandating detection deployment when the technology has known limitations creates compliance risk for platforms that deploy tools in good faith but miss sophisticated deepfakes.

Free Expression Concerns

The Internet Freedom Foundation has raised constitutional concerns about the rules. The broad definition of SGI, combined with the three-hour takedown window and criminal penalties, risks chilling legitimate expression. Political satire, artistic expression, and journalism that use AI tools could all be swept up by over-compliance - platforms removing lawful content pre-emptively to avoid liability.

The lack of explicit carve-outs for satire, parody, research, and journalism is a significant gap.

Startup Impact

Indian startups building AI content tools face disproportionate compliance burdens. The costs of implementing detection technologies, labeling systems, and metadata infrastructure add up quickly. India Tech Desk reported that these regulations could “burden startups further” at a time when India is trying to position itself as an AI innovation hub.


Global Context: How India Compares

India isn’t the only country tackling deepfakes, but it’s the first major economy to have enforceable rules specifically targeting synthetic media.

EU AI Act (August 2026): The EU requires labeling of AI-generated content but with a longer implementation timeline. The EU’s Code of Practice on marking and labelling is still being developed, with mandatory compliance expected from August 2026 - six months after India’s rules take effect.

US: The federal TAKE IT DOWN Act (May 2025) criminalises non-consensual intimate deepfakes with up to 3 years in prison and a 48-hour platform takedown. The DEFIANCE Act (January 2026) provides civil remedies with damages up to $250,000. But beyond these narrow federal laws, it’s a patchwork: 47 states now have deepfake laws (64 new laws enacted in 2025 alone), primarily targeting intimate imagery and election manipulation. No federal detection or labeling mandates exist.

China: Has had algorithmic transparency and deepfake regulations since 2023, making it the earliest mover. China’s approach requires providers of “deep synthesis” services to label content and maintain logs. India’s approach is broader in scope but less prescriptive about specific technical requirements.

South Korea: Criminalised non-consensual deepfake pornography in 2024 with penalties up to 5 years imprisonment. Focused scope but strong enforcement.

Global Deepfake Regulation Timeline

  • 2023: China — Deep Synthesis Regulations (first mover)
  • 2024: South Korea — Deepfake pornography criminalised (up to 5 yrs)
  • May 2025: US — TAKE IT DOWN Act (intimate deepfakes, 48-hr takedown)
  • Feb 2026: India — IT Rules 2026 Amendment (SGI framework, 2-3 hr takedown)
  • Aug 2026: EU — AI Act labeling requirements (mandatory compliance)

India’s approach is notable for its breadth (covering all SGI, not just specific categories), its speed (ten days from notification to enforcement), and its enforcement mechanism (leveraging existing criminal law rather than creating new penalties).


What Happens Next

Immediate (February 20 - March 2026)

Platforms will scramble to update terms of service, deploy labeling mechanisms, and establish three-hour takedown workflows. Expect visible changes on Instagram, YouTube, and X in terms of AI content labels within weeks.

Short-Term (Q2 2026)

The first enforcement actions will set the tone. Will MeitY go after platforms that miss the three-hour window? Will criminal prosecutions follow for malicious deepfake creators? The answer will determine whether these rules have teeth.

Medium-Term (H2 2026)

Legal challenges are likely. The constitutionality of the three-hour takedown window, the breadth of the SGI definition, and the adequacy of free expression protections will all be tested in court.

Platform-level AI detection will improve rapidly as compliance pressure drives investment. Content provenance standards will begin to converge as the industry finds common approaches to labeling and metadata.

Long-Term

India’s approach will influence other Global South nations. With India in talks with 30+ countries on AI regulation, the IT Rules 2026 template is likely to be adapted by jurisdictions facing similar challenges with synthetic media.


The Bottom Line

The IT Rules 2026 Amendment is a significant regulatory intervention. It’s imperfect - the three-hour window is aggressive, the detection technology isn’t mature, and the free expression implications need resolution. But it addresses a genuine crisis.

Deepfake fraud is costing Indian businesses crores. Political deepfakes threaten democratic processes. Non-consensual intimate imagery ruins lives. The alternative to imperfect regulation wasn’t perfect regulation - it was no regulation, and the status quo was untenable.

For enterprises: audit your AI content workflows now. Identify where you create, distribute, or host synthetic media. Implement labeling. Update your terms of service. Train your teams on the new obligations.

For platforms: the three-hour clock is ticking. Deploy detection, build takedown workflows, and document your compliance processes. The cost of non-compliance - loss of safe harbour, criminal liability, DPDP Act fines - far exceeds the cost of implementation.

For content creators: declare your AI usage, keep your provenance markers intact, and understand that the line between creative AI use and criminal misuse just got a lot sharper.

February 20, 2026 isn’t a deadline. It’s the starting line for a new era of synthetic media regulation in India. Whether these rules strike the right balance between safety and expression will unfold over the coming months and years. But the direction is set, and the enforcement clock is running.