Government of India

Deepfakes in India: Legal Landscape, Judicial Responses, and a Practical Playbook for Enforcement

The escalating proliferation in India of hyper-realistic “deepfake” audio-visual content, generated through advanced artificial intelligence techniques such as Generative Adversarial Networks (GANs), has intersected profoundly with electoral processes, reputational harm, and gendered forms of abuse. With over 850 million internet users[1] and the world’s largest democracy, India faces acute risks from deepfakes, which can manipulate perceptions at scale, undermine public trust, and exacerbate societal divisions. Their misuse, spanning misinformation, defamation, financial fraud, and national security threats, has precipitated a global crisis of trust in digital information, compelling courts, regulatory authorities, and digital platforms to navigate uncharted terrain. Whilst deepfake technology holds legitimate applications in education, entertainment, accessibility, and scientific research, its pernicious potential necessitates robust countermeasures.

This article delineates India’s evolving legal architecture, pivotal judicial pronouncements, and pragmatic strategies for victims and intermediaries, whilst underscoring constitutional safeguards and emergent regulations aimed at countering misinformation and synthetic media. It delves into multifaceted dimensions: ethical quandaries surrounding algorithmic fairness, the perils of social engineering, and technological approaches to privacy engineering, drawing upon game-theoretic privacy models and data-protection-by-design principles. Attuned to recent developments, it integrates the Digital Personal Data Protection Act (DPDPA) 2023[2], the Bharatiya Nyaya Sanhita (BNS) 2023[3], recent judicial edicts, and parliamentary deliberations, serving as a guide for the public, platforms, policymakers, and enforcement agencies.

Key Messages

  1. Deepfakes are illegal when used to defame, impersonate, deceive, or depict sexual content without consent, with particular gravity during elections.[4]
  2. Intermediaries must act within 36 hours to remove harmful deepfake content and apply permanent, machine-readable labels to AI-generated media.[5]
  3. Political parties must remove deepfake posts within 3 hours during the Model Code of Conduct (MCC) period, per Election Commission of India (ECI) directives.[6]
  4. Content blocking must adhere to due process under Section 69A of the Information Technology Act, 2000 (IT Act), per Shreya Singhal v. Union of India (2015).[7]
  5. The Fact Check Unit (FCU) Rule was struck down by the Bombay High Court in September 2024 as unconstitutional.[8]
  6. Since 1 July 2024, India’s criminal and evidence laws have been modernised: the BNS replaces the Indian Penal Code (IPC), the Bharatiya Nagarik Suraksha Sanhita (BNSS) replaces the Criminal Procedure Code (CrPC), and the Bharatiya Sakshya Adhiniyam (BSA) succeeds the Evidence Act.

A.  Why Deepfakes Matter in India

Deepfakes pose existential threats to democratic integrity, societal cohesion, and individual dignity, amplified by India’s vast digital populace and diverse socio-cultural fabric.

1.  Democratic Harm

During electoral cycles, synthetic clips can skew public discourse, precipitate unwarranted content removals, or incite communal unrest. Judicial authorities emphasise that speech restrictions must align with Article 19(2) [Protection of certain rights regarding freedom of speech, etc.] of the Constitution[9], covering sovereignty, public order, defamation, and decency. Deepfakes amplify disinformation, erode confidence in digital media, and facilitate propaganda exploiting cultural or regional biases, with profound social ramifications. Ethically, AI systems trained on imbalanced datasets perpetuate inequities, raising concerns of algorithmic bias. In 2025, fabricated videos during state elections depicting political figures inciting violence prompted swift interventions by the Indian Cyber Crime Coordination Centre (I4C), underscoring risks to democratic processes.

2.  Gendered Abuse

Non-consensual sexual deepfakes disproportionately target women, necessitating expeditious removal under the IT Rules 2021, with non-compliance risking loss of safe harbour. Unauthorised scraping of biometric data, such as facial imagery, violates data protection tenets and enables sextortion and cyberbullying, dimensions fraught with privacy and ethical implications. Socially, such deepfakes perpetuate gender-based violence in virtual realms and inflict psychological trauma. The National Cyber Crime Reporting Portal records deepfake complaints against women in 2025[10], and over 90% of deepfakes globally are estimated to be pornographic, a trend mirrored in India[11]. Parliamentary reports urge intersectional interventions for marginalised communities.

3.  Platform Accountability

Intermediary immunity hinges on rigorous “due diligence”, including prompt content removal and grievance redressal. Platforms must integrate privacy engineering tools such as metadata watermarking and AI-driven detection to ensure provenance, balancing utility and protection via game-theoretic frameworks. The CERT-In November 2024 advisory[12] advocates multi-factor authentication and AI detection apps to counter social engineering, with trials ongoing in 2025.

4.  Security, Economic, and Cultural Risks

Deepfakes threaten national security by enabling propaganda or impersonation of officials, imperilling diplomatic relations. Economically, voice deepfakes facilitate fraudulent transactions, posing risks to banking and e-governance. Culturally, in India’s multilingual milieu, manipulated content can inflame communal tensions and hate speech, precipitating violence and eroding societal trust.

B.  Constitutional and Statutory Spine

India’s response to deepfakes rests on a robust constitutional and statutory edifice mandating proportionality, necessity, and fairness.

1.  Shreya Singhal v. Union of India (2015)[13]

The Supreme Court invalidated Section 66A of the IT Act for vagueness but upheld Section 69A’s blocking framework for its procedural safeguards aligned with Article 19(2). This illuminates ethical tensions in algorithmic accountability, ensuring that speech is not restricted without due process, a principle spanning the ethical and regulatory spheres.

2.  Section 69A IT Act + 2009 Blocking Rules[14]

The Centre may block information for security or public order, with recorded reasons and committee oversight, reaffirmed post-Shreya Singhal. Privacy engineering via Data Protection by Design (DPbD) anonymises data flows to prevent leaks.[15]

3.  Section 79 Safe Harbour + IT Rules, 2021

Immunity requires due diligence: user guidelines, disabling illicit content, and grievance systems. Non-compliance invokes liability under Rule 7 [Non-observance of rules] of the IT Rules 2021. Ethically, this probes platform stewardship in ‘data privacy games’, ensuring impartial moderation to forestall biases.

4.  Emerging Contest on Content Takedowns

X Corp challenges Section 79(3)(b) [Exemption from liability of intermediary in certain cases.] misuse for removals, arguing that only Section 69A [Power to issue directions for blocking for public access of any information through any computer resource.] provides safeguards. This Karnataka High Court case underscores deepfake governance: takedowns must follow due process. Global frameworks like the EU AI Act may inform trans-jurisdictional oversight.

C.  Integration with DPDPA 2023 and BNS 2023

The DPDP Act 2023 [Section 6. Consent] mandates consent for personal data processing, classifying non-consensual deepfake use as a breach attracting fines of up to ₹250 crore, complementing Section 79 via data minimisation and fiduciary duties. The BNS 2023, effective from 1 July 2024, modernises offences: Section 353 [Statements conducing to public mischief] penalises misinformation threatening public order (up to 3 years’ imprisonment); Section 111 [Organised crime] targets organised cybercrimes; Section 319 [Cheating by personation] addresses personation; Section 336 [Forgery] covers electronic forgery; and Section 356 [Defamation] extends to synthetic media defamation. Section 67 [Punishment for publishing or transmitting obscene material in electronic form] of the IT Act persists for obscene content. Section 63 [Admissibility of electronic records] of the BSA 2023[16] requires authentication certificates for electronic records, including hash and source verification, critical for the admissibility of deepfake evidence. These provisions address privacy paradoxes, bolstering ethical and technological defences.
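The hash-and-source verification underpinning Section 63 can be pictured in code. The Python sketch below computes a SHA-256 digest of an electronic record and packages it with a timestamp; note that the Section 63 certificate itself has a statutorily prescribed form, so this is only an illustrative integrity check, with the function and field names hypothetical.

```python
import hashlib
from datetime import datetime, timezone

def hash_evidence_file(path: str, algorithm: str = "sha256") -> dict:
    """Compute a cryptographic digest of an electronic record and return
    a simple integrity record. Illustrative only: the BSA Section 63
    certificate has a statutorily prescribed form that this does not follow."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large video files do not exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return {
        "file": path,
        "algorithm": algorithm,
        "digest": h.hexdigest(),
        "computed_at": datetime.now(timezone.utc).isoformat(),
    }
```

Recomputing the digest later and comparing it with the recorded value demonstrates that the file has not been altered since capture, which is the practical substance of hash verification.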

Statute | Provisions vis-à-vis Deepfakes | Penalties | Scope
IT Act (Sections 66E, 67, 79) | Non-consensual explicit material, safe harbour loss | 3–5 years’ imprisonment, fines | Obscenity, voyeurism, intermediary liability
DPDPA 2023 (Sections 6, 33) | Consent for data (e.g., biometrics), breaches | Fines up to ₹250 crore | Data processing, fiduciary breaches
BNS 2023 (Sections 111, 319, 336, 353, 356) | Cybercrimes, personation, forgery, misinformation, defamation | Up to 3–5 years’ imprisonment, fines | Fraud, incitement, synthetic media
BSA 2023 (Section 63) | Authentication certificate for records | Evidence rejection if non-compliant | Digital proof admissibility

D.  Regulatory Push on Deepfakes

Regulatory efforts have intensified, operationalising proactive measures.

a.  November 2023 Advisory[17]

MeitY mandated platforms to remove deepfakes within 36 hours, framing them as rights violations, especially for women, risking safe harbour loss. This addresses gendered harms but risks overreach (social dimension).

b.  March 2024 Advisory[18]

Platforms must embed persistent labels/metadata in synthetic content for originator tracing, with compliance reports within 15 days. This aligns with privacy by design, advocating differential privacy against re-identification.
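The advisory’s requirement of persistent, machine-readable labels can be pictured with a minimal sketch. In practice the standard route is an embedded C2PA manifest; the sidecar JSON record below is a deliberately simplified, hypothetical schema (all field names here are illustrative, not a prescribed format) showing the kind of provenance information such a label carries, including the originator identifier the advisory envisages for tracing.

```python
import json
from datetime import datetime, timezone

def make_synthetic_media_label(asset_id: str, generator: str,
                               originator_id: str) -> str:
    """Build a machine-readable provenance label for AI-generated media.
    Hypothetical schema for illustration; real deployments would follow
    a standard such as C2PA rather than this ad hoc format."""
    label = {
        "asset_id": asset_id,
        "ai_generated": True,           # explicit synthetic-media disclosure
        "generator": generator,         # tool or model used to create it
        "originator_id": originator_id, # supports first-originator tracing
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)
```

A sidecar file like this can be stripped from the asset, which is why the advisory and C2PA favour labels embedded in, or cryptographically bound to, the media itself.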

c.  CERT-In November 2024 Advisory[19]

Focussing on fraud, it recommends AI/ML detection tools by C-DAC, source verification, watermarking, and C2PA adoption, with 2025 trials ongoing.

d.  ECI Directives (May 2024)[20]

Political parties must remove deepfake posts within 3 hours during MCC enforcement, prohibiting AI misuse for misinformation.

Advisory Chronology | Mandates | Issuer
November 2023 | Remove deepfakes within 36 hours | MeitY
March 2024 | Labels/metadata, compliance reports | MeitY
May 2024 | Remove deepfakes within 3 hours (elections) | ECI
November 2024 | Detection tools, C2PA, user safeguards | CERT-In

Advisories crystallise IT Rules diligence, making identification, labelling, and takedowns integral to safe harbour. Ethically, they avert ‘consent paradoxes.’ The August 2025 Parliamentary Committee urges explicit deepfake prohibitions.[21]

E.  FCU and Fact-Checking: The Courts Draw Lines

The 2023 FCU amendment enabling government fact-checking was stayed by the Supreme Court (21 March 2024)[22] and struck down by the Bombay High Court (26 September 2024) as unconstitutional: overbroad, vague, and chilling to speech. The ruling mandates judicial oversight in moderation, ensuring deepfake countermeasures are calibrated, rule-governed, and reviewable, and guards against a state monopoly on truth while addressing electoral harms. This precedent shapes 2025 jurisprudence.

F.  Key Litigation Over Takedown Architecture

1.  X Corp v. Union of India (Karnataka High Court)

X Corp challenges the misuse of Section 79(3)(b), arguing that only Section 69A authorises blocking. The matter remains sub judice as of September 2025; at the July hearings, the Centre defended the SAHYOG Portal as flagging rather than ordering content removal, and no interim relief was granted.[23] The Centre asserts powers covering decency, defamation, and contempt. The verdict will recalibrate deepfake takedowns, with the GDPR’s ‘right to be forgotten’ potentially informing erasure mandates.[24]

2.   Delhi High Court Rulings

  1. May 2025: John Doe injunction for Ankur Warikoo against AI/deepfake fraud.[25]
  2. May/June 2025: Dynamic+ injunction for Sadhguru, deeming deepfakes privacy violations.[26]
  3. July 2025: Ordered Meta and X to remove AI-generated obscene content targeting an influencer.[27] These rulings reflect judicial alacrity against gendered harms while upholding due process.

G.  Criminal Law and Civil Remedies

Extant laws address deepfakes, bolstered by BNSS e-FIRs and digital summons.[28]

a.  Defamation and Reputation

BNS Section 356 enables criminal/civil defamation suits for synthetic media, requiring forensic logs. Ethically, this addresses distortions but raises bias concerns.

b.  Sexualised Deepfakes

IT Act Section 67 and BNS provisions tackle non-consensual depictions, with advisories prioritising women’s safety. This combats digital exploitation, highlighting privacy paradoxes.

c.   Impersonation and Fraud

BNS Sections 111, 319, and 336 cover fraud and personation, with I4C noting a case surge in 2025[29]. Differential privacy could preempt fraud.[30]

d.  Platform Exposure

Safe harbour loss intersects with DPDPA assessments; C2PA bridges norms.

H.  Intermediary Due Diligence

Platforms must:

  1. Forge Detection/Provenance: Use ML, watermarking, and C2PA metadata; game-theoretic incentives aid reporting.
  2. Label AI Media: Conspicuous, persistent disclosures mitigate deception.
  3. Refine SLAs: Remove content within 36 hours (3 hours in elections); community moderation counters virality.
  4. Support Victims: Archive logs, provide FIR guidance, and offer legal/psychosocial aid.
  5. Ensure Transparency: The Grievance Appellate Committee (GAC) framework mandates 24–72 hour reports on takedowns.

August 2025 parliamentary panel recommendations urge mandatory watermarking and CERT-In monitoring.[31]

I.  Victim’s Roadmap

Victims should:

  1. Secure Evidence: Save URLs, screenshots, hashes; use provenance tools.
  2. Lodge Complaints: Use cybercrime.gov.in, helpline 1930, or FIRs, citing defamation or obscenity; collective reporting amplifies impact.
  3. Notify Platforms: Engage reporting channels with URLs and hashes.
  4. Seek Relief: Pursue injunctions and de-indexing.
  5. Invoke CERT-In/I4C: Escalate systemic campaigns; join advocacy networks.
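Step 1’s evidence preservation benefits from a tamper-evident record. A minimal sketch, assuming a simple JSON-lines log file: each entry embeds the SHA-256 of the previous line, so later alteration of earlier entries becomes detectable. The function and field names are hypothetical, not a prescribed standard.

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def append_evidence(log_path: str, url: str, note: str = "") -> dict:
    """Append a tamper-evident entry to a JSON-lines evidence log.
    Each entry records the SHA-256 of the previous log line, forming a
    hash chain. Illustrative sketch only, not a prescribed format."""
    prev_digest = "0" * 64  # sentinel for the first entry
    if os.path.exists(log_path):
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
        if lines:
            prev_digest = hashlib.sha256(lines[-1]).hexdigest()
    entry = {
        "url": url,
        "note": note,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "prev": prev_digest,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each line commits to the one before it, editing or deleting an earlier entry breaks the chain of digests, which strengthens the log’s value when produced alongside complaints or FIRs.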

J.  Public Authorities and Law Enforcement

Blocking follows Section 69A; investigations use BNSS e-FIRs; evidence requires BSA Section 63 certificates. CERT-In and ECI coordinate systemic and electoral responses.

K.  Public Law Guardrails

Speech restraints must honour Article 19(2), with recorded reasons. The FCU nullification precludes state truth monopolies. Platforms’ duties persist, embedding differential privacy. Delhi High Court rulings ensure proportionality.

L.  Policy Gaps and Priorities

  1. Bespoke Offence: Criminalise non-consensual deepfakes with mens rea, exempting parody; align with GDPR data minimisation.
  2. Provenance at Scale: Mandate watermarking and co-regulatory codes.
  3. Expedited Pathways: Fast-track tribunals for elections and gendered harms.
  4. Coordination: I4C/SAHYOG SOPs and EU AI Act harmonisation.

TRAI’s AI recommendations, under government review, urge the establishment of a dedicated AI authority and explicit laws.[32]

M.  International Comparisons

India’s labelling mirrors the EU AI Act; judicial review aligns with Shreya Singhal, unlike Singapore’s POFMA[33]; C2PA adoption is mandated, unlike the US’s voluntary approach[34].

N.  What to Watch in 2025 and Beyond

  1. Karnataka High Court: Pending verdict on Section 79 vs. 69A.
  2. MeitY/CERT-In: Guidance on watermarking and AI models.
  3. AI Litigation: Tests safe harbour for platform-generated content.
  4. Legislation: AIDAI or analogous frameworks for deepfake penalties.

O.  Bottom Line

India’s judiciary mandates platform alacrity against deepfakes while upholding due process. A stratified regime of Section 69A takedowns, IT Rules diligence, evidence-based prosecution, and provenance technologies balances empowerment against deception. Ethically, prioritise algorithmic equity; socially, digital literacy; technologically, privacy engineering; regulatorily, global standards.

(i) https://m.economictimes.com/industry/services/retail/only-20-25-of-indias-850-mn-internet-users-shop-online-shows-untapped-potential-mckinsey-report/articleshow/122944287.cms

(ii) https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf

(iii) https://www.indiacode.nic.in/bitstream/123456789/20062/1/a2023-45.pdf

(iv) https://www.drishtiias.com/daily-updates/daily-news-editorials/deepfakes-in-elections-challenges-and-mitigation

(v) https://m.thewire.in/article/tech/india-needs-a-law-to-govern-generative-ai-but-a-blanket-ban-wont-work/amp?utm=relatedarticles

(vi) https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=2019760

(vii) Shreya Singhal v. Union of India, (2015) 5 SCC 1.

(viii) https://www.nishithdesai.com/NewsDetails/15155

(ix) https://www.mea.gov.in/images/pdf1/part3.pdf

(x) https://www.pib.gov.in/PressReleasePage.aspx?PRID=2158408

(xi) https://www.seejph.com/index.php/seejph/article/download/4885/3228/7426

(xii) https://www.cert-in.org.in/s2cMainServlet?pageid=PUBVLNOTES02&VLCODE=CIAD-2024-0060

(xiii) Shreya Singhal v. Union of India, (2015) 5 SCC 1.

(xiv) https://www.meity.gov.in/static/uploads/2024/10/91f628cb778f94e76df356bc3fd3ac60.pdf

(xv) https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4A4579B8FD774F7CDF8A1867A839B5FB/S2632324920000012a.pdf/div-class-title-data-protection-by-design-building-the-foundations-of-trustworthy-data-sharing-div.pdf

(xvi) https://www.mha.gov.in/sites/default/files/2024-04/250882_english_01042024_0.pdf

(xvii) https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542

(xviii)  https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf

(xix)  https://www.cert-in.org.in/s2cMainServlet?pageid=PUBVLNOTES02&VLCODE=CIAD-2024-0060

(xx) https://elections24.eci.gov.in/docs/2eJLyv9x2w.pdf

(xxi) https://www.medianama.com/2025/08/223-parliamentary-committee-deepfake-rules/

(xxii) https://internetfreedom.in/sc-stays-notification-constituting-fcu/

(xxiii)  https://www.business-standard.com/india-news/supreme-court-karnataka-hc-x-twitter-sahyog-portal-censorship-case-125040301122_1.html

(xxiv) https://rgpd.com/gdpr/chapter-3-rights-of-the-data-subject/article-17-right-to-erasure-right-to-be-forgotten/

(xxv) https://s3.courtbook.in/2025/05/delhi-high-court-stops-circulation-of-deepfake-videos-of-youtuber-ankur-warikoo-with-john-doe-order.pdf

(xxvi) https://www.sonisvision.in/blogs/personality%20right%20trademark%20Dynamik%20INjunction

(xxvii) https://www.medianama.com/2025/07/223-delhi-hc-meta-x-ai-generated-porn-social-media-influencer/

(xxviii) https://lawbeat.in/news-updates/delhi-notifies-bnss-rules-2025-whatsapp-and-email-now-valid-for-serving-court-summons-and-warrants-1516120

(xxix) https://www.pib.gov.in/PressReleasePage.aspx?PRID=2154268

(xxx) https://sansad.in/getFile/debatestextmk/18/IV/26.03.2025.pdf?source=loksabhadocs

(xxxi) https://www.medianama.com/2025/08/223-parliamentary-committee-deepfake-rules/

(xxxii) https://telecom.economictimes.indiatimes.com/news/policy/trais-ai-recommendations-for-trustworthy-technology-under-government-review/123464422

(xxxiii)  https://www.pofmaoffice.gov.sg/regulations/protection-from-online-falsehoods-and-manipulation-act/

(xxxiv)  https://c2pa.org/

(This article has been written by Tanmaya Nirmal, TAU, National e-Governance Division. For any comments or feedback, please write to tanmaya.nirmal@digitalindia.gov.in  and negdcb@digitalindia.gov.in)

Disclaimer

The views and opinions expressed in this blog are those of the author(s) and do not necessarily reflect the official policy or position of NeGD.