
Technology Law Analysis


Comments to the Government on the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025

February 13, 2026

 

To,

The Ministry of Electronics and Information Technology

Government of India

 

Subject: Comments on the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025

Dear Sir / Madam,

At the outset, we at Nishith Desai Associates express our sincere appreciation to the Ministry of Electronics and Information Technology (“MeitY”) for its active engagement with the industry and for this opportunity to provide our comments and suggestions on the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 relating to synthetically generated information (“Draft SGI Amendments”). 

We recognize and appreciate the Government’s intent to address the misuse of synthetically generated information, particularly the spread of misinformation, manipulation of elections, financial fraud, and reputational damage. In line with this objective, it is crucial that the Draft SGI Amendments are drafted with the necessary clarity to avoid ambiguity or scope for multiple interpretations. Clear and consistent provisions will not only promote regulatory certainty but also strengthen India’s position as a jurisdiction aligned with global best practices, especially in advancing business-friendly digital governance. Conversely, ambiguities in the framework could: (i) lead to inconsistent interpretations and compliance practices among intermediaries and regulators; (ii) prompt affected stakeholders to seek judicial intervention on differing grounds, thereby adding to the courts’ caseload; and (iii) necessitate frequent policy interventions and clarifications from the Government, among other consequences.

With reference to our concerns raised above, we have provided our specific rule-wise comments and suggestions on the Draft SGI Amendments.

 

Rule

Suggestions

Rationale

General Comments on the Draft SGI Amendments

  1. The Draft SGI Amendments should ensure that the compliance requirements for intermediaries are proportionate to the harms intended to be addressed.

  2. Regulatory focus should therefore also be directed towards bad actors who misuse AI tools to create or disseminate malicious content, ensuring that the onus does not fall entirely on intermediaries and that obligations are distributed evenly.

Disproportionate Compliance Burdens on Intermediaries

  1. The Draft SGI Amendments impose extensive compliance obligations on intermediaries, which appear to be onerous, technically difficult to implement, and not commensurate with the harms intended to be addressed (i.e., harms arising from misinformation, manipulation of elections, financial fraud, reputational damage, and other unlawful content). 

  2. Rather, the Draft SGI Amendments may result in an overreaching framework that: (i) imposes onerous compliance burdens; (ii) increases compliance costs and the risks of non-compliance for intermediaries; and (iii) potentially distorts user experience, in a manner that may be disproportionate to the actual extent of the harm sought to be addressed, and that may affect the intermediaries’ limited role as conduits of user-generated content.

  3. Pertinently, existing laws [namely, the Information Technology Act, 2000 (“IT Act”), the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”) and the Bharatiya Nyaya Sanhita, 2023 (“BNS”)] [1] already provide adequate safeguards to prevent and address harms arising from deepfakes, misinformation, and other unlawful content.

  4. Accordingly, the focus should be on imposing obligations commensurate with the risk intended to be addressed, and also on ensuring that there are obligations and corresponding liability for other stakeholders involved in the creation and dissemination of malicious content and deepfakes that cause detriment to affected persons (such as the users of the platform, persons being impersonated, etc.).

Rule 2(1)(wa)

  1. The definition of SGI may be amended to only include images, audio, and video content and exclude text content.

  2. The definition of SGI may be modified to recognise that SGI may also be used for purposes which may not be harmful, such as the use of routine image / audio filters, format conversions, and minor editorial adjustments. The treatment of SGI (i.e., the manner of labelling or embedding SGI with permanent unique metadata or identifiers) should vary depending upon the risks or harms posed by such SGI. Therefore, the definition of SGI should account for the same (also see our recommendations at S.No. 3 below).

  3. An explanation clarifying the meaning and scope of ‘artificially’ and ‘algorithmically’ generated or altered information may be provided to bring clarity and reduce the scope of information that may be classified as SGI. It may be clarified whether the definition of SGI would cover content generated or modified using AI-based tools or algorithms.

  4. The Draft SGI Amendments may also provide for metrics to determine whether information ‘reasonably appears to be authentic or true’.

Broad and Ambiguous Scope

  1. Extending the definition of SGI to include any information that is ‘artificially’ or ‘algorithmically’ created, generated, modified, or altered using computer resources risks capturing:

    1. ‘Artificially’ or ‘algorithmically’ generated or altered content that does not use artificial intelligence. Illustratively:

      1. Users creating carousel posts or multi-image posts using tools which use algorithms to arrange the photos and maximize engagement.

      2. User-created, data-driven posts generated using spreadsheet formulas, programming scripts, etc.

    2. Minimal or harmless modifications made using AI tools that lack intent to deceive or misinform the users, such as image filters, lighting adjustments, or AI-assisted editing tools.

  2. Since it appears that the intent is to regulate content that has been generated or modified using AI tools, explanations or illustrations may be added in the Draft SGI Amendments to clarify that only AI-generated content falls under ‘artificially’ or ‘algorithmically’ generated or altered content.

  3. Text should be excluded from the definition of SGI for the following reasons:

    1. It appears that the intent of the Draft SGI Amendments is to address deepfakes and other synthetic media that visually or audibly impersonate or distort reality and may consequently result in reputational damage, misinformation etc. These harms largely arise from manipulated audio, video, and images rather than text.

    2. While text-based content is capable of spreading misinformation, it does not have the same impact as audio, video or images.

    3. Extending the definition to text could unintentionally cover a vast amount of automated, AI-assisted written content that may be innocuous, such as AI-generated summaries, and may impose undue compliance burdens on ‘significant social media intermediaries’ (“SSMIs”).

  4. Treating all synthetically generated content uniformly risks discouraging innovation and increasing compliance burdens. This may also undermine the purpose of labelling such content, which is to flag truly deceptive or harmful material for the users’ benefit. Uniform labelling requirements for all kinds of SGI, regardless of whether the SGI may pose actual risks of harm could lead to user fatigue and desensitization with respect to such labels, diminishing the effectiveness of such warnings. This could dilute the overall intent of the Draft SGI Amendments.

  5. Reduced compliance burdens for uses of computer resources to create, generate, modify or alter information which do not cause any harm would ensure that the focus of the Draft SGI Amendments remains on curbing the harms arising from misinformation, manipulation of elections, financial fraud, reputational damage, and other unlawful content, as intended.

Providing metrics to determine whether information ‘reasonably appears to be authentic or true’ would reduce subjectivity in the determination of SGI

  1. The reliance on whether content ‘reasonably appears to be authentic or true’ to determine whether it is SGI introduces subjectivity in such determination. Since the definition depends on whether content ‘reasonably appears’ to be authentic or true, it allows scope for subjective interpretations and the risk of misclassification based on perception rather than fact.

  2. While clear-cut parodies may not constitute SGI, satire, adaptations, re-enactments and embellished content may or may not be SGI, since the determination as to whether such content ‘reasonably appears to be authentic or true’ may vary from person to person. Accordingly, the onus to determine whether content ‘reasonably appears to be authentic or true’ would fall on the SSMIs, who are required to verify user declarations.

  3. Further, by classifying (and labelling) content as SGI, based on the test of whether such information ‘reasonably appears to be authentic or true’, it may portray a sense of veracity or truthfulness to the user when in reality, it may not be verified or fact-checked.

  4. The inclusion of certain objective metrics, such as requiring SSMIs to look for model-generated metadata or verifiable technical signatures, to determine whether content constitutes SGI could help reduce this subjectivity. This approach would first require separate regulations mandating that platforms (including Gen AI platforms) which offer users tools to create or modify content automatically embed model-generated metadata or provenance information for AI-generated content. SSMIs could then verify such metadata to determine whether information qualifies as SGI, ensuring a more objective process rather than relying solely on perceived authenticity.
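As a purely illustrative sketch of this metadata-based verification, the check an SSMI might perform could look as follows. The field names ("provenance", "ai_generated", "model") are our assumptions for illustration only; they are not drawn from the Draft SGI Amendments or from any existing metadata standard.

```python
# Hypothetical sketch: an SSMI checks embedded provenance metadata to
# decide whether an upload qualifies as SGI, instead of judging how
# authentic the content "reasonably appears". All field names here are
# illustrative assumptions, not prescribed by any rule or standard.

def looks_like_sgi(metadata: dict) -> bool:
    """Return True if the embedded metadata carries an AI-provenance signal."""
    provenance = metadata.get("provenance", {})
    return bool(provenance.get("ai_generated", False))

# A generation platform would embed this at creation time; the SSMI
# then verifies it on upload.
upload_with_provenance = {"provenance": {"ai_generated": True, "model": "example-model"}}
plain_upload = {}

print(looks_like_sgi(upload_with_provenance))  # True
print(looks_like_sgi(plain_upload))            # False
```

Under such an approach, the determination turns on a verifiable signal embedded at creation time rather than on how authentic the content appears to a viewer.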

Rule 3(3)(a)

  1. Rule 3(3)(a) should apply only to Social Media Intermediaries. Accordingly, the term ‘facilitate’ may be omitted from Rule 3(3)(a). Alternatively, Rule 3(3)(a) should narrowly define the intermediaries that may be considered to be ‘enabling, permitting or facilitating’ the creation of SGI content using their computer resource.

  2. The blanket requirement for labelling or embedding SGI with a permanent unique metadata or identifier should be replaced with a risk-based approach.

  3. Accordingly, before users post any content which has been generated or modified by a computer resource offered by the intermediary, the user may be required to self-declare whether: (i) the SGI content is high-risk; or (ii) the SGI content is low-risk.

  4. The Draft SGI Amendments may provide a list of high-risk SGI content (such as content targeting public figures, content relating to elections, content which may induce users to engage in financial transactions, news content) [2] and low-risk SGI content (where AI has been used for cosmetic changes to user images, noise reduction in videos, improving clarity of photos, etc.). [3]

  5. The Draft SGI Amendments should allow flexibility for the relevant intermediaries to affix prominent labels for high-risk content, and less prominent labels for low-risk content. The Draft SGI Amendments may also clarify that all content (SGI or otherwise) should be embedded with metadata to ensure traceability of users posting the content, so that liability can be affixed in the event of any violation of law. Obtaining user self-declarations, affixing appropriate labels / identifiers, and embedding metadata into the content can be the obligation of the intermediary.

  6. Further, rather than prescribing a fixed 10% label / identifier requirement, the Draft SGI Amendments may be amended to stipulate that the label / identifier should be prominent, which will adequately address the intent.

  7. The Draft SGI Amendments should allow for intermediaries to have a grace period to develop and introduce such technical measures to label and embed metadata into content.
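The self-declaration and risk-based labelling flow suggested above can be sketched in a few lines. This is a hypothetical illustration only; the risk categories, label styles, and function names are our assumptions, not anything prescribed by the Draft SGI Amendments.

```python
# Illustrative sketch of the proposed risk-based flow: the user
# self-declares a risk level, the intermediary chooses label prominence
# accordingly, and traceability metadata is embedded in all content
# regardless of risk. All names here are assumptions for illustration.

def label_style(declared_risk: str) -> str:
    """Map a user's self-declaration to a label prominence level."""
    return "prominent" if declared_risk == "high" else "subtle"

def process_upload(content_id: str, declared_risk: str) -> dict:
    return {
        "content_id": content_id,
        "label": label_style(declared_risk),
        # Embedded for all content, so users remain traceable even if
        # they self-declare incorrectly.
        "metadata_embedded": True,
    }

print(process_upload("clip-001", "high"))   # prominent label
print(process_upload("photo-042", "low"))   # subtle label
```

The design point is that label prominence varies with declared risk, while traceability metadata does not.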

Wide Applicability

  1. The words ‘enable, permit, and facilitate’ could apply to any intermediary engaged in the process of generating and transmitting content and may inadvertently cover intermediaries such as cloud storage systems, web hosting servers, content delivery networks, app stores, etc. that are not actively engaged in providing a computer resource for the creation of SGI. Accordingly, the word ‘facilitate’ should be deleted, or it should be clarified that this requirement is only applicable to Social Media Intermediaries, to ensure alignment with what appears to be the intention of the Draft SGI Amendments.

Risk Based Approach

  1. Ensuring that prominent labels are affixed to high-risk content will ensure that the intent of the Draft SGI Amendments, i.e., preventing content from being weaponised to spread misinformation, damage reputations, manipulate elections, or commit financial fraud, is appropriately addressed. Imposing similar labelling requirements on all forms of content may desensitize users to such labels and defeat the purpose of labelling content, i.e., ensuring that users are aware of SGI content that has the potential to cause these harms.

  2. By requiring users to self-declare the nature of the content (i.e., high-risk or low-risk), the obligations are more appropriately distributed rather than falling squarely on intermediaries. Further, enabling metadata to be embedded for all content will allow for traceability of users in the event they incorrectly self-declare content. This will allow authorities to take action where such content violates applicable law, or, in less serious instances, intermediaries to take corrective action (such as disabling the user’s access to the intermediary platform, permanently banning the user, or any other actions per the intermediaries’ policies).

  3. The prescribed 10% label / identifier requirement appears to be arbitrary and may not adequately convey that the content is SGI, depending on the nature of the content. For example, for an 8-second audio clip that is SGI, the label / identifier will be required to be only 0.8 seconds long, which will not adequately convey that the audio content is SGI. Alternatively, if an audio clip is 10 minutes long, the label / identifier will need to be 1 minute long, which may be excessive.

  4. Per the above, applying a uniform threshold across varying lengths and formats of content may be impractical and could disrupt user experience and may desensitize the users to such identifiers / labels. Accordingly, having a general requirement to ensure that the content has a prominent label / identifier will better address the intent of the Draft SGI Amendments.
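The arithmetic behind these examples is straightforward; a minimal sketch (illustrative only, with an assumed helper function name) makes the linear scaling of the fixed 10% threshold explicit:

```python
# Sketch of the fixed 10% duration threshold discussed above: the
# required label length scales linearly with clip length, producing
# labels that are arguably too short for brief clips and excessive
# for long ones.

def label_duration(clip_seconds: float, fraction: float = 0.10) -> float:
    """Label duration (in seconds) implied by a fixed percentage threshold."""
    return clip_seconds * fraction

print(label_duration(8))    # 0.8 s for an 8-second clip
print(label_duration(600))  # 60 s for a 10-minute clip
```

A prominence-based requirement, by contrast, would decouple the label from clip length entirely.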

Grace Period

  1. Intermediaries will need adequate time to ensure that they are able to deploy tools which will enable them to label / add identifiers to SGI content, to meaningfully address their obligations.

Rule 4(1A)

  1. The rule may be amended to clarify the ‘reasonable and appropriate technical measures’ to be implemented by SSMIs for verification of user declarations.

  2. To ensure feasibility and an appropriate distribution of responsibilities, SSMIs should be required to additionally provide users the option to self-declare whether the SGI content is high-risk (i.e., content targeting public figures, content relating to elections, content which may induce users to engage in financial transactions, news content) [4] or low-risk (content where AI has been used for cosmetic changes to user images, noise reduction in videos, improving clarity of photos, etc.). [5] The Draft SGI Amendments should require SSMIs to review only content which is declared by users as high-risk, rather than all SGI content.

  3. The rule should also be amended to clarify that SSMIs will be deemed to have failed their due diligence obligations if they do not promptly label / remove SGI content upon receiving actual knowledge through a court order or government notification, rather than when the SSMI ‘becomes aware’ of SGI or it is otherwise established that the SSMI knowingly permitted, promoted, or failed to act upon SGI.

  4. The Draft SGI Amendments should allow for SSMIs to have a grace period to develop and introduce such reasonable and technical measures to obtain and verify self-declarations of the users.

Clarity on reasonable and appropriate technical measures

  1. Since the Draft SGI Amendments do not specify what constitutes reasonable and appropriate technical measures, different SSMIs may implement differing measures, which may lead to difficulty in enforcement.

  2. Requiring users to self-declare content as high-risk or low-risk, and limiting the obligation of the SSMI to verifying self-declarations only for high-risk content, will ensure a more appropriate distribution of responsibility while still ensuring that SGI content does not harm users of the SSMIs.

Loss of safe harbour

  1. Reliably detecting SGI on such a large scale poses technical challenges and such detection may not be entirely accurate. While the proviso to Rule 3(1)(b) clarifies that removal of SGI content by intermediaries while exercising reasonable efforts will not result in a loss of safe harbour, failure to detect SGI content may lead to a loss of safe harbour.

  2. Given the technical challenges for SSMIs in detecting SGI content, we recommend that the threshold for loss of safe harbour be amended to the failure of the SSMIs to label / remove SGI content upon receiving actual knowledge through a court order or government notification. Further, if the threshold is not amended, SSMIs may adopt a conservative approach of over-labelling SGI content, which will lead to user desensitization to such labelling and consequently reduce the impact of the Draft SGI Amendments.

Grace Period

  1. SSMIs will need adequate time to ensure that they are able to deploy ‘reasonable and appropriate’ tools which will enable them to detect and label SGI content, to meaningfully address their obligations.

 

Tech Team
You can direct your queries or comments to the author.

[1] Illustratively, under the BNS, Section 353 criminalizes the making, publishing, or circulation of false information, statements, rumours, or reports (including in electronic form) with intent or likelihood of causing public harm, promoting enmity, or disturbing public tranquillity; Section 356 penalizes defamation; Section 197(1)(d) penalizes the creation or publication of false or misleading information prejudicial to national integration, sovereignty, or security; and Section 294 addresses offences relating to obscenity.

Under the IT Act, Section 66-D prescribes punishment for cheating by personation using a computer resource, while Sections 67, 67A, and 67B provide for punishment for transmitting obscene material, material containing sexually explicit acts, and material depicting children in sexually explicit acts, respectively in electronic form.

Under the IT Rules, 2021, Rule 3(1)(b) requires intermediaries to make reasonable efforts by themselves, and to cause their users to not host, display, upload, modify, publish, transmit, or share information that is false, misleading, or unlawful.

[2] This is only illustrative.

[3] This is only illustrative.

[4] This is only illustrative.

[5] This is only illustrative.

Nishith Desai Associates ©2026 All rights reserved.