General Comments on the Draft SGI Amendments

- The Draft SGI Amendments should ensure that the compliance requirements for intermediaries are proportionate to the harms intended to be addressed.
- Regulatory focus should therefore also be directed towards bad actors who misuse AI tools to create or disseminate malicious content, ensuring that the onus does not fall entirely on intermediaries and that obligations are distributed evenly.
Disproportionate Compliance Burdens on Intermediaries

- The Draft SGI Amendments impose extensive compliance obligations on intermediaries, which appear to be onerous, technically difficult to implement, and not commensurate with the harms intended to be addressed (i.e., harms arising from misinformation, manipulation of elections, financial fraud, reputational damage, and other unlawful content).
- Rather, the Draft SGI Amendments may result in an overreaching framework that: (i) imposes onerous compliance burdens; (ii) increases compliance costs and the risk of non-compliance for intermediaries; and (iii) potentially distorts user experience, in a manner disproportionate to the actual extent of the harm sought to be addressed, and may affect intermediaries’ limited role as conduits of user-generated content.
- Pertinently, existing laws [namely, the Information Technology Act, 2000 (“IT Act”), the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”) and the Bharatiya Nyaya Sanhita, 2023 (“BNS”)]1 already provide adequate safeguards to prevent and address harms arising from deepfakes, misinformation, and other unlawful content.
- Accordingly, the focus should be on imposing obligations commensurate with the risk intended to be addressed, and on ensuring that there are obligations and corresponding liability for the other stakeholders involved in the creation and dissemination of malicious content and deepfakes that cause detriment to affected persons (such as users of the platform, persons being impersonated, etc.).
Rule 2(1)(wa)

- The definition of SGI may be amended to include only images, audio, and video content, and to exclude text content.
- The definition of SGI may be modified to recognise that SGI may also be used for purposes which may not be harmful, such as the use of routine image / audio filters, format conversions, and minor editorial adjustments. The treatment of SGI (i.e., the manner of labelling or embedding SGI with permanent unique metadata or identifiers) should vary depending upon the risks or harms posed by such SGI, and the definition of SGI should account for this (also see our recommendations at S.No. 3 below).
- An explanation clarifying the meaning and scope of ‘artificially’ and ‘algorithmically’ generated or altered information may be provided, to bring clarity and reduce the scope of information that may be classified as SGI. It may also be clarified whether the definition of SGI covers content generated or modified using AI-based tools or algorithms.
- The Draft SGI Amendments may also provide metrics to determine whether information ‘reasonably appears to be authentic or true’.
Broad and Ambiguous Scope

- Extending the definition of SGI to include any information that is ‘artificially’ or ‘algorithmically’ created, generated, modified, or altered using computer resources risks capturing:
  - ‘Artificially’ or ‘algorithmically’ generated or altered content that does not use artificial intelligence. Illustratively:
    - Users creating carousel posts or multi-image posts using tools which use algorithms to arrange the photos and maximise engagement.
    - User-created, data-driven posts that are generated using spreadsheet formulas, programming scripts, etc.
  - Minimal or harmless modifications made using AI tools that lack intent to deceive or misinform users, such as image filters, lighting adjustments, or AI-assisted editing tools.
- Since it appears that the intent is to regulate content that has been generated or modified using AI tools, explanations or illustrations may be added in the Draft SGI Amendments to clarify that only AI-generated content falls under ‘artificially’ or ‘algorithmically’ generated or altered content.
- Text should be excluded from the definition of SGI for the following reasons:
  - It appears that the intent of the Draft SGI Amendments is to address deepfakes and other synthetic media that visually or audibly impersonate or distort reality, and may consequently result in reputational damage, misinformation, etc. These harms largely arise from manipulated audio, video, and images rather than text.
  - While text-based content is capable of spreading misinformation, it does not have the same impact as audio, video, or images.
  - Extending the definition to text could unintentionally cover a vast amount of automated, AI-assisted written content which may be innocuous, such as AI-generated summaries, and may impose undue compliance burdens on ‘significant social media intermediaries’ (“SSMIs”).
- Treating all synthetically generated content uniformly risks discouraging innovation and increasing compliance burdens. It may also undermine the purpose of labelling such content, which is to flag truly deceptive or harmful material for the users’ benefit. Uniform labelling requirements for all kinds of SGI, regardless of whether the SGI poses actual risks of harm, could lead to user fatigue and desensitisation to such labels, diminishing the effectiveness of such warnings and diluting the overall intent of the Draft SGI Amendments.
- Reduced compliance burdens for uses of computer resources to create, generate, modify, or alter information which do not cause any harm would ensure that the focus of the Draft SGI Amendments remains on curbing the harms arising from misinformation, manipulation of elections, financial fraud, reputational damage, and other unlawful content, as intended.
Providing metrics to determine whether information ‘reasonably appears to be authentic or true’ would reduce subjectivity in the determination of SGI

- The reliance on whether content ‘reasonably appears to be authentic or true’ to determine whether it is SGI introduces subjectivity into the determination, allowing scope for divergent interpretations and the risk of misclassification based on perception rather than fact.
- While clear-cut parodies may not constitute SGI, satire, adaptations, re-enactments, and embellished content may or may not be SGI, since the determination as to whether such content ‘reasonably appears to be authentic or true’ may vary from person to person. The onus to make this determination would accordingly be placed on SSMIs, who are required to verify user declarations.
- Further, classifying (and labelling) content as SGI based on the test of whether such information ‘reasonably appears to be authentic or true’ may portray a sense of veracity or truthfulness to the user when, in reality, the content may not have been verified or fact-checked.
- The inclusion of certain objective metrics, such as requiring SSMIs to look for model-generated metadata or verifiable technical signatures, could help reduce this subjectivity. This approach would first require separate regulations mandating that platforms (including Gen AI platforms) which offer users tools to create or modify content automatically embed model-generated metadata or provenance information in AI-generated content. SSMIs could then verify such metadata to determine whether information qualifies as SGI, ensuring a more objective process rather than reliance solely on perceived authenticity.
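The objective test described above could, in principle, operate as follows. The sketch is purely illustrative: the field names (`provenance`, `ai_generated`, `generator`) are hypothetical placeholders, not drawn from the Draft SGI Amendments or from any existing metadata standard, and a real scheme would follow whatever provenance format the separate regulations mandate.

```python
# Illustrative sketch of an SSMI-side objective test for SGI.
# Content is classified as SGI only on the basis of verifiable,
# model-embedded provenance metadata, not perceived authenticity.
# All field names below are hypothetical placeholders.

def is_sgi(content_metadata: dict) -> bool:
    """Return True only where embedded provenance marks the content as AI-generated."""
    provenance = content_metadata.get("provenance")
    if provenance is None:
        # No embedded provenance block: under this objective test,
        # the content is not treated as SGI.
        return False
    return bool(provenance.get("ai_generated"))

# Content carrying model-embedded provenance information
generated = {"provenance": {"generator": "example-model", "ai_generated": True}}
# e.g., an unedited photograph with no provenance block
camera_original = {}

print(is_sgi(generated))        # True
print(is_sgi(camera_original))  # False
```

The design point is that the classification turns on a verifiable technical artefact embedded at the point of creation, rather than on a subjective judgment about whether the content ‘reasonably appears to be authentic or true’.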
Rule 3(3)(a)

- Rule 3(3)(a) should apply only to Social Media Intermediaries. Accordingly, the term ‘facilitate’ may be omitted from Rule 3(3)(a). Alternatively, Rule 3(3)(a) should narrowly define the intermediaries that may be considered to be ‘enabling, permitting or facilitating’ the creation of SGI content using their computer resource.
- The blanket requirement for labelling or embedding SGI with a permanent unique metadata or identifier should be replaced with a risk-based approach.
- Accordingly, before users post any content which has been generated or modified by a computer resource offered by the intermediary, the user may be required to self-declare whether: (i) the SGI content is high-risk; or (ii) the SGI content is low-risk.
- The Draft SGI Amendments may provide a list of high-risk SGI content (such as content targeting public figures, content relating to elections, content which may induce users to engage in financial transactions, news content)2 and low-risk SGI content (where AI has been used for cosmetic changes to user images, noise reduction in videos, improving the clarity of photos, etc.).3
- The Draft SGI Amendments should allow flexibility for the relevant intermediaries to affix prominent labels to high-risk content and less prominent labels to low-risk content. The Draft SGI Amendments may also clarify that all content (SGI or otherwise) should be embedded with metadata to ensure traceability of users posting the content, so as to affix liability in the event of any violation of law. Offering users the option to self-declare, appropriately affixing labels / identifiers, and embedding metadata in the content can be the obligation of the intermediary.
- Further, rather than prescribing a fixed 10% label / identifier requirement, the Draft SGI Amendments may be amended to stipulate that the label / identifier should be prominent, which will adequately address the intent.
- The Draft SGI Amendments should allow intermediaries a grace period to develop and introduce such technical measures to label and embed metadata into content.
Wide Applicability

- The words ‘enable, permit, and facilitate’ could apply to any intermediary engaged in the process of generating and transmitting content, and may inadvertently cover intermediaries such as cloud storage systems, web hosting servers, content delivery networks, app stores, etc. that are not actively engaged in providing a computer resource for the creation of SGI. Accordingly, the word ‘facilitate’ should be deleted, or it should be clarified that this requirement is applicable only to Social Media Intermediaries, to ensure alignment with what appears to be the intention of the Draft SGI Amendments.
Risk-Based Approach

- Ensuring that prominent labels are affixed to high-risk content will ensure that the intent of the Draft SGI Amendments, i.e., to ensure that content is not weaponised to spread misinformation, damage reputations, manipulate elections, or commit financial fraud, is appropriately addressed. Identical labelling requirements for all forms of content may desensitise users to such labels and defeat the purpose of labelling, i.e., ensuring that users are aware of SGI content that has the potential to cause such harms.
- By requiring users to self-declare the nature of the content (i.e., high-risk or low-risk), the obligations are more appropriately distributed rather than falling squarely on intermediaries. Further, embedding metadata in all content will allow for traceability of users in the event they incorrectly self-declare content. This will allow authorities to take action where such content violates applicable law or, in less serious instances, allow intermediaries to take corrective action (such as disabling the user’s access to the intermediary platform, permanently banning the user, or any other actions per the intermediary’s policies).
- The prescribed 10% label / identifier requirement appears to be arbitrary and, depending on the nature of the content, may not adequately convey that the content is SGI. For example, for an 8-second audio clip that is SGI, the label / identifier need only be 0.8 seconds long, which will not adequately convey that the audio content is SGI. Conversely, if an audio clip is 10 minutes long, then the label / identifier will need to be 1 minute long, which may be excessive.
- Per the above, applying a uniform threshold across varying lengths and formats of content may be impractical, could disrupt user experience, and may desensitise users to such identifiers / labels. Accordingly, a general requirement to ensure that the content carries a prominent label / identifier will better address the intent of the Draft SGI Amendments.
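The disproportion described above follows directly from applying a fixed fraction to content of any duration. The sketch below is illustrative only, using the fixed 10% figure from the Draft SGI Amendments and the two example durations discussed above:

```python
# Illustrative arithmetic: label / identifier duration implied by a
# fixed 10% requirement, applied to audio clips of different lengths.

LABEL_FRACTION = 0.10  # the fixed 10% requirement in the Draft SGI Amendments

def label_duration(content_seconds: float) -> float:
    """Return the label duration (in seconds) implied by a fixed 10% rule."""
    return content_seconds * LABEL_FRACTION

# An 8-second clip yields a 0.8-second label (likely too brief to register),
# while a 10-minute clip yields a 60-second label (likely excessive).
print(round(label_duration(8), 1))    # 0.8
print(round(label_duration(600), 1))  # 60.0
```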
Grace Period

- Intermediaries will need adequate time to ensure that they are able to deploy tools which will enable them to label / add identifiers to SGI content, so as to meaningfully address their obligations.
Rule 4(1A)

- The rule may be amended to clarify the ‘reasonable and appropriate technical measures’ to be implemented by SSMIs for verification of user declarations.
- To ensure feasibility and an appropriate distribution of responsibilities, SSMIs should additionally be required to provide users the option to self-declare whether the SGI content is high-risk (i.e., content targeting public figures, content relating to elections, content which may induce users to engage in financial transactions, news content)4 or low-risk (content where AI has been used for cosmetic changes to user images, noise reduction in videos, improving the clarity of photos, etc.).5 The Draft SGI Amendments should require SSMIs to review only content which is declared by users as high-risk, rather than all SGI content.
- The rule should also be amended to clarify that SSMIs will be deemed to have failed their due diligence obligations only if they do not promptly label / remove SGI content upon receiving actual knowledge through a court order or government notification, rather than when the SSMI ‘becomes aware’ of, or it is otherwise established that the SSMI knowingly permitted, promoted, or failed to act upon, SGI.
- The Draft SGI Amendments should allow SSMIs a grace period to develop and introduce such reasonable and appropriate technical measures to obtain and verify the self-declarations of users.
Clarity on reasonable and appropriate technical measures

- Since the Draft SGI Amendments do not specify what constitutes reasonable and appropriate technical measures, different SSMIs may implement differing measures, which may lead to difficulty in enforcement.
- Requiring users to self-declare content as high-risk or low-risk, and limiting the obligation of the SSMI to verifying self-declarations only for high-risk content, will ensure a more appropriate distribution of responsibility so that SGI content does not harm users of the SSMIs.
Loss of safe harbour

- Reliably detecting SGI at such a large scale poses technical challenges, and such detection may not be entirely accurate. While the proviso to Rule 3(1)(b) clarifies that removal of SGI content by intermediaries while exercising reasonable efforts will not result in a loss of safe harbour, failure to detect SGI content may lead to a loss of safe harbour.
- Given the technical challenges for SSMIs in detecting SGI content, we recommend that the threshold for loss of safe harbour be amended to the failure of an SSMI to label / remove SGI content upon receiving actual knowledge through a court order or government notification. If the threshold is not amended, SSMIs may adopt a conservative approach of over-labelling SGI content, which will lead to user desensitisation to such labelling and consequently reduce the impact of the Draft SGI Amendments.
Grace Period

- SSMIs will need adequate time to ensure that they are able to deploy ‘reasonable and appropriate’ tools which will enable them to detect and label SGI content, so as to meaningfully address their obligations.