The government has proposed that netizens using artificial intelligence (AI) to generate content and upload it on the internet or any social media platform must declare that the content is AI-generated. The objective is to ensure that any use of AI is spelt out clearly.
“In Parliament and in several other forums, there have been calls for action against deepfakes, which are causing harm to society. People are using images of prominent individuals to create deepfakes that intrude on their personal lives, violate their privacy, and spread misconceptions,” Union Electronics and Information Technology Minister Ashwini Vaishnaw said on Wednesday.
In a proposed amendment to the Information Technology (IT) Rules of 2021, the Ministry of Electronics and Information Technology (Meity) has said all internet intermediaries allowing the use of AI to generate content must “ensure that every such information is prominently labelled or embedded with a permanent unique metadata or identifier”.
The intermediaries should also have the necessary tools and technical measures to verify the accuracy of the declaration made by users. Further, the declaration of AI usage for content generation must be “prominently displayed”, the ministry has proposed. Stakeholders have been asked to submit their views by November 6.
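As an illustration only (the draft does not prescribe any particular format, standard or tooling), a platform could embed such a declaration and a unique identifier into an image’s metadata along the following lines. The field names and the use of PNG text chunks are assumptions made for this sketch, and such chunks are not tamper-proof on their own.

```python
# Illustrative sketch only: the draft amendment does not specify how the
# "permanent unique metadata or identifier" must be implemented. This shows
# one way a platform could stamp a declaration and an identifier into a PNG
# using Pillow; the field names are hypothetical, and PNG text chunks can be
# stripped, so they are not a tamper-proof mechanism by themselves.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_ai_generated(src_path: str, dst_path: str) -> str:
    """Embed an AI-generation declaration and a unique identifier into a PNG."""
    identifier = str(uuid.uuid4())              # hypothetical unique identifier
    info = PngInfo()
    info.add_text("ai_generated", "true")       # assumed field name
    info.add_text("ai_content_id", identifier)  # assumed field name

    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=info)        # dst_path should end in .png
    return identifier
```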
The government will not set technical standards for the tools used by either users or social media and internet intermediaries to label AI-generated content. However, it will “proceed on the assumption that reasonable and proportionate technical measures have been taken to verify the accuracy of user declarations and to ensure that no synthetically generated information is published without such declaration or label.”
Further, all such labels and disclaimers for AI-generated content must cover at least 10 per cent of the content’s visible area. For AI-generated content with an audio component, the label or disclaimer should appear within the first 10 per cent of the content’s duration.
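A minimal sketch of those two thresholds follows, under the assumption that a full-width banner across the bottom of an image is an acceptable way to cover a tenth of its visible area; the draft does not specify placement, wording or typography.

```python
# A rough illustration of the proposed 10 per cent thresholds. The bottom-banner
# layout, label wording and default font are assumptions made for this sketch.
from PIL import Image, ImageDraw


def add_visible_label(src_path: str, dst_path: str,
                      text: str = "AI-generated content") -> None:
    """Overlay a full-width banner covering roughly 10% of the image area."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")
        width, height = img.size
        banner_height = max(1, height // 10)   # ~10% of the visible area
        draw = ImageDraw.Draw(img)
        draw.rectangle([(0, height - banner_height), (width, height)], fill="black")
        draw.text((10, height - banner_height + 5), text, fill="white")
        img.save(dst_path)


def audio_disclaimer_deadline(duration_seconds: float) -> float:
    """Latest point, in seconds, by which an audio disclaimer must have begun."""
    return duration_seconds * 0.10
```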
Intermediaries should not enable any tools that allow for the “modification, suppression or removal of such label, permanent unique metadata or identifier”, the draft amendment has proposed.
Explaining the rationale behind the amendments, the ministry said there had been several examples lately of “deepfake audio, videos and synthetic media going viral on social platforms”, which had demonstrated the potential of generative AI to create compelling content.
“Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud,” the ministry said in an explanatory post on the rationale behind the proposed amendment.
Social media and internet intermediaries failing to label AI-generated textual, image, audio, or video content, or knowingly allowing such content on their platforms without adequate disclaimers “shall be deemed to have failed to exercise due diligence”.
This could, in turn, result in the intermediary losing its safe-harbour protection under Section 79 of the IT Act, 2000, a senior government official said.
“That will happen if such AI-generated content is found to fall afoul of the existing rules, either under the IT Act of 2000 or the IT Rules of 2021, and the platform, despite being made aware of the nature of the content, either fails to label it or willingly permits such content to exist,” the official said.
