Seek Nod Before Launching AI Models in India: Centre to Social Media Platforms

The Union Ministry of Electronics and Information Technology (MeitY) has issued a second advisory to platforms and intermediaries, asking them to seek explicit permission from the Centre before launching under-testing Artificial Intelligence (AI) models in the country.


New Delhi, March 3: The Union Ministry of Electronics and Information Technology (MeitY) has issued a second advisory to platforms and intermediaries, asking them to seek explicit permission from the Centre before launching under-testing Artificial Intelligence (AI) models in the country. The advisory was issued on Friday evening, more than two months after the ministry issued an advisory in December last year directing social media platforms to follow existing IT rules to deal with the issue of deepfakes.

"The use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, 'consent popup' mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated," the advisory read. AI Models That Respect Creators’ Rights Get New Certification Label From Nonprofit Group ‘Fairly Trained’: Report

The advisory added that it recently came to the notice of the ministry that intermediaries or platforms are failing to fulfil the due-diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules). "All intermediaries or platforms (are) to ensure that use of Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in the Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act," it stated.

"All intermediaries or platforms (are) to ensure that their computer resource do not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s)," the advisory read.

"The use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, 'consent popup' mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated," it stipulated further. Google Introduces New Open Source ‘Gemma’ AI Model Built With Same Research and Technology, Now Available Worldwide in 2B and 7B Sizes

"All users must be clearly informed including through the terms of services and user agreements of the intermediary or platforms about the consequence of dealing with the unlawful information on its platform, including disabling of access to or removal of non-compliant information, suspension or termination of access or usage rights of the user to their user account, as the case may be, and punishment under applicable law," it added.

"Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created, generated, or modified through its software or any other computer resource is labeled or embedded with a permanent unique metadata or identifier, by whatever name called, in a manner that such label, metadata or identifier can be used to identify that such information has been created, generated or modified using computer resource of the intermediary, or identify the user of the software or such other computer resource, the intermediary through whose software or such other computer resource such information has been created, generated or modified and the creator or first originator of such misinformation or deepfake," the advisory added.

"It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statues of the criminal code," the advisory added. "All intermediaries are, hereby requested to ensure compliance with the above with immediate effect and to submit an Action Taken-cum-Status Report to the Ministry within 15 days of this advisory," the Union ministry added.

