Centre makes U-turn on plan to require government permits for ‘under-tested’ AI models

Amid criticism, the Modi government has revised its advisory on the use of artificial intelligence (AI), dropping the provision that required intermediaries and platforms to obtain government permission before deploying “under-tested” or “unreliable” AI models and tools in the country.

In the latest advisory issued last evening, the government has asked firms to label under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.

It also flags concerns that intermediaries and platforms have been negligent in carrying out the due diligence mandated by the existing IT Rules.

“The advisory is issued in supersession of advisory…dated March 1, 2024,” it said.

According to the new advisory, it has been observed that IT firms and platforms are often negligent in fulfilling the due diligence obligations laid down under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

“Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created, generated or modified through its software or any other computer resource is labelled… that such information has been created, generated or modified using the computer resource of the intermediary,” the advisory said.

If a user makes any changes, the metadata should be configured to enable identification of the user or computer resource that effected the change, it added.

The advisory said every intermediary and platform is now mandated to ensure that AI-generated content, especially content susceptible to deepfake manipulation, is labelled accordingly.

Platforms are instructed to ensure that their AI models do not allow users to post or share unlawful content. The Ministry of Electronics and Information Technology (MeitY) emphasises the need for platforms to ensure that deployed AI models do not exhibit bias and do not interfere with electoral processes.

Under-tested AI models are to be deployed only after proper labelling and disclosure to users, informing them of potential inaccuracies in results.

Platforms are also required to inform users of the consequences of posting unlawful information, promoting responsible online behaviour and content-sharing practices.

The government had issued the March 1 advisory for social media and other platforms after a controversy over a response by Google’s AI platform to queries related to Prime Minister Narendra Modi.
