AI Models Need To Get Indian Govt Approval Before Deployment
The Ministry of Electronics and Information Technology (MeitY) has released an advisory directing AI platforms to seek its approval before publicly deploying any artificial intelligence (AI) models that are still under testing or considered unreliable.
The advisory covers all AI systems, including generative AI models, large language models (LLMs), and algorithms, that are currently under testing or considered unreliable.
It requires these models to receive explicit permission from MeitY before being offered to Indian users. The advisory also asks platforms to clearly label possible inaccuracies in model outputs and obtain user consent, potentially via popups. It warns that legislation may follow if the advisory is not complied with.
The key directives instruct AI providers to suitably inform users about inherent biases or inaccuracies in model outputs, for instance through consent popups and clear labelling of limitations. Models must also not further any discrimination or threaten election integrity. In addition, the advisory highlights the need to embed deepfakes and other synthetic media with metadata that allows them to be identified.
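To make the metadata requirement concrete, here is a minimal sketch of how a platform might embed identification information into a generated image. It assumes the Pillow library is used for image handling; the field names, the "example-image-model" identifier, and the notice text are hypothetical illustrations, not a format mandated by MeitY.

```python
# Minimal sketch of labelling an AI-generated image with identification
# metadata. Assumes Pillow; field names and values are hypothetical.
import json
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_generated_image(image: Image.Image, out_path: str) -> str:
    """Embed identification metadata into an AI-generated PNG and return its ID."""
    provenance_id = str(uuid.uuid4())
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("provenance", json.dumps({
        "generator": "example-image-model",  # hypothetical model identifier
        "provenance_id": provenance_id,
        "notice": "Output may contain inaccuracies or biases.",
    }))
    image.save(out_path, pnginfo=metadata)
    return provenance_id


if __name__ == "__main__":
    # Stand-in for a model-generated image; verify the label survives a round trip.
    img = Image.new("RGB", (64, 64), color="gray")
    label_generated_image(img, "labelled_output.png")
    print(Image.open("labelled_output.png").text)
```

In practice, platforms might instead adopt an industry provenance standard such as C2PA content credentials, which survive format conversions better than simple text chunks; the sketch above only illustrates the general idea of machine-readable identification.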
Connection With Google Gemini?
This significant move follows recent controversies over biases exhibited by AI apps such as Google’s Gemini. Screenshots shared on the social media platform X show that when a user asked Gemini to depict a German soldier in 1943, the tool produced an array of racially diverse soldiers wearing German military uniforms of that era.
Similarly, when prompted for a “historically accurate depiction of a medieval British king,” the model generated another diverse selection of images, including one portraying a female ruler, as shown in screenshots. Beyond these examples, users reported further inaccuracies and racial misrepresentations in images produced by Gemini’s image-generation tool.
The recent advisory appears to be a response to growing instances of AI bias related to race, gender, and other attributes. However, MeitY had been exploring frameworks for accountable AI regulation even before this incident.