Vital to remain vigilant about deepfakes in global election year, says Wipro’s Global Privacy Officer

With over 60 nations, India included, gearing up for elections this year, Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro, stresses the need to remain vigilant about evolving trends in the dynamic digital realm, particularly deepfakes.

“Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did,” emphasised Bartoletti, who is also the founder of the ‘Women Leading in AI Network’.

The repercussions extend beyond the digital domain, as online disinformation and coordination can spill over into real-world violence.

In India, the government has issued a revision to its AI advisory, stating that major digital corporations no longer require governmental approval before launching an AI model within the country.

Nevertheless, these tech giants are urged to tag “under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.”

“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s), or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” as per the new MeitY advisory.

All intermediaries or platforms are mandated to ensure that the utilisation of AI models/LLM/Generative AI, software, or algorithms “does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act.”

The digital platforms are directed to comply with the new AI guidelines immediately.

According to Bartoletti, to ensure public safety, companies must shoulder responsibility and enact measures to combat deepfakes and disinformation.

“This includes investing in advanced detection technologies to identify and flag deepfake content, as well as collaborating with experts to develop effective debunking methods,” she noted.

Furthermore, fostering media literacy and critical thinking among the populace is paramount. “By taking proactive steps to address the risks of deepfakes, we can protect the integrity of elections and uphold the democratic process,” Bartoletti concluded.
