Alarm Bells Ring as AI Image Tools From OpenAI, Microsoft Fuel Election Misinformation Fears

Artificial Intelligence (AI) image creation tools from OpenAI and Microsoft Corp (NASDAQ:MSFT) can generate photos that could fuel election- or voting-related disinformation, according to a recent report by the Center for Countering Digital Hate (CCDH).


The CCDH, a nonprofit organization dedicated to monitoring online hate speech, utilized these generative AI tools to fabricate images depicting U.S. President Joe Biden in a hospital bed and election workers destroying voting machines.

Such visuals raise concerns about the proliferation of falsehoods in the lead-up to the U.S. presidential election in November, Reuters reports.

The report emphasizes the risk that AI-generated images, perceived as “photo evidence,” pose to the integrity of elections due to their potential to amplify false claims.

The CCDH tested several platforms, including OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney, and Stability AI’s DreamStudio, finding that the tools generated the requested disinformation images in 41% of attempts.

The tests showed the tools were more susceptible to prompts asking for photos depicting election fraud, such as discarded voting ballots, than to those requesting images of political figures like Biden or former President Donald Trump.

While ChatGPT Plus and Image Creator effectively blocked all attempts to create images of candidates, Midjourney produced misleading images in 65% of tests.

In response to the findings, Midjourney founder David Holz said updates related to the U.S. election are forthcoming, pointing to improvements in the platform’s moderation practices.

Stability AI also revised its policies to forbid the creation or promotion of disinformation and fraud.
