AI Tool Copilot Designer Has Tendency To Create ‘Sexually Objectified’ Images: Microsoft Software Engineer
A Microsoft software engineer recently sent letters to the company’s board, lawmakers and the Federal Trade Commission (FTC), claiming that the tech giant is not doing enough to stop its AI image-generation tool from creating abusive and violent content.
The engineer, Shane Jones, said he found a vulnerability in OpenAI’s latest DALL-E image-generation model that allowed him to bypass the safeguards that were supposed to prevent the tool from creating harmful images.
In a letter sent to the FTC on Wednesday, Jones said he had informed Microsoft of the issue and “repeatedly urged” the company to “remove Copilot Designer from public use until better safeguards could be put in place,” Bloomberg reported.
The letter read, “While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers. Microsoft Copilot Designer does not include the necessary product warnings or disclosures needed for consumers to be aware of these risks.”
He alleged that Copilot Designer had a tendency to generate an “inappropriate, sexually objectified image of a woman in some of the pictures it creates.” He added that the AI tool created “harmful content in a variety of other categories including political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few.”
The FTC confirmed it had received the letter but declined to comment further.
Jones also wrote to Microsoft’s Environmental, Social and Public Policy Committee. In that letter, he wrote, “I don’t believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks. Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”