Microsoft’s AI Tool Copilot Designer Has Tendency To Create ‘Inappropriate’ and ‘Sexually Objectified’ Images, Says Report
A Microsoft software engineer said that the Copilot Designer AI tool created abusive, violent and sexually objectified images. He also sent a letter to the US FTC and informed Microsoft and lawmakers, claiming that Microsoft is not doing enough to safeguard its AI image generation.
Mumbai, March 7: A software engineer from Microsoft recently sent a letter to the company's board, the Federal Trade Commission and lawmakers regarding the company's AI image generation tool. The engineer claimed that Microsoft needs to do more to prevent its AI image generation tool from creating abusive and violent content. Recently, Google paused its own image generation capability after facing severe criticism over historically inaccurate images of the founding fathers of the United States and accusations of promoting racism and wokeism.
Now, the popular AI chatbot Copilot has been in the news over reports of sexually objectifying, abusive and violent content. According to a report by ABP Live, Microsoft software engineer Shane Jones discovered a vulnerability in OpenAI's new DALL-E image generator model that allowed him to bypass the safeguards that were supposed to prevent it from generating harmful images.
In his letter to the FTC, he reportedly said that he had repeatedly urged the company to remove Copilot Designer from public use until better safeguards were in place. He further said that Microsoft has been marketing Copilot Designer as a safe AI tool for everyone, including children of any age. The report said he also added that Microsoft is internally aware of these issues related to harmful image creation, which might be offensive to some users.
He further added that Copilot Designer has a tendency to generate inappropriate, sexually objectifying images of women. The AI tool had reportedly also created harmful images involving political bias, drug use, underage drinking, conspiracy theories, religion, and misuse of corporate trademarks and copyrights. The report also mentioned that the FTC confirmed receiving the letter but declined to comment, and that Microsoft indirectly acknowledged the same issues a week earlier.
(The above story first appeared on LatestLY on Mar 07, 2024 05:26 PM IST.)