Artificial Intelligence-Powered Writing Tools Can Be Biased, Sway Opinions and Help Spread Misinformation Like Social Media, Researchers Warn



New York, May 18: Just as social media can facilitate the spread of misinformation, Artificial Intelligence (AI)-powered writing assistants that autocomplete sentences or offer "smart replies" can be biased and produce shifts in opinion, and hence can be misused, researchers warned, calling for more regulation.

Researchers from Cornell University in the US said the biases baked into AI writing tools -- whether intentional or unintentional -- could have concerning repercussions for culture and politics.

To probe this, Maurice Jakesch, a doctoral student in information science at the university, asked more than 1,500 participants to write a paragraph answering the question, "Is social media good for society?"

People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI's help.

"The more powerful these technologies become and the more deeply we embed them in the social fabric of our societies," Jakesch said, "the more careful we might want to be about how we're governing the values, priorities and opinions built into them."

These technologies deserve more public discussion regarding how they could be misused and how they should be monitored and regulated, the researchers said. Jakesch presented the study at the 2023 CHI Conference on Human Factors in Computing Systems in April.

Further, the survey revealed that a majority of the participants did not even notice the AI was biased and did not realise they were being influenced. When the experiment was repeated with a different topic, the research team again saw that participants were swayed by the assistants.

"We're rushing to implement these AI models in all walks of life, but we need to better understand the implications," said Mor Naaman, Professor at the Jacobs Technion-Cornell Institute at Cornell Tech.

"Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society -- shifts in language and opinions," Naaman added.

(The above story first appeared on LatestLY on May 18, 2023 08:16 PM IST.)
