Twitter Bans Dehumanising Tweets on the Basis of Age, Disability or Disease


San Francisco, March 6: Twitter has expanded its rules around hate speech to include language that dehumanises people on the basis of their age, disability or disease. Last year, the micro-blogging platform updated its 'Hateful Conduct' policy to address dehumanising speech, starting with one protected category: religious groups.

"Our primary focus is on addressing the risks of offline harm, and research shows that dehumanising language increases that risk. As a result, we expanded our rules against hateful conduct to include language that dehumanises others on the basis of religion. "Today, we are further expanding this rule to include language that dehumanizes on the basis of age, disability or disease," Twitter said in a statement on Thursday.

The inclusion of disease is significant as the novel coronavirus spreads across the globe and people share all kinds of information, including jokes, videos, memes and GIFs targeting certain communities, which can hurt their sentiments.

"Tweets that break this rule pertaining to age, disease and/or disability, sent before today will need to be deleted, but will not directly result in any account suspensions because they were Tweeted before the rule was in place," said the company.

In 2018, Twitter asked for feedback on the proposed change so it could hear directly from different communities and cultures. In two weeks, it received more than 8,000 responses from people located in more than 30 countries. Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered.

Respondents said that "identifiable groups" was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalised groups with this type of language. Many people wanted to "call out hate groups in any way, any time, without fear".

In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as "kittens" and "monsters".

"We are continuing to learn as we expand to additional categories," said Twitter, adding that it has developed a global working group of outside experts to help how it should address dehumanising speech around more complex categories like race, ethnicity and national origin.

