For the first time, Facebook has revealed how it polices fake accounts and targets offensive and hateful content posted on the social network.

In its Community Standards Enforcement Preliminary Report, released Tuesday, Facebook shared how much content it has removed for violating its standards. The report covers enforcement efforts between October 2017 and March 2018 across six categories: graphic violence; adult nudity and sexual activity; terrorist propaganda; hate speech; spam; and fake accounts.

In the report, Facebook said the overwhelming majority of action taken was against spam posts and fake accounts: it took action on 837 million pieces of spam and shut down a further 583 million fake accounts in the first three months of 2018. But Facebook also moderated 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 3.4 million pieces of graphic violence and 21 million pieces of content featuring adult nudity and sexual activity. As of the first quarter of 2018, Facebook had 2.19 billion monthly active users.

Facebook is using artificial intelligence (AI) to help its content moderators identify content that violates its guidelines. A little over 85 percent of the 3.4 million posts containing graphic violence that Facebook acted on in the first quarter were flagged by AI. Facebook CEO Mark Zuckerberg addressed the transparency report directly in a post on his Facebook page Tuesday. "AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we're working on it," Zuckerberg wrote.

The company also announced measures requiring political advertisers to undergo an authentication process and disclose their affiliation alongside their advertisements.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.

The report comes amid increasing criticism of how Facebook controls the content it shows to users, although, as the company was careful to highlight, its new methods are evolving and aren't set in stone. CNET, a technology website, notes that the response to extreme content on Facebook is particularly important amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda. Most recently, the scandal involving digital consultancy Cambridge Analytica, which improperly accessed the data of up to 87 million Facebook users, has put the social media company's content moderation in the spotlight.
