
Facebook Closes 583m Fake Accounts With AI-based Tools

Published May 16, 2018

Following the data breach scandal involving Facebook and Cambridge Analytica, Facebook founder Mark Zuckerberg made good on his promise to clean up the social media platform’s act by closing 583 million fake accounts.

In total, Facebook took action against almost 1.5bn accounts and posts that violated its community standards in the first three months of 2018, the company has revealed.

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts on the site over the three months. In the same period it moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.

The amount of content moderated by Facebook is influenced both by the company’s ability to find and act on infringing material and by the sheer quantity of items posted by users. Alex Schultz, the company’s vice-president of data analytics, said the amount of content moderated for graphic violence, for instance, almost tripled quarter-on-quarter.

Several categories of violating content outlined in Facebook’s moderation guidelines – including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement – are not included in the report.

Facebook also increased the amount of content taken down using new AI-based tools, which find and moderate content without needing individual users to flag it as suspicious. Those tools worked particularly well on content such as fake accounts and spam: the company said it used them to find 98.5% of the fake accounts it shut down, and “nearly 100%” of the spam.
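The report does not explain how these tools work. Purely as an illustration of the proactive-detection idea described above, the minimal Python sketch below flags items on a classifier score before any user report arrives, and computes the share of actioned items found that way, which is the kind of figure behind the 98.5% claim. Every name, score and threshold here is a hypothetical assumption, not Facebook’s actual system.

```python
# A minimal, hypothetical sketch of proactive moderation: flag items from a
# model score alone, before any user report. Purely illustrative; all names,
# scores and thresholds below are assumptions, not Facebook's system.

from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: int
    spam_score: float    # assumed output of some trained classifier, 0..1
    user_reported: bool  # whether any user flagged the item

def flag_proactively(items, threshold=0.9):
    """Return items flagged by the model alone, with no user report."""
    return [i for i in items if i.spam_score >= threshold and not i.user_reported]

def proactive_rate(actioned):
    """Share of actioned items found before any user report -- the kind of
    metric behind figures like '98.5% of fake accounts found by our tools'."""
    if not actioned:
        return 0.0
    found_first = sum(1 for i in actioned if not i.user_reported)
    return found_first / len(actioned)

if __name__ == "__main__":
    batch = [
        ContentItem(1, 0.97, user_reported=False),  # caught by the model alone
        ContentItem(2, 0.95, user_reported=True),   # also reported by a user
        ContentItem(3, 0.40, user_reported=True),   # found only via a report
    ]
    flagged = flag_proactively(batch)
    print(f"proactively flagged: {[i.item_id for i in flagged]}")
    print(f"proactive rate: {proactive_rate(batch):.1%}")
```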

Facebook’s moderation figures come a week after the release of the Santa Clara Principles, an attempt to write a guidebook for how large platforms should moderate content. The principles state that social networks should publish the number of posts they remove, provide detailed information to users whose content is deleted explaining why, and offer the chance to appeal against the decision.
