Facebook Disabled 583 Million Fake Accounts in Q1 2018: Its First Transparency Report

Facebook took action on 1.9 million pieces of terrorist propaganda content and disabled 583 million fake accounts in Q1 2018. These revelations are part of Facebook’s first report on Community Standards Enforcement.

The report covers Facebook’s enforcement efforts between October 2017 and March 2018. It spans six broad areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

The amount of terrorist propaganda content related to Al-Qaeda and ISIS that Facebook acted on rose to 1.9 million pieces in Q1 2018, up 0.8 million from 1.1 million in Q4 2017. Facebook attributed this increase to improvements in its photo detection technology, which finds both old content and newly posted content.

The social media giant said that the number of views of terrorist propaganda content is extremely low on Facebook. “That’s because there’s relatively little of it and because we remove the majority before people see it,” Facebook said. “Therefore, the sampling methodology we use to calculate prevalence can’t reliably estimate how much of this content is viewed on Facebook. We’re exploring other methods for estimating this metric,” the company further explained in the report.
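To see why sampling breaks down for very rare content, consider a rough illustration: if violating content accounts for only a tiny fraction of views, most random samples of views contain zero violating items, and the estimate is dominated by noise. The Python sketch below is hypothetical; the rate, sample size, and function names are illustrative assumptions, not Facebook’s actual methodology:

```python
import math
import random

def estimate_prevalence(views, sample_size, seed=0):
    """Estimate the share of views that contained violating content.

    `views` is a list of booleans (True = the viewed content violated policy).
    Returns a point estimate and a 95% normal-approximation half-width.
    """
    sample = random.Random(seed).sample(views, sample_size)
    violating = sum(sample)  # True counts as 1
    p = violating / sample_size
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, half_width

# Hypothetical platform where 1 in 1,000,000 views is terrorist propaganda.
random.seed(42)
views = [random.random() < 1e-6 for _ in range(2_000_000)]
p, hw = estimate_prevalence(views, sample_size=50_000)
print(f"estimated prevalence: {p:.6%} +/- {hw:.6%}")
# A 50,000-view sample usually contains zero violating views, so the
# estimate (and its interval) collapses to 0% -- the unreliability
# Facebook describes for very rare content.
```

By contrast, categories that are far more common on the platform, such as graphic violence at 0.22% to 0.27% of views, are frequent enough for the same sampling approach to produce stable estimates.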


Fake accounts represented approximately 3% to 4% of monthly active users (MAU) on Facebook during Q1 2018 and Q4 2017. In Q1 2018, Facebook disabled 583 million fake accounts, down from 694 million in Q4 2017. The company said the decrease between Q4 and Q1 is largely due to variation in the volume of cyberattacks and in how well its detection technology finds and flags fake accounts.

In terms of government requests for user data, global requests rose to 82,341 in the second half of 2017, up from 78,890 during the first half of the year. U.S. requests stayed roughly the same at 32,742, though 62 percent included a non-disclosure clause that prohibited Facebook from alerting the user. Requests from the Indian government for details of users or accounts were the second highest, at 17,262.

Graphic Violence:

Posts that included graphic violence represented 0.22% to 0.27% of views in Q1 2018, up from an estimated 0.16% to 0.19% in Q4 2017. In Q1 2018, Facebook took action on a total of 3.4 million pieces of graphic violence content, an increase from 1.2 million pieces of content in Q4 2017. “This increase is mostly due to improvements in our detection technology, including using photo-matching to cover with warnings photos that matched ones we previously marked as disturbing. These actions were responsible for around 70% of the increase in Q1,” said Facebook.
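The report does not detail how the photo-matching works, but a common technique for matching re-uploads of a known image is a perceptual hash. The sketch below uses a simple “average hash”; it is an illustrative assumption rather than Facebook’s proprietary system, and the file names and distance threshold are hypothetical:

```python
from PIL import Image  # pip install pillow

def average_hash(path, hash_size=8):
    """Compute a simple perceptual "average hash" of an image.

    The image is shrunk to hash_size x hash_size grayscale pixels; each bit
    records whether a pixel is brighter than the mean. Visually similar
    images (resized, recompressed copies) tend to produce similar hashes.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical matching loop: compare a newly uploaded photo against
# hashes of images previously marked as disturbing.
flagged_hashes = {average_hash("previously_flagged.jpg")}
new_hash = average_hash("new_upload.jpg")
if any(hamming_distance(new_hash, h) <= 5 for h in flagged_hashes):
    print("Match: cover the photo with a warning screen")
```

Because such a hash survives resizing and recompression, matching new uploads against a library of previously flagged photos can catch both old and newly posted copies, which is consistent with how Facebook describes its detection improvements.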

Nudity and Sexual Activity:

21 million pieces of content that included adult nudity or sexual activity were taken down in Q1 2018. Facebook found and flagged around 96% of such content before users reported it. As TechCrunch noted, this automation can misfire: Facebook once took down the newsworthy “Napalm Girl” historical photo because it contained child nudity, before realizing the mistake and restoring it.

Hate Speech:

Facebook took action on around 2.5 million pieces of hate speech content, up from around 1.6 million in Q4 2017. Of these, only 38% were found and flagged by Facebook before users reported them. The company has also recently been criticized for contributing to violence in Myanmar, where extremists’ hate-filled posts incited attacks.

Spam:


In Q1 2018, the company took action on 837 million pieces of spam content, up from 727 million in Q4 2017. Facebook said it found and flagged virtually all of this spam itself, before any user reports.

This increased transparency on Facebook’s part stems from its belief that transparency “tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too,” as Guy Rosen, VP of Product Management, wrote in a blog post.

Facebook’s report follows the first-ever publication of the internal guidelines that the platform uses to enforce its community standards.


