LONDON: Facebook has had community standards and a dedicated safety team for years now, but has previously been reluctant to share exactly how guidelines are enforced and how much content is actually removed.
Today (15 May), the company released content-removal figures for the first time, in an unprecedented demonstration of transparency.
Facebook vice-president of product management Guy Rosen said: “We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too.
“This is the same data we use to measure our progress internally – and you can now see it to judge our progress for yourselves.”
837m pieces of spam were removed in the first quarter of 2018 alone, nearly 100pc of which Facebook says was flagged before any users reported it. 583m fake accounts were disabled in Q1 of this year, in addition to the millions of registration attempts the company blocked. Even so, Facebook estimates that 3 to 4pc of accounts on the site during this period were still fake.
21m pieces of content featuring adult nudity and sexual activity were pulled in Q1, 96pc of which the company said was flagged by technology before it was reported. Facebook estimates that for every 10,000 pieces of content viewed, seven to nine violated its nudity and pornography rules.
Rosen noted that the technology to spot hate speech is still not up to scratch, so content review teams check for instances manually. 2.5m pieces of hate speech were removed in Q1 of 2018, only 38pc of which was flagged by technology.