Reuters | Nov 20, 2020 00:16:54 IST
By Elizabeth Culliford
(Reuters) – Facebook Inc, the world’s largest social media company, under scrutiny over its policing of abuses, particularly around November’s U.S. presidential election, released the figures in its quarterly content moderation report.
Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified. It took action on 22.5 million in the previous quarter.
Facebook defines ‘taking action’ as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.
Its photo-sharing site Instagram took action on 6.5 million pieces of hate speech content, up from 3.2 million in Q2. About 95% of this was proactively identified, up 10 percentage points from the previous quarter.
This summer, civil rights groups organized a widespread Facebook advertising boycott to try to pressure social media companies to act against hate speech.
In October, Facebook said it was updating its hate speech policy to ban any content that denies or distorts the Holocaust, a turnaround from public comments Facebook’s Chief Executive Mark Zuckerberg had made about what should be allowed on the platform.
Facebook also said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content, up from 3.1 million in the second quarter.
Earlier this week, Zuckerberg and Twitter Inc Chief Executive Jack Dorsey testified before the U.S. Senate Judiciary Committee about their companies’ content moderation practices.
Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify suspension when he urged the beheading of two senior U.S. officials.
The company has also been criticized in recent months for allowing rapidly growing Facebook groups that share false election claims and violent rhetoric to gain traction.
Facebook said its rates for finding rule-breaking content before users reported it were up in most areas, due to improvements in artificial intelligence tools and the expansion of its detection technologies to more languages.
In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content review workforce, though it said some enforcement metrics were returning to pre-pandemic levels.
Facebook reported taking action on 12.4 million pieces of child nudity and sexual exploitation content, up from 9.5 million in the previous quarter.
(Reporting by Elizabeth Culliford; Editing by Nick Zieminski)
This story has not been edited by Firstpost staff and is generated by auto-feed.