Facebook answers questions on its response to hate speech and moderation

16 March 2020

The Committee will ask Facebook how the way it uses data can be made more understandable to the public and more accessible to researchers, whether it will take responsibility, including financial responsibility, if it consistently promotes content that is inaccurate or contains hate speech, and about the role and effectiveness of its content moderation process.

Witnesses

Tuesday 17 March in Committee Room 2, Palace of Westminster

At 10.30am

  •  Karim Palant, UK Public Policy Manager, Facebook

Possible areas for discussion

Would Facebook accept that it should be fined if it consistently promoted News Feed content containing misinformation or hate speech that had already received a large number of views? If not, what level of regulation would Facebook actually support?

What is the role of human moderation in improving online experiences? How can these processes be made more transparent and consistent? Has Facebook considered creating a public database of anonymised archetypes based on its content moderation decisions, and developing a system of precedents to ensure greater equity in decision making? Why has it decided that the decisions of its oversight board should not create binding precedents?

What aspect of its Third-Party Fact-Checkers' review of politicians' content does Facebook object to? Is it the addition of a link to a fact check, the overlay marking content as false, or the associated down-ranking? Has Facebook considered allowing fact checks of politicians with fewer of these features?
