Facebook answers questions on response to hate speech and moderation
16 March 2020
- Parliament TV: Democracy and Digital Technologies Committee
Witnesses
Tuesday 17 March in Committee Room 2, Palace of Westminster
At 10.30am
- Karim Palant, UK Public Policy Manager, Facebook
Possible areas for discussion
Would Facebook accept that it should be fined if its news feed consistently promoted misinformation and hate speech that had already received a large number of views? If not, what level of regulation would Facebook actually support?
What is the role of human moderation in improving online experiences? How can these processes be made more transparent and consistent? Has Facebook considered creating a public database of anonymised archetypes based on its content moderation decisions, and developing a system of precedents to ensure greater equity in decision-making? Why has it decided that the decisions of its Oversight Board should not create binding precedents?
Which aspect of third-party fact-checking of politicians' content does Facebook object to? Is it the addition of a link to a fact check, the overlay indicating that the content is false, or the associated down-ranking? Has Facebook considered allowing fact checks of politicians' content with fewer of these features?