UCLA academics discuss moderation processes and algorithmic design

26 February 2020

The Committee will today hear from academics at UCLA's Centre for Critical Internet Inquiry on moderation practices of social media platforms and algorithmic design.

Witnesses

Wednesday 26 February in Committee Room 3, Palace of Westminster

At approximately 4.00pm

  • Professor Sarah Roberts, Co-Director, UCLA Centre for Critical Internet Inquiry
  • Dr Safiya Noble, Co-Director, UCLA Centre for Critical Internet Inquiry

Possible areas for discussion

  • How do the content moderation policies and algorithmic designs of major technology platforms shape democratic discussion online?
  • What should best practice in content moderation look like? Do moderators have enough time and contextual information to make reliable decisions? Do the labour conditions moderators face affect the decisions they make? Can it be right that moderators are at times required to sign NDAs, the effect of which is to encourage suspicion of malpractice?
  • What role could civil society play in improving content moderation? Can any lessons be learned from Facebook's third-party fact checking network that could be generalised more widely to content moderation?
  • How can the algorithmic design of large platforms further discrimination against certain groups? What could be done to reduce the amount of bias in the design and operation of these platforms?
  • Does the algorithmic design of platforms cause new behaviours or exacerbate behaviour that users already engage in? Is the recommendation of extreme and polarising content primarily driven by the design of algorithms or by users' preference for this type of content?
  • To what extent would greater transparency in the moderation process and algorithmic design behind technology platforms be an improvement? What should this transparency look like?
