Are social media companies doing enough to moderate harmful content?
The Science, Innovation and Technology Committee will hear from experts in the third session of its inquiry into social media, misinformation and harmful algorithms.
In the first panel, the cross-party committee will discuss how online advertising relates to the business models of social media companies, and the algorithms their platforms use.
MPs could explore the monetisation of harmful content on social media, as well as the complexity and opacity of the digital advertising system and its contribution to online harms. Members could also ask about regulation of the digital advertising market, previous government interventions in this area, and whether the Online Safety Act will create a safer online advertising environment.
In the second panel, the committee will discuss the wider landscape of content moderation and online safety with experts who have worked closely with social media platforms.
Members could seek views on whether platforms such as Meta, TikTok and X did enough to moderate misinformation and harmful content following the 2024 Southport attack, and whether these platforms are sufficiently committed to online safety. They may also explore recent changes to content moderation practices and policies by some platforms, ways to deal with misleading content, and whether the Online Safety Act is fit for purpose.