Written evidence submitted by Compassion in Politics (OSB0050)

 

Will the proposed legislation effectively deliver the policy aim of making the UK the safest place to be online?

 

1. While we welcome the Online Safety Bill and its objective of creating a safe online environment through the (re)design of social media sites and search engines, we do not believe the current draft goes far enough in requiring or implementing reform. We recommend that the following changes be made to the Bill to ensure it achieves its objectives.

 

2. Tackle the problem of anonymous accounts. The majority of abuse and misinformation spread online comes from anonymous accounts[1] - reducing their number and reach will therefore help to reduce the amount of abusive or misleading content found on social media sites. We do not, however, believe anonymity should be completely abolished: there are legitimate reasons why, for example, a whistleblower, individual fleeing domestic abuse, or member of the LGBTIQA+ community may need to remain anonymous online. We propose that every social media user should be offered the opportunity to gain a “verified” account (by uploading a single piece of personal identification). All users - verified or not - should then, in addition, be given the option of screening out unverified users from their feeds and DMs. Our research shows that this would be a popular proposal: 81% of people surveyed said they would be happy to provide a piece of personal ID in order to gain a “verified” account if this helped to reduce the reach and number of anonymous (or “unverified”) accounts.[2]

 

3. Reduce harm to adults by design. As it stands, social media platforms will be required to minimise the presence of illegal content on their sites and prevent the circulation of content that is harmful to children (Part 2, Chapter 2, “Safety duties for services likely to be accessed by children”). However, the Bill creates no such requirement for content that is harmful to adults. Platforms will need to produce terms of service for their users to follow (Part 2, Chapter 2, “Safety duties protecting adults: Category 1 services”), but this shifts responsibility entirely away from the tech platforms and runs counter to the Bill’s objective of reducing abuse by design. The requirement for social media companies to prevent the circulation of harmful content should apply to users of all ages.

 

4. Set minimum standards. The Bill should empower Ofcom to set minimum standards for social media platforms’ terms of service. Otherwise, there will be a perverse incentive for platforms to produce weak terms: their profit margins depend on enabling as many people as possible to share as much content as possible, no matter what the repercussions. We need to draw a very clear line in the sand about the type of material that should not be shared online and the actions platforms must take to remove it.

 

5. Establish a clear and intersectional definition of “harm” that encompasses individual and societal harm. The current definition of harm (Part 2, Chapter 6) is too vague - a vagueness that social media companies might exploit to do the bare minimum required to comply with their obligations. While we appreciate that further examples of a “harm” may well be set out in statutory instruments, we believe that the Bill should go further in providing a firm and practicable definition that can be used as a reference principle. Our own research suggests that the public would support a definition of harm that encompassed not just the impact of the action but the nature of the action itself. Over 50% of the people we surveyed agreed that each of the following behaviours constitutes a “harm”: sharing intimate pictures of someone without their consent (65%); intimidating or threatening someone (65%); spreading false information about an individual (63%); posting information that could facilitate someone to commit a harm (61%); being sexist, racist, homophobic, transphobic, ableist, or ageist (58%); spreading false information about an issue or organisation (53%); and using aggressive language (50%).[3]

 

6. Further, the Bill should acknowledge that some communities are more likely than others to be targets of harm. That basic principle is already set out in the Equality Act 2010, and the Online Safety Bill should recognise and promote the principles of that Act. To achieve this, a requirement should be created for internet companies to tackle abuse directed at individuals with protected characteristics as a priority. In addition, Ofcom should be instructed to develop specific codes of practice for internet companies on how they should tackle abuse directed against groups with a protected characteristic. Such codes should be developed in partnership with representatives of the groups affected.

 

7. Finally, the Bill must make clear that harm can be both individual and societal. The current definition seems to imply that harm can only be inflicted on an individual, but this is demonstrably untrue. The use of violent and degrading language online can inspire physical acts of aggression offline. Fake news about, for example, climate breakdown or Covid could inhibit government or public action on these issues of national wellbeing and security.[4]

 

8. The definition of harm must therefore be expanded to include societal harms - most obviously fake news, myths, and abusive or stereotyping language used to describe a particular ethnic or religious group, gender, sexuality, age, or political profile.

 

9. In addition, we recommend that a specific clause be inserted into the definition of harm to cover wilful misrepresentation by an individual in elected office. The spread of false information by elected officials on social media is a matter of serious importance that no legislation has yet addressed. This is a significant oversight: elected officials influence public opinion and their statements are often the primary source for media reporting and news stories. Their ability to influence the agenda and public zeitgeist in this way has been enhanced by social media, which has exponentially increased the reach and speed of their communication. A requirement must therefore be created that statements “of fact” made online by elected officials should be verifiable.

 

10. Clarify and explain the sections on “journalistic” and “democratic” content. We are concerned that both of these sections could, without further explanation, significantly undermine the overall objectives of the Bill.

 

11. We question the basic premise of the section on journalistic content. We do not believe that any profession should be given an exemption from the basic principles of the Bill: to avoid harm, reduce misrepresentation, and create a safe online environment for all.

 

12. We also believe that the wording of this section could prove problematic. The definitions set out in Part 2, Chapter 6 indicate that any group of individuals, regardless of their intentions, could form a “news publisher” - so long as there is an appointed editor and a complaints process - and in so doing benefit from the exemption carved out for journalists.

 

13. Our position is that this section should not be included in the final iteration of the Bill. If it is, safeguards should, at the very least, be put in place to ensure that the clause applies only to professional journalists and that those journalists do not deliberately exploit it to spread abusive or misleading content.

 

14. Similarly, the “democratic content” exemption (Part 2, Chapter 2) is open to exploitation. The definition of “content of democratic importance” is not at all clear, nor is that of a “live political issue”. We are concerned that this vagueness could be exploited to spread abuse or misinformation: individuals sharing purposefully contentious or controversial material may contend that they are performing a democratic duty in doing so. Rather than creating a loophole for misinformation and abuse, this section of the Bill should be used to improve the standards of online political debate by stipulating that information shared about topics of national significance must, where appropriate, be verifiable, and that information sent by an elected official - or an individual seeking election - should carry an imprint.

 

15. Require that social media platforms be transparent about their processes for dealing with user-generated complaints. While the Bill does require that social media platforms publish details of their complaints process (Part 2, Chapter 3), they are under no obligation to report on how it works in practice. Investigations have shown that this is a significant problem: while platforms may promote a robust complaints procedure, the reality is that complaints are often dealt with by overworked, poorly-trained staff working long hours in hostile conditions.[5] This is evidently no way to ensure that complaints are dealt with fairly or effectively. To help improve standards, social media companies should be required to report to Ofcom on: a) how many staff they employ to handle user-generated complaints; b) the training those staff receive; c) the number of complaints the staff review per hour; and d) anonymised information on whether those complaints are upheld or rejected.

 

Does the draft Bill make adequate provisions for people who are more likely to experience harm online or who may be more vulnerable to exploitation?

 

16. It does not. As explained above, this could be significantly improved by creating an intersectional definition of “harm” (effectively requiring that social media companies prioritise tackling abuse directed against protected groups).

 

Is the “duty of care” approach in the draft Bill effective?

 

17. The idea of establishing a “duty of care” is the right approach.

 

18. However, that objective is undermined within the Bill: by failing to require that platforms prevent the circulation of content that is harmful to adults (see paragraph 3) and, as explained later, by allowing those platforms to define what constitutes “legal but harmful” content and the steps that should be taken to moderate it (see paragraph 39).

 

Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content? What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?

 

19. As explained above, the Bill fails in its intention to focus on systems and processes in two principal ways: by failing to require that platforms “prevent” the circulation of content that is harmful to adults, and by overlooking the need to create minimum standards for social media sites’ terms of service. Failing to create a baseline will inspire a race to the bottom: platforms may out-compete one another to have the weakest standards, using this to attract users who want to share controversial and abusive content.

 

20. Regarding algorithms, we believe there could be scope for the Bill - perhaps in secondary legislation - to set out a framework for how algorithms should be deployed by social media companies. For example, they could be required to deploy them to: reduce the spread of false or abusive content, flag content as unreliable, stop abusive DMs, and limit the reach of content from anonymous accounts. At the same time, they should be prevented from deploying algorithms to promote controversial or contentious, harmful, or false content. This could be set out in a general principle that algorithms may only be used to achieve a “social good”. Users should also be given greater and easier control over the algorithms that affect the content they view online.

 

Does the proposed legislation represent a threat to freedom of expression, or are the protections for freedom of expression provided in the draft Bill sufficient?

 

21. The distinction often drawn between “freedom of expression” and regulation is a false one.

 

22. Firstly, “freedom of speech” should never include the freedom to abuse or to spread false information wilfully. The reality is that there must always be a limit to freedom of expression, and we have to be open and honest about that fact.

 

23. Secondly, creating protections actually enhances freedom of expression. The current climate of hostility, toxicity, and abuse online prevents many people from joining social media sites. Even those who join are often inhibited from sharing their views for fear of abuse: our polling found that 1 in 4 people (27%) are scared of voicing an opinion online because they expect to receive abuse if they do so.[6] A more inclusive and less hostile online environment will encourage hidden and minority voices to share their views and stories.

 

24. In addition, our polling suggests that the current political concern with “freedom of speech” may well be misplaced. Asked which is more important, freedom of expression or freedom from abuse, our polling found that 60% of the public prioritise the latter, just 24% the former.[7]

 

The draft Bill specifically places a duty on providers to protect democratic content, and content of journalistic importance. What is your view of these measures and their likely effectiveness?

 

25. See Paragraphs 10-14.

 

Earlier proposals included content such as misinformation/disinformation that could lead to societal harm in scope of the Bill. These types of content have since been removed. What do you think of this decision?

 

26. This is a bad decision. Misinformation and disinformation can cause untold harm, particularly to society. Our stance and recommendations on this are set out in paragraphs 7-9.

 

What would be a suitable threshold for significant physical or psychological harm, and what would be a suitable way for service providers to determine whether this threshold had been met?

 

27. We question the benefit of the term “significant”, since it appears to focus discussion too readily on the upper limits of harm. Rather, we would have the Bill aim at eliminating a set list of harms - one that can be added to as required. See paragraph 5 for details of the harms the public believe should be moderated. That list is not meant to be exhaustive, but it does give an idea of the types of harm the public wish to be protected against.

 

29. The list should also include societal harms - fraud, electoral manipulation, scams, intellectual property crime, and, of course, terrorist activity and issues of national security.

 

Are the distinctions between categories of services appropriate, and do they reliably reflect their ability to cause harm?

 

30. We are not opposed to the idea of creating categories of services but would urge the committee to consider recommending the following two amendments to the current proposals.

 

31. Firstly, in terms of criteria (Schedule 4), we believe that the reach or influence of a platform should be considered. Some sites may not have a large user base, but because of the inflammatory nature of the material they host - and because of the wider networks their users may belong to - they could have the potential to cause significant harm to individuals and society.

 

32. Secondly, the duty to protect adults from harmful content should be extended to all categories (and made a duty to prevent, as outlined above). It makes no sense that, in a Bill designed to create a safe online environment, some types of service should be exempt from a duty to protect adults from harmful content.

 

Are there any foreseeable problems that could arise if service providers increased their use of algorithms to fulfil their safety duties? How might the draft Bill address them?

 

33. Clearly, the use of algorithms presents a danger to the public and to society at large, because these hidden rules curate and promote content. They therefore influence what we see, read, and, as Facebook’s own experiments have revealed, feel.[8]

 

34. However, we also acknowledge that algorithms play a central role in the way online companies operate - and they could be a powerful tool in helping to stop the spread of harmful content.

 

35. The key, therefore, is to ensure that algorithms are used for a social good and that they are presented to the public in an easy-to-understand manner that enables users to alter or switch off the use of algorithms on their account.

 

How much influence will a) Parliament and b) The Secretary of State have on Ofcom, and is this appropriate?

 

36. We are concerned about the powers the draft Bill hands over to the Secretary of State and by the lack of enforcement authority given to Ofcom.

 

37. We are especially worried by the “Secretary of State’s power of direction” (Part 2, Chapter 5). The power to direct social media platforms should rest, in future, with Ofcom. This will ensure that Ofcom’s authority remains intact, that its expertise is respected, and that decisions regarding the acceptability of certain forms of content online are not politicised.

 

38. Further, we are concerned that the enforcement powers granted to Ofcom are skewed towards penalising non-compliance after the fact; we would like to see its authority to secure compliance pre-emptively strengthened. This could be done in two principal ways.

 

39. Firstly, the government proposes that the regulator will tackle “legal but harmful” activity on “category one” platforms by holding those platforms to account for implementing “their own terms and conditions”. This approach carries two significant risks. The first is that it gives social media companies too much power to determine what constitutes “legal but harmful” material. The definition of harm should be determined by the government, as the representatives and servants of the public, not by a private company. The second is that, having set the definition of “legal but harmful”, internet companies will then be free to assess what steps are necessary to moderate “harmful” content. The companies could end up not just marking their own homework but setting it as well. The Bill should instead give Ofcom very clear legal powers to strengthen the definitions proposed by category one platforms for “legal but harmful” material and the steps they are required to take to eliminate that material from their sites.

 

40. Secondly, as outlined above, Ofcom will not be given the power to set minimum standards for the terms of service internet companies will be expected to produce. Ofcom could be tasked with, for example, creating a template set of minimum standards - along with an ideal set of standards - and given the power to direct companies to improve their terms when they fall short.

 

Contact

Matt Hawkins - Co-Director of Compassion in Politics: matt@compassioninpolitics.com

 

20 September 2021

 


[1] Clean Up The Internet: https://804a57b7-e346-4cab-9bec-ef5466085646.usrfiles.com/ugd/804a57_668756eb291e4eddbfd8cfe4b29b9a07.pdf

[2] https://d3n8a8pro7vhmx.cloudfront.net/cip/pages/304/attachments/original/1614359034/OP16168_CiP_Support_for_Glorification_of_Violence_Policy_-_Tables.xlsx?1614359034

[3] https://d3n8a8pro7vhmx.cloudfront.net/cip/pages/304/attachments/original/1631034552/OP17523_OSB_research_for_report.xlsx?1631034552

[4] See for example: https://onlinelibrary.wiley.com/doi/full/10.1111/hir.12320

[5] https://www.vice.com/en/article/ywe7gb/the-companies-cleaning-the-deepest-darkest-parts-of-social-media

[6] https://d3n8a8pro7vhmx.cloudfront.net/cip/pages/304/attachments/original/1607349964/OP15716_Compassion_in_Politics_-_Online_abuse_2.xlsx?1607349964

[7] https://d3n8a8pro7vhmx.cloudfront.net/cip/pages/304/attachments/original/1629381583/OP17523_Compassion_in_Politics_Online_Safety_Bill_Second_Wave_-_Tables_v2.xlsx?1629381583

[8] https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds