POSR0008
Written evidence submitted by Careful Industries
1.1. We welcome this inquiry from the Public Accounts Committee into the preparedness for online safety regulation. Our evidence considers four primary challenges that the government and Ofcom are likely to face with the Act’s delivery. Firstly, we consider the impasse between end-to-end encryption and the safety of children online. Secondly, we consider the over-reliance on age assurance technologies, which are not proven to be effective.[1] Thirdly, we consider issues regarding freedom of speech, and finally we provide our assessment as to whether the regulatory approach is sufficiently future-facing.
1.2. Our evidence is informed, in part, by the experience of our Executive Director, Rachel Coldicutt, as a non-executive director on Ofcom’s board until her resignation in September 2023, and by research that Careful Industries undertook for DCMS as part of a Safety Tech sector scoping project in 2022.[2]
1.3. It is worth restating that, despite the Bill’s intentions, absolute safety does not exist – neither online nor offline. The Government’s stated aim of making the UK ‘the safest place in the world to be online’ is impossible to achieve.
1.4. As we highlighted in prior evidence submitted to the Draft Online Safety Bill Committee, trends in technology – and social media in particular – develop quickly and can be difficult to predict.[3] The Act’s risk-based approach to legislation will make it difficult for Ofcom to respond swiftly to emerging harms.
1.5. Ofcom as a regulator has made good progress and we do not doubt the capability of Ofcom’s team. Sadly, the regulator’s preparedness is hindered by legislation which will be impossible to implement in full, as it relies on technologies that either do not yet exist or have not been proven to be effective.
1.6. The slow progress of the Online Safety Bill through Parliament means there is reluctance in some quarters to recognise this lack of effectiveness: understandably, there is a strong desire to commence the regulatory regime after so many delays. However, a refusal to engage with the actual capabilities of “safety tech”, which is itself an unregulated field, may fundamentally undermine the regulator’s ability to oversee and implement the delivery of a high-quality approach to online safety.
2.1. There remains an impasse between the Act’s stated aims to protect people from harms online and the use of end-to-end encryption (E2EE) in products and services. Despite the Bill’s passage into law, this issue has not been resolved, and Ofcom will face significant challenges in balancing this conflict.
2.2. Client-side scanning – a proposed means of detecting child sexual abuse material (CSAM) and other illegal content without breaking E2EE – is neither fully effective from a technical perspective, nor does it guarantee privacy.[4] For example, in the EU, Europol “pushed for unfiltered access to data… with a view to training AI algorithms.”[5] A Europol official is quoted as saying that “all data is useful and should be passed on to law enforcement… because even an innocent image might contain information that could at some point be useful to law enforcement.”
2.3. With the legislation adopting a risk-based, downstream approach to regulation, and with technologies such as client-side scanning currently proving ineffective, Ofcom will not be able to deliver absolute safety online. Ofcom is nonetheless likely to face significant political pressure to deliver, despite the fact that no feasible technologies currently exist.
2.4. It is important to remember that, as in the offline world, there is no guarantee of absolute safety for children or anyone else. While many risks can be removed or minimised, there is no such thing as a truly risk-free environment; pretending otherwise leads to the setting of unrealistic targets that cannot be met. The passing of the Online Safety Bill into law does not automatically make the UK “the safest place in the world to be online” if the accompanying regulatory regime cannot be effectively delivered.[6]
3.1. The Act relies heavily on age assurance technology to limit the potential harms to children online. These technologies, however – particularly those which rely on machine learning – are often inaccurate, easy to circumvent, and can entrench biases.
3.2. Research commissioned by the Information Commissioner’s Office recognises that “age assurance techniques are, at present, a nebulous concept with multiple different methods, approaches, measurement challenges and propensity to define.”[7] The report goes on to specify sixteen different methods of age assurance, ranging from the simplistic – email verification – to more complex and medicalised methodologies, including facial, iris and gait analysis.
3.3. Using artificial intelligence to assess age based on physical traits runs the risk of embedding bias into age assurance techniques. This is because the “defaults” in many data sets have been shown to overprioritise the physical characteristics of white males as the standard.[8] The lack of external standards and audits for these technologies – whose effectiveness is currently self-reported – means there is little transparency around either their inputs or the measures taken to debias them. As a result, the technologies meant to deliver assurance may themselves end up creating new “hidden harms”.
3.4. Age assurance technologies are neither inherently safe nor foolproof. For example, even where age assurance technology successfully prevents a 12-year-old from accessing a website, there remain plenty of alternative routes for that child to encounter harm online. These might include something as simple as picking up a device belonging to an older sibling or other family member, using a shared log-in, or using another family member’s identification to provide proof of age.
3.5. Moreover, as any parent knows, children can be extremely ingenious. Recent research commissioned by members of the Digital Regulation Cooperation Forum shows that, in practice, families tend to work around third-party age assurance methods,[9] particularly when systems are difficult to use or non-standard in their execution.
3.6. To ensure the safety tech sector is held to account, researchers, advocates, and members of civil society will “need to be well-funded and have formal channels of communication, influence and feedback with the safety tech sector.”[10] Similarly, a “safety tech standards and scrutiny body” should be created, “prioritising lived experiences of harm over industry representation.”[11]
4.1. Ofcom will face significant challenges in ensuring and demonstrating that the Act maintains freedom of speech, particularly if technologies such as client-side scanning are eventually deployed. As highlighted earlier in this submission, overreach by law enforcement agencies is already manifesting in the EU, where the Digital Services Act is in force.
4.2. It is entirely possible that this overreach could happen in the UK, particularly in light of the Government’s recent restrictions on the right to protest, previous proposals to revoke the Human Rights Act, and recurring suggestions that the UK should leave the European Convention on Human Rights. Such overreach is most likely to impact those who are most vulnerable and those from minoritised backgrounds and groups.
5.1. Ofcom – and the Government – will face major difficulties in making the UK “the safest place in the world to be online”. The claim will be difficult to sustain when things go wrong and vulnerable people are harmed.
5.2. Even assuming that the Government and Ofcom acknowledge that this aim is unachievable, it is notable that the National Audit Office states that Ofcom is not expected to reach full capacity on regulation and enforcement until 2025 at the earliest. The unpredictable development of technology, combined with a risk-based approach to legislation, means that the UK faces a regulatory approach which will not adapt quickly to new harms and is insufficiently future-proofed.
5.3. There is significant potential for new harms to emerge over the twelve months following Royal Assent. This is particularly likely in the lead-up to the next general election, and given probable (and unpredictable) new developments in generative AI, such as the use of chatbots. The risk-based approach means that new harms cannot be addressed swiftly.
5.4. Ofcom and the Government will need to communicate the timeline of regulatory enforcement clearly in order to avoid an erosion of public trust in online safety regulation.
October 2023
[1] Revealing Reality, “Families’ Attitudes Towards Age Assurance”, research commissioned by the Information Commissioner’s Office and Ofcom, September 2022
[2] Ipsos, Perspective Economics and Careful Industries, “Trust, Safety and the Digital Economy: The Commercial Value of Healthy Online Communities”, July 2022
[3] Rachel Coldicutt, “Written Evidence Submitted by Rachel Coldicutt, OBE (OSB0153)”, September 2021, https://committees.parliament.uk/writtenevidence/39327/html/.
[4] Shubham Jain, Ana-Maria Crețu, and Yves-Alexandre de Montjoye, “Adversarial Detection Avoidance Attacks: Evaluating the Robustness of Perceptual Hashing-Based Client-Side Scanning”, Proceedings of the 31st USENIX Security Symposium, 2022, 2317–34, https://www.usenix.org/conference/usenixsecurity22/presentation/jain.
[5] Apostolis Fotiadis, Luděk Stavinoha and Giacomo Zandonini, “Europol Sought Unlimited Data Access in Online Child Sexual Abuse Regulation”, Balkan Insight, 29 September 2023, https://balkaninsight.com/2023/09/29/europol-sought-unlimited-data-access-in-online-child-sexual-abuse-regulation/.
[6] DCMS, “Online Safety Bill: Supporting Documents”, 18 January 2023, https://www.gov.uk/government/publications/online-safety-bill-supporting-documents#what-the-online-safety-bill-does
[7] Age Check Certification Scheme, “Measurement of Age Assurance Technologies: A Report for the Information Commissioner’s Office”, 2022, https://ico.org.uk/media/about-the-ico/documents/4021822/measurement-of-age-assurance-technologies.pdf
[8] Thaddeus L. Johnson, Natasha N. Johnson, “Technology Can’t Tell Black People Apart: AI-powered facial recognition will lead to increased racial profiling”, Scientific American, 18 May 2023; Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of Machine Learning Research, 81:1–15, 2018; Abeba Birhane and Vinay Prabhu, “Large image datasets: A pyrrhic win for computer vision?”, 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021
[9] Revealing Reality, “Families’ Attitudes Towards Age Assurance” (see note 1)
[10] Rachel Coldicutt, “Three Recommendations to Improve the Safety of Safety Tech”, Medium, 2 December 2022, https://rachelcoldicutt.medium.com/three-recommendations-to-improve-the-safety-of-safety-tech-7bbf956ca215.
[11] Ibid.