POSR0006

 

Written evidence submitted by Dr Lindsay Balfour, Professor Adrienne Evans, Dr Marcus Maloney, Dr Sarah Kate Merry, Centre for Postdigital Cultures, Coventry University.

 

Context: Our research is concerned with how people “are increasingly living their lives online”[1], but also how the online and offline worlds are gradually becoming indistinct. In 2023, we produced a report, Postdigital Intimacies for Online Safety, based on research undertaken in collaboration with 12 cross-sector partners[2]. The report sought to respond to the Online Safety Bill as it was going through Parliament, with the intention of identifying strengths, weaknesses, and opportunities in regard to: digital health technologies; technology-enabled and image-based abuse; boys, men, and ‘toxic communities’; and mental health and vulnerabilities. As Baroness Morgan wrote in the foreword to Postdigital Intimacies for Online Safety, “the recommendations in this report are useful not just at this moment as the legislation is, hopefully, approved but also as the lengthy implementation process unfolds”.

 

Recognising the importance of our report to the implementation of the Bill, we submit evidence on the UK’s preparedness for online safety regulation, in response to the National Audit Office’s report on this topic. Our findings complement Ofcom’s research focus on service providers, whilst further demonstrating the significant knowledge available across sectors, including the tech industry, charity organisations, and those that provide advocacy and support, as well as academic knowledge. The evidence we document below is not only representative of our cross-sector engagement, but is flexible and adaptable to the changing landscape of digital life and culture. We present this evidence through key recommendations from our report, highlighting the particular implications for this inquiry.

 

 

Key recommendations

 

  1. Regulations will need to be responsive to an adaptable and dynamic context that recognises the interconnectedness of the online, the digital, and the technological.

 

Our participants noted the crosscutting ways in which harms and safety are generated, shaped not only by social media or online platforms, but by digital technology and devices and their interaction with material worlds. For example, what is colloquially referred to as ‘cyberflashing’ was included in the Bill, with the urgency around this emerging from Covid-19 lockdowns and the heightened reporting of such abuse taking place in that context (although of course the practice of cyberflashing existed years before). Such images are often taken on mobile phones, shared via Wi-Fi, Bluetooth or other short-range connectivity, and have the capacity to interact with forms of public sexual harassment, in ways that make the abuse at once online, digital, technological, and part of ‘real life’.

 

In the future, statutory interventions will need to be developed which provide greater regulatory oversight of newer and developing harms, including: deepfake video and photography; doxing; AI-enabled abuse; and the risks and harms facilitated by immersive, VR, and ‘metaverse’ technologies and platforms. As recognised by our partner Suzy Lamplugh Trust in our report, stalking and fixated/obsessive behaviours are part of an online pattern of behaviours, but they are also part of the landscape of GPS, tracking, and Bluetooth technology that operates beyond, or without access to, Wi-Fi. AI-enabled abuse will need particular attention in relation to who is causing harm, and how we seek to regulate it. In the short term, this will require non-statutory guidance from the research community.

 

Therefore, our recommendations are that a) part of the preparedness for regulation is a commitment to the larger picture of what ‘harm’ and ‘safety’ mean in a digital age, b) Ofcom, along with a wider community of stakeholders, will need to engage in constant fact-finding and research on new and emerging technology, and c) particular attention should be given to AI-enabled abuse as this develops. Adopting these recommendations will ensure the UK has inbuilt structures for regularly assessing and maintaining definitions and oversight of developing technology and the consequences of our increasingly complex landscapes of, and capacity for, online, digital, and technological abuse.

 

 

  2. There will need to be robust and transparent assessment criteria for technologies, apps, and platforms.

 

The UK’s preparedness for online safety regulation will need to consider the design stage of service providers’ contributions to wider society, in a way that forefronts safety. Reporting on which services are taking their commitment to users’ safety seriously should be undertaken, and good practice needs to be shared across the sector. As reported to us by our partners, this will ensure that regulation does not become only a matter of removing harmful content, but also of preventing that harmful content from existing in the first place.

 

Where harm does occur, safety regulations need to ensure that reporting is robust. Our participants noted that, especially in the case of gender-based violence, the act of reporting can have retraumatising effects. This is particularly the case when reporting mechanisms place exceptionally high thresholds on proving that harm or repeated harmful behaviour has taken place. For some, even successful reporting fails to fully resolve harm, for example in cases discussed by the Revenge Porn Helpline, where prosecution did not result in the destruction of images or video, which could therefore be used again to perpetuate future abuse.

 

Our first recommendations, therefore, are that Ofcom a) devises reporting regulation for online safety that is easy for the public to understand, b) identifies what could be considered reasonable proof while protecting the wellbeing of those reporting harm, and c) ensures the harm caused is not repeated – either through the reporting process or through harmful material still circulating. Regulatory criteria must be developed to address these issues in emerging apps and platforms.

 

We note here too that there is a void of policy surrounding the digital health industry. This has potentially harmful consequences for women in particular, who are adopting intimate digital health technologies (i.e. ‘FemTech’) in their millions, even while users have identified concerns. For example, the Information Commissioner’s Office (ICO) has recently announced an inquiry into the dubious data privacy records of the FemTech industry. Early polls commissioned by the regulator suggest that over half of users are concerned about how their data is being used, and the ICO has issued a Call for Evidence, urging users to share their experiences.

 

Echoing the Royal Society for Public Health’s ongoing Scroll Free September Campaign, our partners also voiced concerns about the more generalised ways in which the fundamental design of apps encourages ceaseless and addictive levels of engagement, thereby exacerbating other issues we highlight here. Amendments to the Bill have identified the need for research on the role of app stores in facilitating harm; however, we believe that the Bill has missed an opportunity to call for more safety regulation in an industry where policy is sorely lacking. Our recommendation is that such regulation is urgently needed, and can be enabled much earlier with evidence already available from within the research community.

 

 

  3. User empowerment features included in the ‘triple shield’ should be on by default on all platforms.

 

The triple shield, which replaced the adult safety duties (or ‘legal but harmful’ provisions) within section 12 of the Bill, requires platforms to remove illegal content, remove content against the site’s own terms and conditions, and to provide adult users with tools enabling them to avoid potentially harmful content. These tools will be off by default, which means that adult users must make the decision to protect themselves; this continues to be the case despite recommendations to the contrary, made both within and outside Parliament. As a result, adults who do not recognise their own vulnerabilities, including as a result of mental health problems or neurodivergence, may be more likely to encounter harmful material.

 

An amendment agreed in the House of Lords, prior to the passing of the Bill, requires platforms to force users to make a decision to opt in or opt out before using a specific feature or service[3]. However, this does not resolve our concerns about the potential for harm if vulnerable adult users do not, or cannot, choose the highest levels of protection for themselves. Our recommendation, as in our report, is that there should be a regulatory requirement for these adult empowerment tools to be on by default.

 

User empowerment should also include more robust regulation of data collection practices, even in cases where there is no ‘harmful’ content. This is especially the case in digital health where intimate personal information is often given in order to access content or services, but then may be shared with or sold to third parties. OSB implementation must: a) include more regulation of privacy policies to ensure they are robust and transparent; b) require providers to request consent for collection and use of user data on an ongoing basis, not just on users’ initial agreement to terms and conditions; and c) legally oblige service providers to develop better safeguarding mechanisms for the collection and storage of personal and sensitive data.

 

 

  4. Signposting for harm reduction and risk assessment strategies within implementation efforts must go further in challenging harmful cultural practices.

 

While the Bill places greater onus on platforms and digital service providers to moderate harmful content, our analysis of the sector suggests that online safety and regulation needs to take a more proactive approach. This is recognised in the National Audit Office’s report, which notes the “statutory duty to promote media literacy so that the public become informed digital decision-makers and so can protect themselves and others against harmful content” (p.33). Promoting such media literacy was a key recommendation of our report. Based on our research, we suggest that individual protection (the user protecting themselves and others) also rests on challenging a wider culture of toxicity, including racism, homophobia, transphobia, ableism, sexism, and misogyny.

 

In terms of fostering a ‘critical media literacy’, we would recommend that this includes a commitment to increase users' capacity to interpret and evaluate information online and to make informed choices around both content consumption and appropriate online behaviours. It should also recognise the wider social structures and how they influence and shape harm in online spaces. Specifically, we recommend that digital literacy efforts be built into the implementation strategy of the Bill by mandating that service providers a) offer signposting for harm reduction and risk assessment strategies, and b) include literacy resources, toolkits and/or reporting mechanisms within their platforms. We also recommend that Ofcom incorporate multi-sector voices and perspectives into the three-year delivery plan for media literacy.

 

 

  5. There should be fuller recognition of violence against women and girls (VAWG) and challenges to the wider culture of sexism and misogyny that shapes digital gender-based violence.

 

As recognised by our partners, VAWG is endemic, and a key national threat in both digital and non-digital worlds. We applaud the decision to amend the Bill to include risks to women and girls online, and the requirement for Ofcom to consult with the Domestic Abuse Commissioner and Victims’ Commissioner to ensure victim-survivor voices are represented when implementing online safety guidance. We welcome the additional four ‘revenge porn’/intimate image abuse offences, and the inclusion of consent in the Bill’s articulation of some of these offences.

 

As other groups have noted, including EVAW and Refuge, this is a step in the right direction – although the amendments do not currently meet the VAWG Code of Practice. We note specifically that these amendments propose “measures that services can take to reduce the risk of harm to women and girls, and which demonstrates best practice”[4], a suggestion that was made in our report when partners talked about providing examples of ‘gold standard’ digital infrastructure for addressing VAWG. Those discussions were oriented around the use of safety-by-design approaches, including forms of co-design and co-production, which could usefully be drawn on in identifying best practice. Thus, our recommendations include that a) Ofcom should fully adhere to this Code of Practice when drawing up or amending its guidance for protecting women and girls online, and b) ways of implementing ‘safety-by-design’ are explored in articulating measures that service providers can take to reduce VAWG.

 

There are further unaddressed issues of how implementation will be shaped by AI and/or human moderation, and the attendant risks and opportunities of both in relation to VAWG. In addition to the recommendations above, we therefore also suggest that implementation recognises the problems of AI content moderation. On the one hand, there is a need to acknowledge that AI is built on societies that are producing forms of sexism, misogyny, and gender-based violence – with AI harms and AI-facilitated abuse being a growing concern. This may reproduce inequalities shaped by protected and combined characteristics, censor non-normative bodies and identities, and neglect to protect vulnerable groups. On the other hand, since VAWG is endemic, human moderation alone may be insufficient to deal with the volume of harmful content. A nuanced discussion and application of content moderation will need to forefront VAWG as a key concern to ensure a fairer and safer digital culture for all.

 

 

Significance of recommendations

 

The evidence submitted above reflects a truly cross-sector approach to the implementation of online safety. Our work has brought together stakeholders from the advocacy, academic, policy, healthcare, and technology sectors to demonstrate the significance of collaborative approaches to the implementation of the Bill. While Ofcom is the named regulator, our research shows that multiple perspectives will have to be brought in to ensure buy-in from both users and platforms. The implementation of the OSB must be robust, but flexible enough to account for inevitable shifts in online and digital life and the often rapid and contradictory ways in which technological cultures and relationships change.

 

October 2023


[1] National Audit Office (2023, 6 June). Preparedness for online safety regulation: Ofcom, Department for Science, Innovation & Technology. Report, National Audit Office.

[2] Southwest Grid for Learning (SWGfL), Organisation for the Review of Care and Health Applications (ORCHA), BigHealth, Bumble, Revenge Porn Helpline, Suzy Lamplugh Trust, Men’s Health Forum, MensCraft, Chilypep, BAM Construction, Samaritans, and Carnegie UK Trust.

[3] https://bills.parliament.uk/bills/3137/stages/17765/amendments/96063

[4] https://www.gov.uk/government/news/online-safety-bill-bolstered-to-better-protect-children-and-empower-adults