Written evidence submitted by SWGfL (OSB0054)

The following is a written submission from SWGfL - Safety & Security Online

SWGfL is a not-for-profit charity ensuring ‘everyone can benefit from technology free from harm’. Forming one third of the UK Safer Internet Centre, part of the European Insafe network of national centres, our experts advise governments, schools, public bodies, European agencies and industry on appropriate policy and actions to take in regard to safeguarding and advancing positive online safety.

SWGfL has been at the forefront of online safety for the past two decades, delivering engaging presentations and training to a wide variety of audiences nationally and internationally. Our work has brought online safety to the fore of public attention, ensuring everyone can develop their understanding of what online safety truly means in an ever-changing world.


General Comment

A general comment relating to the draft Online Safety Bill is its complexity.  Whilst SWGfL appreciates that drafting legislation is by its very nature complicated, using readability tools[1] on the Bill produces a score of 22.14, meaning it typically requires 22 years of formal education to understand.  Coupled with this, the extent of cross-referencing means it is easy to become lost in the text.
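For illustration, the kind of readability score referred to here (the Gunning Fog index) can be sketched in a few lines. This is a minimal approximation only: the vowel-group syllable counter is an assumed heuristic of ours, so its scores will only roughly track those of the online tool cited at [1].

```python
import re

def gunning_fog(text: str) -> float:
    """Approximate Gunning Fog index:
    0.4 * (average sentence length + percentage of 'complex' words),
    where a complex word has three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)

    def syllables(word: str) -> int:
        # Rough heuristic: each run of consecutive vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

On this basis a score in the low twenties, as reported for the Bill, corresponds to text comprehensible only after roughly that many years of formal education; plain-language guidance typically aims far lower.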

It will be imperative to produce accessible text, especially given that the beneficiaries of the Online Safety Bill extend to children as well as other vulnerable people.


Does the draft Bill make adequate provisions for people who are more likely to experience harm online or who may be more vulnerable to exploitation?


Does the draft Bill give sufficient consideration to the role of user agency in promoting online safety?


SWGfL considers three specific omissions: victim support, impartial dispute resolution and age verification, alongside further comments about reporting/complaints procedures.

1)      Victim Support

The Online Safety Bill must make provision for supporting victims of online harms

Over the last 12 months SWGfL has seen a significant increase in cases reported to both the Revenge Porn Helpline and Report Harmful Content.  To exemplify this, the following articulates annual comparisons for the Revenge Porn Helpline:

[Images removed: Revenge Porn Helpline annual comparison figures]
Much of this widely reported increase is associated with heightened awareness and with impacts aggravated by Covid lockdown restrictions.  That said, the caseload being managed by the Revenge Porn Helpline in 2021 continues to increase and is anticipated to reach over 4,500 by the end of the year (the year-to-date caseload for 2021 will surpass the 2020 total in September).


It is clear from evaluation data from both the Report Harmful Content and Revenge Porn helplines that the impact of harmful content is extremely distressing and can have a devastating effect on mental health.  The following extract from the Report Harmful Content Annual Report 2020 | SWGfL highlights this issue:


The high proportion of clients experiencing negative mental health impacts as a result of witnessing harmful content online is concerning. 32% of total clients reported negative mental health impacts. This figure rose to 43% for clients affected by trend one. Of that 32%, 13% of clients described feeling suicidal. For example, one client was being repeatedly harassed by a relative over social media. She had tried to report her issue to the police, with no success. When she made a report to RHC she was desperate. She told practitioners: ‘I have (already) tried to commit suicide with an overdose but she is still carrying on I don’t know what to do anymore other than another overdose’.


Aside from suicidal ideation, other reported mental health impacts included distress (70%), anxiety (52%), a decline in social functioning (36%), depression (27%), agoraphobia (5%) and post-traumatic stress disorder (4%). 18% of clients experiencing negative mental health impacts had sought medical treatment (e.g. medication or therapy).


In addition to causing new mental health problems, harmful online content was described as exacerbating existing mental health issues. For example, one client had recently left an abusive relationship. Her ex-partner created numerous fake social media profiles in her name, with the aim of continuing his harassment of her. She told practitioners:


‘I had PTSD because of him and this had settled with a lot of therapy, but has recurred since all this online abuse started again’.


Often, social media had been a positive coping mechanism for clients who were already mentally unwell. Being targeted online threatened this coping mechanism. One client, who was being harassed over social media, told practitioners: ‘I (was) already on medication for my depression and suicide attempts...I don't go online to be abused. As someone with agoraphobia…it is my only way to interact with friends and the wider world. I can feel this slipping away right now’. Finally, mental health impacts went beyond just the ‘victim’ and could also be seen to affect family and friends who reported on their behalf. One friend, acting as an advocate, told practitioners: ‘We are worried for her well-being. She has a history of self-harm and attempted suicide. Unless her ex can be stopped and/or forced to remove the videos I fear for her well-being, let alone my own mental state. I'm currently signed off with depression and anxiety because of this…I'm at my wits end and close to a full emotional breakdown’.


The Online Safety Bill must make provision for victim support services, such as the Revenge Porn Helpline and Report Harmful Content.  SWGfL suggests there is an opportunity to scope sustainable funding for such services, perhaps under the control and governance of the regulator.

2)      Impartial Dispute Resolution

The Online Safety Bill must make provision for impartial dispute resolution

SWGfL firmly believes that an impartial dispute resolution service for online harmful content is imperative; this is the reason it established Report Harmful Content (RHC) in 2019.  First proposed in 2012, the service was established in recognition that there was no opportunity for users to appeal decisions made by online platforms.  Indeed, many other sectors (financial, water, data, local government) all have impartial appeals processes. It is the opinion of SWGfL that online users, especially victims of legal but harmful content, previously had no avenue for independent redress or the opportunity for impartial appeal.

Having formally launched in 2019, RHC is operated by SWGfL as part of the UK Safer Internet Centre and was established using European Commission (Connecting Europe Facility[2]) co-funding.  RHC is free for users to access and use.

In addition to providing users with advice and direction to support their reporting of harmful content, RHC provides impartial dispute resolution for those users (over the age of 13) who have already submitted a report to industry but have a complaint, query or dispute about the response they received. RHC reviews the case and the industry response against platform-specific reporting procedures and community standards in order to either:

a)      provide an explanation and validation of the industry response (typically why the content does not infringe the platform’s standards and was not removed), or

b)      determine that the industry response was incorrect and make representations to the platform for action

Impact of Impartial Dispute Resolution (taken from Through These Walls, RHC Annual Report 2021 | SWGfL)

Report Harmful Content is clearly meeting its objective of helping everyone to report harmful content online. It deals with reports from a range of demographics, across a number of platforms. As is evident from this report, RHC practitioners deal with a wide variety of online harms, the majority of which overlap with other harms and issues, both on and offline. The value of the service lies in the way in which it addresses online harms, not in isolation, but holistically. This is evident through the way in which practitioners draw upon a range of escalation options, support services and referral routes in order to offer support that is uniquely tailored to individual cases.

Not only is RHC effective at tackling the complexity of online harm, it is also efficient. The high percentage of content which was successfully actioned by industry, alongside the rapid response rate of industry to practitioners, clearly demonstrates this. Furthermore, the low percentages of clients who got back in touch with RHC after being offered advice and/or signposting can be taken as evidence that practitioners are providing precise instructions to clients to deal with a range of online harms and issues. The high level of referrals to RHC from the police, alongside the openness for police to work on cases in conjunction with practitioners, demonstrates the way in which RHC is becoming a trusted service to be used in conjunction with official criminal procedures.

Finally, the steady growth in reports as the year progressed evidences the clear and increasing demand for this service. The diversification in reports towards the end of the year also evidences the spread of demand across a broader range of issues. RHC practitioners are keen for the service to expand and develop; however, they are currently working at full capacity. To this end, an increase in funding is desperately needed to meet existing demand and to equip practitioners to deal with the widening range of cases.

SWGfL recommends that platforms (within scope) should be required to operate an impartial dispute resolution procedure; one with fairness, accessibility and transparency and that this should be the job of an impartial third party.  Whilst it is possible that providers could operate a dispute resolution service internally through procedural separation, by definition the impartiality and adjudication will not always be transparent to the user or external observers.  Explicit external separation (the use of a third party) would be required to demonstrate this separation and impartiality to achieve fairness and transparency.

In terms of access to an impartial dispute resolution procedure, SWGfL would signpost parallel examples such as the financial sector.  When the conclusion of a case or claim is provided to the complainant, accompanying information signposts to the Financial Ombudsman should they intend to appeal the decision.  SWGfL suggests that platforms would want, and should be required, to signpost to the impartial dispute resolution procedure at the equivalent point.


3)      Age Verification

The Online Safety Bill must make provision for age verification

Whilst recognising the limitations of age assurance/verification processes, SWGfL supports their introduction, particularly for regulating access to adult (18+) content, and would expect to see this introduced as part of the Online Safety Bill.  SWGfL considers age assurance and age verification to be important tools, primarily to protect younger children with mild curiosity and to limit accidental exposure.

SWGfL contributed extensively to the BBFC in its preparations for the introduction of the Digital Economy Act, specifically by offering predictions of the likely consequences of age checking.

SWGfL supports the risk-based approach to age assurance and age verification systems proposed by Ofcom as part of the VSP regulations, rather than focusing on platform size.  Whilst, quite rightly, the size of the platform should be a significant contributing factor, SWGfL has long had the concern that merely applying age verification to the most popular services (commercial adult content platforms) will have the effect of driving users to other, smaller platforms; platforms with perhaps less developed policies, fewer resources and fewer capabilities.  (Please note, this comment does not assume that all the larger platforms have adequate resources or policies.)


SWGfL has the following comments regarding reporting / complaints provision

It is encouraging to see user reporting and redress duties being defined within the draft Bill. SWGfL supports the duty for providers to operate a service using systems and processes that allow users and affected persons to easily report content, alongside ‘a duty to operate a complaints procedure’.  However, SWGfL is not clear on the difference between a ‘content reporting process’ and a ‘complaints process’ as defined in the draft Online Safety Bill.

SWGfL would encourage clarity here to avoid confusion.

SWGfL has further comments on the adequacy of the draft Online Safety Bill’s coverage of reporting and complaints procedures.

When thinking about reporting or complaints procedures, trust and confidence are important.  Whilst Ofsted’s recent review of sexual abuse in schools[3] was focused on schools, it discloses children’s attitudes to, and the drivers behind, reporting issues relating to sexual abuse and sexual harassment.

“The most common reason that the children and young people who answered our survey gave for not reporting an experience was not knowing what would happen next.”

“In focus groups, children and young people told us that deciding whether to report an incident depends on the perceived severity of the incident. For example, children and young people thought they would be listened to if they reported ‘serious’ incidents but would be less likely to report what they see as ‘common’ incidents, such as ‘being asked for nudes’ and ‘comments from boys in corridors’. This is largely because they feel that some of the incidents are so commonplace ‘there’s no point’ reporting them. Some forms of sexual harassment and online sexual abuse have become so normalised for children that they do not see the point in reporting and challenging this behaviour.”

To further qualify this point, users of services such as TikTok, Roblox and Snapchat are generally younger (Khoros, 2020; LSE, 2018), and there is evidence that this age group views harmful content online as normal and inevitable (Lavis, 2016; Marchant, Hawton, Stewart, Montgomery, & Singaravelu, 2018). The solution lies in research, education and greater awareness-raising; the Online Safety Bill has an opportunity to require providers to commit resources to research, education and awareness in order to build trust and confidence within their communities.

To reiterate the point raised regarding the use and clarity of language: reporting and complaints procedures should be designed and constructed for the minimum age of their users; i.e. if 13-year-olds are able to use a service, the language used should be realistically understood and comprehended by a 13-year-old (measured using reading indices, e.g. http://gunning-fog-index.com/).


What would be a suitable threshold for significant physical or psychological harm, and what would be a suitable way for service providers to determine whether this threshold had been met?


Is the “duty of care” approach in the draft Bill effective?


SWGfL has long highlighted the experience of the duty of care applied to statutory organisations via the Children Act 2004, and considers there is significant merit and benefit in learning from this experience.  The impact is clear to see in any school: the sense of safeguarding children is typically obvious and palpable.


Determining Harm


As part of this, the process for determining harm is derived through Child Safeguarding Practice Reviews[4] (previously Serious Case Reviews) that exist “to identify improvements to be made to safeguard and promote the welfare of children”. 


With experience of operating these processes over 17 years, “the purpose of reviews of serious child safeguarding cases, at both local and national level, is to identify improvements to be made to safeguard and promote the welfare of children. Learning is relevant locally, but it has a wider importance for all practitioners working with children and families and for the government and policymakers. Understanding whether there are systemic issues, and whether and how policy and practice need to change, is critical to the system being dynamic and self-improving.”


“Where a local authority in England knows or suspects that a child has been abused or neglected, the local authority must notify the Child Safeguarding Practice Review Panel if –

a)      the child dies or is seriously harmed in the local authority’s area, or

b)      while normally resident in the local authority’s area, the child dies or is seriously harmed outside England”


SWGfL suggests that the Online Safety Bill should clearly define a similar reflective, evaluative process to operate alongside parallel investigations and processes (e.g. data regulation).

Senior Manager Accountability

SWGfL supports the inclusion of senior manager accountability.  As an independent charity owned by 14 local authorities, SWGfL recognises the impact of the accountability of senior managers introduced by the Children Act 2004.  As an example, the statutory obligations of Directors of Children’s Services[5] are clearly defined and arguably ensured that adequate safeguarding resources and provisions were introduced.  Whilst appreciating that commercial platforms operate in a different environment with different drivers, SWGfL suggests that having identifiable individuals with specific accountabilities to prevent harm to users (particularly vulnerable users), alongside financial penalties, will ensure change occurs.




Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content?   What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?


Summary response: SWGfL considers that defining the process for determining harm is more important than the definition of online harms.


As previously highlighted, it would be important to learn from the experience of Child Safeguarding Practice Reviews as a process of retrospectively determining harm and making recommendations so that lessons are learned and future harm avoided.


Whilst systems and processes are most important, it is still important to provide a frame of reference for harmful content to aid illustration.


Harmful content is anything online which causes a person distress or harm.  This encompasses a huge amount of content and can be very subjective depending on the viewer; what may be harmful to one person might not be considered an issue by someone else. 


SWGfL has applied significant resource over the last 10 years as part of its contribution and obligation to the UK Safer Internet Centre, including operating the Professionals Online Safety Helpline, which has played a vital role in identifying and challenging online trends specifically targeting women and girls, such as ‘baited pages’.  Of particular relevance here is the work involved in establishing and operating ReportHarmfulContent, including its definitions of harmful content.  It is important to note that ReportHarmfulContent relates to content that is legal but still harmful.


ReportHarmfulContent has identified the following eight types of online legal but harmful content:



Why these eight?

We studied the community guidelines of several different platforms and these areas of content are likely to violate terms. Also, based on our previous experience running two helplines, The Professionals Online Safety Helpline and The Revenge Porn Helpline, we know we can offer further specialist advice and support in these areas.


Clearly this classification of legal but harmful content extends beyond the classifications of illegal harms (child sexual abuse, terrorist content and online hate).


Defining harm is helpful in adding clarity and transparency; however, interpretation of harms can be subjective: what may be harmful to one person might not be considered an issue by someone else. In addition, SWGfL continues to see new harms emerge and existing harms evolve, for example the significant increase in extortion cases received by the Revenge Porn Helpline during Covid restrictions.  Any definition of harm will need to be continually reviewed and refined.


SWGfL also considers standards an important element.  As an example, since 2016 and as part of the UK Safer Internet Centre, SWGfL has defined Appropriate Filtering and Monitoring | Safer Internet Centre for schools and colleges.


Schools (and registered childcare providers) in England and Wales are required “to ensure children are safe from terrorist and extremist material when accessing the internet in school, including by establishing appropriate levels of filtering[6]".


Furthermore, it expects that they “assess the risk of [their] children being drawn into terrorism, including support for extremist ideas that are part of terrorist ideology”.  There are a number of self-review systems (e.g. www.360safe.org.uk) that will support a school in assessing its wider online safety policy and practice.


Department for Education’s statutory guidance ‘Keeping Children Safe in Education[7]’ obliges schools and colleges in England to “ensure appropriate filters and appropriate monitoring systems are in place. Children should not be able to access harmful or inappropriate material from the school or college’s IT system”; however, schools will need to “be careful that ‘over blocking’ does not lead to unreasonable restrictions as to what children can be taught with regards to online teaching and safeguarding”.

Included within the Scottish Government national action plan on internet safety[8], schools in Scotland are expected to “have policies in place relating to the use of IT and to use filtering as a means of restricting access to harmful content.”

UK Safer Internet Centre published these definitions to support schools and providers in determining what should be considered as ‘appropriate’ filtering and monitoring. 




Are the media literacy duties given to Ofcom in the draft Bill sufficient?


SWGfL has long respected the media literacy role Ofcom has held.  SWGfL also recognises the vital importance of media literacy and that all opportunities should be taken to improve the media literacy of users.  Looking at international comparisons of children’s media literacy, it would appear that the UK has much progress to make.  First published in 2020, the DQ Institute’s Child Safety Index concluded that, of 30 countries, the UK ranked 23rd for Digital Competency.


There are many examples of innovative approaches to this.  Over the past few years, and in relation to children’s media literacy, SWGfL has invested significant resource in considering the landscape and progress over the last decade.  Writing in an article in 2017, Ken Corish concluded that:

The “digital natives; digital immigrants” postulate is a myth; it died from the moment it attempted to describe young people’s attitudes to online technology. Behaviour and technology have both moved on and so should our thinking.

As our children learn to make their way in the world, we provide environments where they can learn to take risks in a managed and supported way; we encourage risk to allow children to fail constructively, whether that is offering answers in class or abseiling down a rock face for the first time. There are mechanisms to support, educate, improve and intervene on the rare occasions that lead to harm. For the most part, these educative experiences are built on prior knowledge with direction and progression.

The behaviours we see emerging from the online lives of young people are for the most part indigenous and a product of the environment in which they find themselves, and historically have had little or no guidance or intervention that effects change.

Change that empowers; change that builds resilience to harm; change that creates a culture that migrates naturally towards the positive rather than the transient, easy or unempathetic.

The legacy messages around online safety may satisfy our obligations to teach in this area but there is little evidence that they have effected any real cultural change. Children are good at barking back the messages you have covered in the lessons but evidence suggests it doesn’t change things. Most are borne from a negative philosophy:

They are messages that don’t even resonate with us, let alone children and young people swimming in this online ocean every day of their lives. They were of a time; they require more depth and sophistication if they are to engender the right conversations and positive outcomes.


SWGfL is critical of those who solely employ scare stories or shock tactics to educate about online harms.  Presumably the premise of this approach is that if children can recognise what harm looks like, they can avoid it.  This does not work in isolation.  Using the parallel of driving, learner drivers are not merely sat down and shown films of car crashes to equip them to drive a car.


SWGfL’s ProjectEVOLVE (Education for a Connected World resources) initiative built on this thinking and perspective, articulating and modelling age-appropriate digital skills as well as enabling teachers (and parents) to better understand and evaluate the knowledge and understanding of their children, rather than just applying programmes and resources.


There are significant opportunities for VSPs to support existing awareness-raising initiatives, for example Safer Internet Day.  In 2021, Safer Internet Day reached 51% of UK children aged 8-17 as well as 38% of parents (Safer Internet Day 2021 Impact Report | Safer Internet Centre).


Whilst SWGfL very much supports Ofcom’s approach to media literacy, given that education and child protection are devolved matters, it would be helpful to understand how the Online Safety Bill’s powers and Ofcom’s role will integrate with and complement the media literacy objectives and efforts of the four national education and curriculum bodies, alongside the Online Media Literacy Strategy.


20 September 2021



[1] Gunning Fog Index (gunning-fog-index.com)

[2] Safer Internet Centres | Shaping Europe’s digital future (europa.eu)

[3] Review of sexual abuse in schools and colleges - GOV.UK (www.gov.uk)

[4] Working together to safeguard children - GOV.UK (www.gov.uk)

[5] DFE stat guidance template (publishing.service.gov.uk)

[6] Revised Prevent duty guidance: for England and Wales - GOV.UK (www.gov.uk)

[7] Keeping children safe in education - GOV.UK (www.gov.uk)

[8] Internet safety for children and young people: national action plan - gov.scot (www.gov.scot)