Written Evidence submitted by Vodafone UK (OSB0015)

 

Introduction

 

As reliance on the internet has increased, a shift highlighted during the pandemic, the risks posed by the online world have grown and continue to evolve. This has been particularly the case when it comes to the use of the internet to share illegal, harmful and misleading content, to harass other users and to harm vulnerable people, children and young people.

 

Vodafone has been at the forefront of keeping our customers safe online for many years. We do this in a number of ways. We have robust parental controls in place to protect children from seeing inappropriate content; we are members of the UK Council for Internet Safety and the Internet Watch Foundation; and we publish the widely respected Digital Parenting Magazine, aimed at helping families stay safe online.

 

We welcome the draft Online Safety Bill and the opportunity to outline our position to the pre-legislative scrutiny committee. We are keen to work closely with Government, Ofcom and industry to deliver on the shared ambition to make the UK the safest place in the world to be online. We also support and agree with the points raised in the response submitted by MobileUK. 

 

Scope and Regulator

 

We fully support the central role of regulation in protecting people from online harms. However, since the debate about internet safety began, much of the regulatory burden has fallen on those running the networks over which internet services are provided, with measures such as network-level blocking often turned to as the preferred solution.

 

It has been recognised for some time that the companies providing the platforms on which content is widely shared and consumed also have a crucial role to play. We believe the proposed regulation strikes the right balance on the services in scope and we therefore support the proposals outlined in the draft bill.

 

The UK has already taken an important step in introducing regulation in this space by establishing the Digital Markets Unit at the Competition and Markets Authority. The time is right for a robust model of intervention and we therefore welcome the appointment of Ofcom as the regulator for Online Harms. Ofcom is best placed for this role given its existing experience in the digital and broadcasting sectors, and its appointment provides the opportunity for joined-up thinking across these areas.

 

It is imperative that, once established in this role, Ofcom is kept strictly independent from Government, especially given the impact of the legislation on free expression. We agree with the concerns raised by other organisations about the provisions in the draft bill that give the Secretary of State powers to make changes to the content code of conduct, as this ultimately undermines the process and the statutory independence of Ofcom.

 

Duty of Care

 

The duty of care envisioned in the draft Online Safety Bill for providers of user-generated content services and search engines has a number of advantages. Firstly, it will impose a new culture of responsibility on online platforms for the safety and wellbeing of their users, borrowing heavily from the application of a duty of care in health and safety legislation. Whereas to date social media platforms have been shielded from liability for illegality on their services, and indeed actively disincentivised from discovering illegal material for fear of gaining actual knowledge and thereby liability, under the new regime those services will have an active duty to assess and mitigate risks for users of their services, in particular children.

 

Another key advantage of the duty of care is its focus on systemic issues and platform design, as opposed to attempting to define and proscribe specific categories of illegal content. By imposing a duty of care on digital services, the regulation should in theory result in platforms being designed in a way that does not artificially promote, or better still actively suppresses, illegal and harmful material, without engaging those platforms in fraught questions about specific categories of content or individual posts.

 

The duty of care does have some drawbacks, however. Firstly, we are concerned that there may not be effective oversight and enforcement of these new powers. The creation of codes of conduct to assist providers in complying with the duty of care presents both a risk and an opportunity in this context. The main risk is that service providers are able to co-opt the drafting of these codes in a way that weakens their impact and does not deliver a real improvement over the current self-regulatory approach. The codes will also need to be regularly reviewed and updated to accommodate the development of new services and features available to platform users.

 

Another deficiency of the duty of care concerns user-to-user conduct. While it is reasonable to expect platforms to owe certain duties towards their users, it is less certain what duties those platforms should have in relation to the behaviour of their users towards each other. This is particularly relevant in the context of harmful and illegal acts by users such as cyberbullying, hate speech and racist harassment. The regulator will need to establish clear guidelines for online platforms in this regard, as it is not immediately clear what responsibilities should attach to direct interactions between users: for example, should a platform block direct interactions that contain certain flagged terms or media?

 

Content moderation

 

To a large extent we believe the Bill delivers on the intention to focus on systems and processes, which is an effective approach to moderating content. We agree that the focus on systemic issues and platform design is the right approach to content moderation, rather than attempting to define, categorise and proscribe every type of illegal material. We especially welcome the move towards risk assessment and mitigation, including the requirement that services subject to the duty of care are developed with safety-by-design as a core principle. This is the best way to ensure that illegal and harmful material is not only removed once flagged, but is not disseminated to a wide audience in the first place.

 

We also welcome the focus on algorithmic recommender systems, which are important drivers of attention towards specific forms of content. It is vital that illegal and harmful material is not included in such recommender systems, even where the system knows it drives engagement and interaction, as this can result in wider dissemination of the material than would otherwise be possible under an organic or neutral model, in the creation of social media filter bubbles, and in the perpetuation of alarmist and extremist messaging and ideology.

 

International comparisons

 

Comparing the draft legislation to the EU's Digital Services Act, we welcome the decision to go further on scope by placing obligations on Category 1 service providers rather than limiting scope to illegal content alone. We also support the absence of strict take-down time limits, in contrast to the challenging deadlines set in the Digital Services Act, and the focus instead on platform design and risk assessment.

 

The one area we would highlight as a potential lesson is the useful codes of conduct that already exist internationally, for example the EU Code on Hate Speech, the International Code on Antisemitism and Web4Good. We would like to see the UK Government follow these in producing further codes of conduct on areas which fall within the Online Harms agenda but are not currently covered.

 

Enforcement

 

We fully support the Digital Economy Act 2017 and the ability of an internet access service provider to apply content filters to protect people from harmful content, and we will continue to comply with our regulatory responsibilities in this regard. As an active member of the Internet Watch Foundation, we have worked closely with them for several years to block child sexual abuse content through their URL blocking list. In April 2021, we recorded over 900,000 hits against child sexual abuse URLs across our mobile and fixed networks.

 

We welcome the approach proposed in the draft Online Safety Bill, in which blocking is treated as a last resort and requires a court order. However, we are increasingly seeing emerging technologies, and in particular the growing use of encryption, make blocking at any level, including at the network level, very difficult and in some cases impossible. More platforms are moving their communications to this model, a trend that networks are finding hard to address. Blocking at the network level can therefore only ever achieve a certain amount, and is gradually becoming more difficult.

 

It also needs to be recognised that ISPs are unable to effectively identify and remove individual pieces of content, and instead have to resort to blunt blocking of IP addresses in order to comply. Such an approach means that not only is the illegitimate content blocked, but so is a host of legitimate, legal content and activity, amounting to an ineffective and disproportionate response to a “take down” request relating to a specific piece of content.

 

We already have an obligation to block sites by court order for intellectual property infringement. Following the Cartier court case in January 2018, intellectual property rights owners are required to pay the costs of ISP blocking when court orders are imposed. Given the resource needed both to set up processes to support a higher volume of court orders and to implement them, the costs of blocking a site under court order should be covered by the regulator, or by the company in breach via the regulator.

 

Financial scams

 

We recognise the pressure Government is coming under to expand the proposals on fraud to include financial harms and scams. In principle we would support the bill including these, in particular as it would demonstrate further action by the regulator to level the regulatory playing field between the digital platforms and other players in the digital ecosystem, for example mobile operators. Our only concern is that the online harms sections of this legislation are urgently needed on the statute book, and adding more to the scope could cause further delay.

 

As an industry, we have been working extensively over recent years to tackle fraud on our networks, in particular SMS scams. We work closely across the mobile, banking and finance industries, along with the National Cyber Security Centre, to develop solutions that prevent fraudsters from sending scam text messages. We are also actively engaged with the Home Office on a sector fraud charter and further action in this space. Having the platforms also play a role in preventing financial harms and scams will therefore only help bolster the measures already taken by other players to minimise consumer harm.

 

 

September 2021