Written evidence submitted by The Coalition for A Digital Economy (COADEC) (OSB0029)

 

About Coadec

 

  1. The Coalition for a Digital Economy (Coadec) is the policy voice of UK tech startups and scale-ups in Westminster and Whitehall. Since 2010, we have engaged on behalf of tech startups in public policy debates in the UK, across a range of priority issues including access to finance, immigration and skills, and technology regulation. We work directly with the Government; our Executive Director sits on the Digital Economy Council, the Telecoms and Tech Trade Advisory Group, and the Jet Zero Council.
     

Coadec’s view of the Draft Online Safety Bill

 

  1. Coadec and its community welcome the attempt to create a framework to improve safety online. However, the approach put forward in the Draft Online Safety Bill is too wide-ranging, convoluted and unclear, and will damage the UK’s position as a leading digital economy. Additionally, it is impossible to evaluate the effectiveness, or otherwise, of the proposed regime while significant uncertainties remain around what content and which services are in scope. These uncertainties mean that the current framework is impossible to implement effectively and impossible for businesses to comply with. If passed in its current form, the Bill will harm digital innovation in the UK.

Will the proposed legislation effectively deliver the policy aim of making the UK the safest place to be online?

  1. Without significant restructuring and clarification, the proposed legislation is unlikely to achieve the aim of making the UK ‘the safest place to be online’. The approach currently put forward in the Draft Online Safety Bill is convoluted and unclear, and will damage the UK’s position as a leading digital economy.
     
  2. It is impossible to properly evaluate the effectiveness of the proposed online safety regime while uncertainties remain around what content and which services are in scope. These uncertainties underpin the overwhelming majority of the proposed framework, and if they persist in the final Bill it will be impossible for Ofcom to implement the regime effectively, and for businesses to comply with it. This means the Bill will not achieve the aims set out by the Government.
     
  3. Without significant clarification or a narrowing of scope, the new regime has the potential to do more harm than good, with unclear definitions and additional duties requiring platforms to take a contradictory approach.

Will the proposed legislation help to deliver the policy aim of using digital technologies and services to support the UK’s economic growth? Will it support a more inclusive, competitive, and innovative future digital economy?

  1. The proposed legislation will not help to support the UK’s economic growth, nor will it help to deliver a more inclusive, competitive and innovative digital economy in the UK. As it stands, it is likely to have a chilling impact on digital competition: we expect large incumbents to be able to comply easily, while startups will be left facing huge costs and new entrants will be deterred from entering the UK market altogether.
     
  2. The Impact Assessment for the Draft Online Safety Bill fails to adequately address competition issues. It notes that the German NetzDG did not have any impact on market competition, but NetzDG has explicit carve-outs for smaller firms with smaller reach, something which the Draft Online Safety Bill lacks.[1] The Impact Assessment also notes that the framework graduates regulatory requirements according to company size, but this is impossible to quantify or evaluate without setting defined category thresholds and assessing the impact of those thresholds. Without this, there is no way of knowing whether the framework is proportionate, or of assessing the true impact of the legislation on competition.
     
  3. The Impact Assessment also notes the significant cost of new regulation to businesses operating in the UK - around £1.7 billion to address illegal harms and safeguard children alone. Even this is likely to be an underestimate, with the assessment failing to properly account for the cost of implementing new technologies. The Impact Assessment itself fails to estimate some costs, acknowledging that in many cases it is ‘unclear... what kind of systems they would employ’. This needs to be looked at again to ascertain the true cost to businesses of all sizes, and to work out how detrimental these costs may be to innovation and competition.

Are children effectively protected from harmful activity and content under the measures proposed in the draft Bill?

Are the definitions in the draft Bill suitable for service providers to accurately identify and reduce the presence of legal but harmful content, whilst preserving the presence of legitimate content?

  1. The Draft Bill fails to properly define content which is legal but harmful to children or to adults. It currently places the responsibility for making this determination onto service providers. This creates an unworkable and uneven playing field between services and for users. Different services, each with unique functionalities, will ultimately adopt different standards for what content is harmful and what is acceptable on their platforms. There are obvious difficulties in setting out a definition of harm, and in recognition of this the approach set out in the Draft Bill represents the Government’s third attempt at a workable definition.
     
  2. The definition set out within the Draft Bill requires service providers to make their own subjective assessments of ‘a significant adverse physical or psychological impact’ on an adult, or a child, with ‘ordinary sensibilities’. Subjective assessments mean that different platforms will implement the new rules in different ways. This represents a serious problem for implementation, especially given the sensitivities involved, and makes the latest attempt at defining harm a clear step backwards from the initial categories of defined harms floated in the original White Paper.[2] The approach also further entrenches the position of large incumbents, as startups are far less able to dedicate resources to these determinations: a small startup, for example, will be focused on building out its business and will be unable to dedicate qualified thought to decisions on subjective harms.
     
  3. There are distinct risks in pressing ahead with such an approach. For example, it could give rise to overregulation of content on some platforms, leading to the censoring of acceptable content, including content which the Draft Bill protects, such as journalistic content or content of democratic importance.

 

  4. To be workable, the Bill needs to set out, in explicit terms, which content is and is not acceptable. The easiest way to do this would be to adopt a narrow and accepted definition that is not subjective or open to interpretation. The only workable definition of harm is therefore likely to be content which is already clearly defined as illegal. This would also remove the risk of creating a new category of speech and content that is, effectively, illegal online but legal offline.

Is the “duty of care” approach in the draft Bill effective?

  1. The provisions set out within the Draft Bill can only be viewed as a whole. The ‘duty of care’ approach fails to set out, in clear terms, what is expected of platforms. This is not because a ‘duty of care’ approach is itself unworkable, but because many of the requirements within the duty are unclear. Key definitions underpinning these duties are vague and subjective, particularly on content which is legal but harmful. It is not possible to create an enforceable duty of care without clear and defined harms to underpin that duty.             
     
  2. It is right, however, that the ‘duty of care’ approach within the Draft Bill starts to create a tiered system to ensure that higher-risk services face proportionate regulatory burdens. More consideration needs to be given to the duties required of startups, which may fall into the scope of the new regime but may not have the capacity to comply with stringent and costly measures. This could be avoided by ensuring that startups are explicitly out of scope.

Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content? What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?

  1. The Draft Bill is right to let service providers make decisions on the correct use of ‘proportionate systems and processes’ to minimise the presence of content, how long that content is present, and the dissemination of such content. Different services function in different ways, processing varying amounts and types of content and serving different user bases.

 

  2. For this reason, it is not practicable to create and enforce a one-size-fits-all approach, or to be prescriptive as to what measures a service should consider implementing. The online safety framework should therefore steer clear of mandating the use of particular technologies: these may be appropriate for companies of a certain size operating in certain sectors, but they may not work across the wider digital environment and may actively serve to decrease competition.
     
  3. The Bill should create a framework that encourages best practice without restricting competition. A process-led approach, which lets platforms determine for themselves the best way to improve products and standards while keeping their users safe, is the best way to promote innovation in the digital space. But the Bill also needs to offer companies clarity, and for this reason the framework should clearly set out what is expected of platforms in each category and across different types of service.

How does the draft Bill differ to online safety legislation in other countries (e.g. Australia, Canada, Germany, Ireland, and the EU Digital Services Act) and what lessons can be learnt?

 

  1. There are lessons that can be learned from international experience, with other countries having already implemented similar legislation. Many of the frameworks being pursued internationally offer far more clarity on what content is to be regulated and which companies are in scope, typically by setting out specific categories of unacceptable content.
     
  2. On defining harmful content, several countries have opted for tighter, more enforceable definitions. This is the case, for example, in the German NetzDG approach, where content is in scope if it is illegal.[3] This approach is echoed in the European Union’s Digital Services Act (DSA), which again focuses primarily on illegal content, with much less prescriptive rules for harmful content.[4] Other approaches, such as Australia’s Online Safety Bill or Ireland’s Online Safety and Media Regulation Bill, more closely mirror the UK’s, but have clearer categories for in-scope content, such as the sharing of intimate images without consent or material promoting self-harm.[5] [6] Strict definitions of in-scope content, particularly applying rules only to illegal content, or clearer content categorisation, can make online safety legislation more enforceable, allowing businesses to comply effectively.
     
  3. On setting out which companies are in scope, other nations have offered more clarity than the UK. For example, in Germany services are only in scope if they have more than two million registered users in the country. In the EU’s DSA, only platforms with 45 million or more monthly users face the most stringent regulations. The DSA also goes further, setting out specific requirements for different types of platform and differentiating between intermediary services, hosting services, online platform services and very large online platforms. This is a tiered approach the UK should look to adopt to ensure that firms face an appropriate level of regulation according to their size and risk, for example by limiting Category 1 inclusion to the largest services with a statutory minimum number of users.

Does the proposed legislation represent a threat to freedom of expression, or are the protections for freedom of expression provided in the draft Bill sufficient?

The draft Bill specifically places a duty on providers to protect democratic content, and content of journalistic importance. What is your view of these measures and their likely effectiveness?

  1. There are substantial contradictions throughout the Draft Bill, creating a framework which quite clearly represents a threat to freedom of expression. In-scope services are required to monitor and moderate content, including legal and entirely subjective harms, but to do so with a ‘duty to protect rights to freedom of expression and privacy’. This approach is itself contradictory: what one individual considers freedom of expression may be considered harmful by another.

 

  2. The freedom of expression protections within the Draft Bill appear to be an afterthought, added only after the other contents of the legislation had been decided. Their inclusion creates a conflict at the heart of the online safety framework, which makes online platforms the arbiters both of what is harmful and of what constitutes freedom of expression: the framework requires that platforms make these subjective assessments, but holds them liable if they get them “wrong”.
     
  3. The Draft Bill acknowledges the importance of democratic and journalistic content to public discourse and open interaction online. Clearly, these carve-outs are only included within the Draft Bill because the sheer weight of measures placed on platforms by the legislation means such content would otherwise face a high chance of removal - but their inclusion serves only to make the framework more difficult to implement.
     
  4. Including these protections within the final Bill will make implementation far more difficult than if they were excluded entirely. But this difficulty largely arises from the subjectivity of the definition of content that is harmful but otherwise legal, and from making platforms the arbiters of it. The only way to remove this tension from the legislation is to require platforms to remove only content which is defined as illegal, taking out the subjective assessments that conflict with the additional carve-outs.

The draft Bill sets a threshold for services to be designated as 'Category 1' services. What threshold would be suitable for this?

Are the distinctions between categories of services appropriate, and do they reliably reflect their ability to cause harm?

  1. The inclusion of Category 1, 2A and 2B designations within the Draft Bill is a step in the right direction and rightly acknowledges that a different approach is needed for companies of different sizes. However, despite these categories, the Draft Bill fails to set out a solid set of thresholds to determine categorisation, instead passing this power to the Secretary of State and Ofcom to decide at a later date. Additionally, it is possible for startups to move between category designations at short notice as they expand; this is again potentially damaging for competition as it acts as a disincentive to grow.

 

  2. It is impossible to judge the appropriateness, effectiveness and proportionality of the measures set out within the Draft Bill when it is unclear what size of business will be brought under either the Category 1 or Category 2 umbrellas. For this reason, strict criteria setting out thresholds for users and functionalities should be included when the final Bill is published.

 

  3. The categorisation framework is still too broad and should be further separated. As the Draft Bill stands, there is little difference between what is required of Category 1 and Category 2B services. Services in both categories are required to take the same action on illegal content, and to comply with the same safety duties around services likely to be accessed by children. This is the case despite the enormous differences in the scale of operations between large and small firms. It will have a chilling effect on competition in digital markets and on the willingness of entrepreneurs to found businesses that may be in scope of the online safety regime, which will, in turn, protect the incumbents. The framework should reduce the burden on smaller services that pose little risk.


 

  4. It is important that the final Bill reflects a tiered approach and appreciates proportionality: smaller businesses should not be burdened with the same obligations as their larger counterparts, which have more resources and enjoy greater economies of scale. For this reason, the legislation should ensure that only the very largest social media firms face the most burdensome requirements, as it is these firms that demonstrably pose the greatest risk to users. Additionally, startup businesses should be given special recognition and be excluded from categorisation to avoid a negative impact on competition and innovation.

Will the regulatory approach in the Bill affect competition between different sizes and types of services?

  1. There is a very real risk that the proposed online safety framework will negatively impact startups as they are brought under the scope of legislation that was intended primarily to capture the largest social media firms. In particular, there is a risk that smaller firms will have to take on new burdensome compliance requirements, or implement new technological solutions to meet the requirements of the Bill.

 

  2. New technology to improve safety is expensive and very often creates its own risks by overlaying new layers onto products, requiring the collection of new data, or involving a third party. There is a risk that mandating specific safety measures could be detrimental to competition, in particular by damaging SMEs: in complying with new safety requirements, SMEs are likely to pay disproportionately more to align themselves with measures designed for bigger services, for which such features are more appropriate.

 

  3. Recent research, commissioned by the Department for Digital, Culture, Media and Sport and carried out by EY, looked at the measures that platforms with video-sharing capabilities take to protect users from harmful content online. The report found that smaller companies were spending over £45 per user, compared with £0.25-£0.50 per user at the biggest platforms.[7] This suggests that compliance costs are likely to be seriously difficult, if not impossible, for SMEs to meet. This could reduce competition, and could dissuade new business creation in the UK’s digital economy.
     
  4. The Draft Online Safety Bill’s Impact Assessment fails to consider the impact on startups. It notes that ‘It is unclear what percentage of businesses would be required to adopt age assurance measures or what kind of systems they would employ’, and it points to the lack of clarity around the cost of implementing such technology.[8] Additionally, the Impact Assessment fails to describe the transition costs for small businesses on a per-user basis, or as a percentage of revenue, while at the same time noting that 81% of in-scope businesses are likely to be microbusinesses. With these key metrics overlooked, the proposed framework could have a significant and disproportionately negative financial impact on startups.

 

  5. There have been growing calls to mandate the use of age assurance and age verification technologies, something which is not currently required in the Draft Bill. The use of such technologies at such a large scale is unproven. It could also only occur at significant cost, particularly to startup companies operating in the online space, and would create a barrier to conducting operations in the UK. Collecting, verifying and storing sensitive user information also creates significant risks for user privacy and security.

 

  6. There is a risk that, in mandating the implementation of new technologies, the Bill could place substantial financial obstacles in the way of new market entrants, further entrenching the bigger, higher-risk companies and severely limiting competition. Further, these requirements will make the UK a more expensive place to do business, making it less attractive to new investment on the international stage and hamstringing the UK’s world-class digital economy.

Are there any foreseeable problems that could arise if service providers increased their use of algorithms to fulfil their safety duties? How might the draft Bill address them?

  1. There are a number of foreseeable issues that may arise as platforms increase their use of algorithms to meet the safety duties laid out in the Draft Bill. The framework is wide-reaching, requiring platforms to monitor, process and moderate all the content appearing on their services - in effect, a general monitoring obligation. This is only possible through automated processes that use algorithms to facilitate and deliver on this expansive duty.
     
  2. Relying on automated processes facilitated by algorithms comes with considerable downsides. This is especially true when considering the other requirements within the Draft Bill, particularly the specific carve-outs for content of democratic importance and journalistic content. These duties are not standalone: the safety duties of the Draft Bill and the carve-outs must also work together with the requirements around freedom of expression.
     
  3. Aligning these priorities is almost impossible for a human operator, let alone an algorithm, which will have no understanding of subjective meanings of harm or of the context behind the publication of the content. Ultimately, the issues arising from algorithm use amplify many of the other problems with the framework, namely that it is hard to properly implement a safety regime that is, at its heart, subjective.

 

Is Ofcom suitable for and capable of undertaking the role proposed for it in the draft Bill?

Are Ofcom’s powers under the Bill proportionate, whilst remaining sufficient to allow it to carry out its regulatory role? Does Ofcom have sufficient resources to support these powers?

  1. Ofcom is the most suitable regulator to oversee, help implement, and enforce the proposed online safety framework in the UK. Ofcom already has experience of working on harmful content online, having built out some online safety capacity through its work as the regulator of video-sharing platforms since November 2020.[9]
     
  2. Ofcom will be required to take on considerable short-term work as part of the draft legislation. This includes helping to prepare new codes of practice, preparing a register of categories, carrying out risk assessments and meeting other requirements connected to secondary legislation. The volume of this activity raises legitimate concerns about Ofcom’s ability to take on so much complex, specialised work in the time before the online safety framework comes into effect.
     
  3. Ofcom should be given sufficient time and space to go about its work supporting the new regime, so that it can properly engage with industry and recruit the necessary talent. This is especially so given that the tech industry has historically struggled to find individuals with the appropriate skills and expertise.[10] It may prove particularly important if Ofcom is to advise on the technical processes which companies should adopt, or wishes to comment on how services are run.
     
  4. It is vital that Ofcom, as the new regulator, is willing and able to engage with platforms of all sizes, including with startups who will be caught up in the new regime - and who may have to comply with measures not designed with them in mind. This will be an important step to gain the trust of wider industry and can help Ofcom to issue the appropriate advice on the implementation of the framework.

How much influence will a) Parliament and b) The Secretary of State have on Ofcom, and is this appropriate?

  1. The Secretary of State currently has unprecedented power to direct and issue guidance to Ofcom, an independent regulator, and the Draft Bill should be amended to limit this. As it stands, many of the provisions within the Bill, such as categorisation, and even the role of Ofcom, can be changed at the Secretary of State’s sole direction. For example, this could see the Secretary of State bring previously exempted companies into scope.
     
  2. These wide-reaching powers create unnecessary regulatory uncertainty for businesses that already have to comply with a framework lacking specifics. The UK’s online safety framework should uphold the independence of the new regulator, and the regulator, with the appropriate technical expertise, should be trusted to make the correct decisions without unaccountable political oversight that can be used to change the fundamentals of the framework at short notice and without adequate consultation.

 

September 2021             
 


[1] ‘The Online Safety Bill, Impact Assessment’, Department for Digital, Culture, Media and Sport, April 2021

[2] ‘Online Harms White Paper’, Department for Digital, Culture, Media and Sport, April 2019

[3] Network Enforcement Act, Bundesministerium der Justiz und für Verbraucherschutz, July 2017

[4] Digital Services Act, European Commission, December 2020

[5] Online Safety Bill 2021, The Parliament of the Commonwealth of Australia, July 2021

[6] ‘Online Safety and Media Regulation Bill’, Gov.ie, January 2020

 

[7] ‘Understanding how platforms with video-sharing capabilities protect users from harmful content online’, EY, August 2021

[8] ‘The Online Safety Bill, Impact Assessment’, Department for Digital, Culture, Media and Sport, April 2021

[9] Video-sharing platform (VSP) regulation, Ofcom, June 2021

[10] Skilled worker visa: shortage occupations, UK Visas and Immigration, April 2021