Written evidence submitted by Professor Damian Tambini, Distinguished Policy Fellow and Associate Professor at London School of Economics and Political Science (OSB006)

 

The Online Safety Bill: Bridging the Legitimacy Gap

 

Summary of Proposals for Amendments:

- Enhanced protection for the independence of Ofcom and self-regulatory bodies.

- A permanent Media User Group for Ofcom with real powers.

- Extended consultation duties for Ofcom.

- Reduced government interference in communications regulation, by limiting ministerial powers to direct Ofcom or influence codes of conduct.

- Self-regulation of harmful algorithms and design, not only of ‘harmful content’.

- A strengthened role for the proposed committee on disinformation.

- Reduced overall complexity of the framework, so that citizens can understand it.

 

Introduction

The Online Safety Bill will impose a ‘duty of care’ on social media and search platforms: an obligation to protect their users in the United Kingdom from illegal and harmful content and services. The creation of this new regulatory framework reflects a consensus that platform self-regulation has failed to deal not only with illegal content such as terrorism and child abuse material, but also with a range of individual harms, including content harmful to children and content harmful to adults such as hate speech and harassment.

The scheme has been greeted with enthusiasm by some campaigners, who see it as a world-leading framework for a new paradigm of democratic social media accountability, and with concern by some free speech campaigners, who see it as an attack on free expression and public deliberation.

This submission responds to questions raised by the call for evidence, focusing in particular on the role of Ofcom.

 

The Proposed Legislation

The scheme sets out a general duty to prevent harm, and then a process for the development of codes of conduct by Ofcom, which will be the designated online safety regulator, responsible also for monitoring adherence to the codes. Slightly different duties apply to social media and search providers, because social media host content and therefore have the potential both to remove and to downgrade content. In general, more onerous obligations apply to the larger platforms, with the most stringent reserved for ‘category one’ services, which will include the major platforms. Some services, such as messaging, intranets and journalism, are excluded from the proposal. Some forms of content, such as news and ‘democratically important content’, are also excluded.

It is difficult to summarise the proposals as they are complex and vary by category of service, and many of them are yet to be defined in detailed codes of practice. It is worth noting that this complexity is in itself problematic, as any scheme for speech regulation needs to be widely trusted and widely understood.

The Bill includes a process for Ofcom to write codes setting out what is to be considered harmful content and expectations for reducing the impact of this content, for example by removing it from social media, or downranking it in search results. Service providers will have duties to conduct risk assessments, for example assessing whether their services are likely to be accessed by children, and whether there is ‘material risk of harm’ to children or adults depending on the markets of the services. The framework attempts to regulate in ways that will encourage platforms to think in general terms about safety-oriented design: operators will be obliged to design their services in ways that minimise the risk of harm to their users.

Enforcement powers include fines of up to £18m or 10% of annual global revenue, whichever is greater (a potentially huge sum), if harms are not reduced after warning notices. Ultimately, senior managers could face criminal liability, and there are also business disruption measures for extreme cases.

 

 

 

Concerns with the current draft

 

1. Too little independence for Ofcom, the codes and self-regulatory framework.

The new role extends Ofcom’s functions into new areas of speech regulation and thus increases the importance of the regulator’s genuine and perceived independence. Despite this, the Bill compromises that independence in a number of ways, for example by giving the minister the ability to call in codes and procedures for review.

The Bill should be revised to protect and enhance Ofcom’s independence. The Council of Europe has set out a number of standards on regulatory independence for media authorities. (The UK remains a member of the Council of Europe.) The Indireg study on effective audio-visual media services regulation set out five indicators of independence: the status and powers of the authority; its financial autonomy; the autonomy of decision-makers; the adequate provision of professionally qualified human resources; and the accountability and transparency of the authority. Ofcom is a highly respected regulator, but the previous government was criticised for appointing its advisors to senior roles, and there has been widespread recent media speculation about the politicisation of senior appointments, which would undermine public confidence in the regulator. The Bill should be amended to improve Ofcom’s independence from both Government and regulatees on each of those five dimensions (see below).

For example, the Bill delegates to Ofcom responsibility for developing codes of conduct for both search and social media. In these and other roles it needs to be independent and seen to be independent. Secretary of State powers to approve codes of practice (s32, s33) or to give directions to Ofcom (s111) need to be narrowed and specified, or removed entirely, with independence guaranteed on the face of the Bill. Part 6 of the Bill gives the Secretary of State too much discretion that should instead be exercised by Parliament. A better approach would be to involve citizens’ juries or panels and build genuine citizen involvement into the regulatory framework. Section 113(1)(a) is just one example of a power that is closer to authoritarian than to liberal democratic standards, even with the safeguards at sections 113(1)-(8). Exercise of these functions must be separate from the executive.

 

2. Blurring harmful and illegal categories, and a loose definition of harm.

Harmful and illegal content require differentiated approaches[1] and to an extent this has been reflected in the different regulatory structures for illegal and harmful content under the Bill. Illegal content is a matter for police or regulatory enforcement, and the Bill sets out clear rules and proposals for enhanced regulatory oversight. Harmful content is a matter for transparency, user empowerment and consumer choice, which is reflected in the reporting and redress duties in section 24. It is important to make abundantly clear that these procedures are separate and that users understand on what basis content is regulated. What we want to incentivise is a societal and industry debate about where harm is occurring and what the ethical responsibilities of all actors are in reducing it. This is how, in the long term, society will make our online world more caring. There is a big difference between content that Parliament has decided is illegal and harmful, from which platforms are required to protect users at all times (such as racist hate speech), and content that breaches the rules of a given platform and meets the requirements of s46.

The definition of harm is currently based on an agreed standard of a person of ordinary sensibilities. It is logical that such a standard of broad societal acceptability applies to Category 1 services, which have a dominant position, enjoy network effects and can be considered socially essential services. But it should also be made clear that users should have a right to join multiple services that meet a lower standard of niche acceptability, and to switch between them. Other categories of services should be subject to their own definitions of harm, expressed through enforced transparency and trade description standards and enforcement of their own moderation guidelines. This will ensure that the expression and diversity rights of different groups of users (those who prefer a safe environment with trigger warnings and those who prefer robust freedom of exchange) can be provided for in a market. In the long term we should aim for a plurality of standards and ensure that users have sufficient information and flexibility to switch between platforms on the basis of those standards, secure in the knowledge that they will be enforced.

The duty of care should also permit codes and regulators to regulate not only content standards but recommendation standards. A piece of content may not be harmful in isolation, but an algorithm designed to repeatedly expose vulnerable young people to such content could be lethal. The problem may be that the business model itself, rather than any individual piece of content, is harmful. Codes should develop, through long-term public debate and involvement with users, standards on harmful algorithms, or more widely harmful design, as well as ‘harmful content’.

 

3. Too much ministerial involvement in speech regulation.

This pertains to, for example, ministerial roles in codes of conduct and the definition of harm. The approach of the Bill is, all too often, to provide that harm is defined in the first instance within the Bill, that regulatory discretion then rests with the platforms or, failing that, with Ofcom, but that where matters prove too difficult the minister can decide (see for example s.46(2)(b)(i) and s47). This is the wrong approach, because it is inappropriate for a minister to assume such censorship powers. Ultimate discretion should be given to the independent regulator, with evidence requirements and the potential for judicial review.

The definition of journalistic content is an example (s14) where it is the platform that has to apply the rather loose test of what is ‘protected journalism’. The platform has to decide (s14.8) whether the content is news publisher content, ‘regulated’ content (s39, s40) or exempted messaging. We can see many examples of WhatsApp being used to distribute news, and it would be quite easy for publishers to meet the s40(2) requirements. It is also true that there is plenty of ministerial discretion in similar legislation such as the Communications Act (s59, s52a, s24, s29), but these sweeping new powers go beyond regulated broadcasting into a potentially huge realm of public-private communication.

The real challenge – and this is the creative task for media governance in general – is how best to involve the public. Ministerial intervention needs to be replaced with civil society participation, which means institutions such as citizens’ juries, deliberative processes and citizens’ assemblies. Legislation needs to give these deliberative assemblies a role and the power to decide, for example, whether content is democratically important or worthy of prominence.

 

4. Lack of concern with social harms such as disinformation. 

It has been widely commented that ‘social harms’, such as harms to democracy through disinformation, are not dealt with in the OSB. A lot will depend on what the definition of harm is, and this may vary. Disinformation, effectively, is parked with a committee under s98 of the Bill. A committee will be established to advise Ofcom on what providers need to do about disinformation and misinformation. Because it will not report until 18 months after the legislation is passed (and then annually thereafter), and because it will be advisory, this structure is not likely to result in great change in the short term. In the light of increased government and security attention to the problem of mis- and disinformation, this may not be regarded as a proportionate response to short- and medium-term risks, given for example potential future referenda on the existence of the UK. It would be short-sighted not to attempt to pre-empt new disinformation wars, and much more can be done without creating a ‘Ministry of Truth’ structure. The committee should be given responsibility to coordinate:

 

5. Online harms regulation needs to be linked to antitrust and market shaping. 

One problem that has been pointed out is that the costs of compliance with this regime might lock in the biggest players even more, or that further obligations might encourage some providers to exit the market. Whilst it is far from clear how this will pan out, we nevertheless need mechanisms to continue to review the situation and monitor the shaping of the market, rates of switching, contestability and so on. Regulators and the government are working on this, but it needs to be better integrated into the Bill. Ofcom needs to be given more powers to draw together monitoring on these points and to lead the Digital Regulation Cooperation Forum. The Digital Markets Unit should be empowered to use structural separation remedies.

Therefore I strongly support the proposal of the House of Lords Communications and Digital Committee from August 2021,[2] which states:

“358. The online safety regime must not entrench the market power of the largest platforms by increasing barriers to entry for competitors. Ultimately, this would harm consumers. The Government should require Ofcom to give due consideration to—and report on the impact on—competition in its implementation of the regime. Ofcom should work closely with the Digital Markets Unit in this area.” The overall approach should be graduated and differentiated, ensuring that smaller players and platforms are freer of costly obligations.

 

The Underlying Problem: The Legitimacy Gap

Any form of regulation needs legitimacy: rules should clearly represent the public interest and their origin and basis must be clear to all users. In the case of regulation of speech this is absolutely critical[3]. In the current draft of the Bill, the regulator is insufficiently independent from government; the minister has a wide range of powers to interfere with the process, effectively to approve or block codes of conduct and to give directions to Ofcom. In general, the process is complex and obscure to media users and citizens, which will foster mistrust. Correcting this will require some far-reaching but achievable amendments to the Bill.

Some of these concerns can be addressed quite easily by amending the Bill to enshrine in law significant new protections for Ofcom’s independence, making clear that Ofcom, and the process of defining what constitutes harm or democratically important journalism, is separate from government and is perceived to be so. This should aim to replicate the standards of the Council of Europe, for example by protecting the financial and operational independence of the regulator[4]. There is no reason that the UK should set a standard lower than that of the EU Digital Services Act (Article 39.2), which states that “when carrying out their tasks and exercising their powers in accordance with this Regulation, the Digital Services Coordinators shall act with complete independence. They shall remain free from any external influence, whether direct or indirect, and shall neither seek nor take instructions from any other public authority or any private party.”[5]

However, protecting Ofcom and the codes from government interference is only part of the problem, because Ofcom as a technical, evidence-based regulator will also face difficulty in assessing what might be subjective judgement calls on what constitutes harm or quality journalism. Ultimately, definitions of harm and of quality journalism that derive from a negotiation between Silicon Valley companies and unelected bureaucrats will lack a source of legitimacy. Any attempt to write them into codes or reach adjudications on those codes will be criticised and undermined. Interference of ministers in such judgements will rightly be regarded as a form of censorship.

The key concern of critics of the OSB is the discretion it confers in defining both content that is harmful (to adults and to children) and content that is beneficial, such as news or democratically important content, which is to be exempted or granted special privileges. Index on Censorship, for example, points out that the usual procedure for justified restriction of freedom of expression is that it is declared unlawful with the full oversight of Parliament (i.e. in law) and that decisions about censorship are executed by law. They argue that regulators have no place in deciding what may be deemed ‘harmful’ but not illegal.

Such arguments are powerful, but they do not really reflect the reality that private actors are already making multiple, often automated, judgements about what is harmful and what deserves wider distribution, and that such decisions are made in private. Because some of these decisions are made by dominant platforms, they can have censorship effects. For supporters of duty of care approaches, what the duty of care framework does is make these private decisions, which effectively determine standards for speech online, public and accountable.

 

Filling the Legitimacy Gap: institutional reforms

The key structural problem with the online safety proposals then is that decisions about definitions of harmful content that would form the basis of decisions to block, and of journalistic and democratically important content that would be exempt, lack a source of legitimacy. They also merge, in non-transparent ways, government and private forms of censorship by compromising the principle that media institutions and their regulators should be autonomous from the state.

To put government in charge of this new structure of censorship would undermine centuries-long traditions of press and media freedom and undermine trust in democracy. Control of such judgements is subject to endemic conflict of interest and suspicion: just as governments should not control and chill the public opinion to which they are accountable, platforms are not trusted because their commercial incentives may conflict with duties of care, or may themselves lead to censorship.

One way of addressing these concerns and building a new institution to cement trust in the decisions of an independent regulator is, first, to strengthen the reality and perception that the regulators (both Ofcom and the self-regulatory structures established by platforms) are independent, and, second, to establish a permanent media user panel for Ofcom. This would be formed on the model of a ‘citizens’ assembly’, which can be seeded within Ofcom by reforming the Consumer Panel into a more genuinely representative forum: a Media User Panel.

 

 

Independence of Ofcom and Codes of Conduct

The independence of Ofcom can be strengthened by a full review and audit of the Secretary of State’s powers in the legislation, and appropriate limitations on those powers. For example, under Section 33 of the draft Bill the Secretary of State has powers to direct Ofcom to modify a code of practice to ensure that it reflects government policy. This conflicts with the longstanding principle under the ECHR and the UN Convention that restrictions on freedom of expression should be prescribed by law, in order to ensure foreseeability and legal certainty and to guarantee Parliamentary (cross-party) oversight of such changes; it also compromises the independence of the regulatory authority.

 

Ofcom Internal Reforms

Such a Media User Panel would help guard against both the danger and the perception of government interference, and help provide the legitimacy on which this new form of accountability will rely. Delegating decisions to the panel would also help Ofcom protect those decisions from legal challenge.

 

Conclusion

The central issue is legitimacy. In other words: How is it possible to operate a framework for speech regulation that is not subject to capture by narrow social, political or economic interests (rather than the public interest) and widely understood as such? How is it possible for a framework of rules to involve politicians and private actors and also enjoy the trust of the public?

The current draft of the Bill is not based on a coherent strategy to address this central problem, and arguably poses new problems by weakening regulatory independence and encouraging ministerial interference.

The legislation and its implementation must abide by some general principles of transparency, procedural fairness and, above all, independence of regulatory authorities in order to be trusted.[6] They must also derive legitimacy from civil society. In its current form the Bill does neither: the regulator is insufficiently independent, there is too much ministerial interference, and the platforms themselves have too much power to shape codes and definitions of harm; nor is it clear whether they will exercise that power responsibly in a context in which consumers can switch between platforms in a competitive environment.

This Bill is being introduced at a time of considerable uncertainty. For example, less than a year ago we did not know to what extent vaccine disinformation would undermine the virus response. If it had resulted in lower vaccine uptake (as some believe is the case in parts of the US), we would be in a difficult situation, and arguably contemplating more censorious approaches, for example to vaccine disinformation. It did not, and the public information message got through. This does not, however, mean that hostile or anti-social messages will not thrive in the future as they have done in the past. What is clear is that we need a media system with ethics wired in; we need filters that work and are trusted. The Bill in its current form threatens to harm trust and legitimacy by creating an opaque set of relationships involving politicians and platforms. But with the right improvements it could empower a bottom-up, society-led response to online harm and disinformation.

 

 

September 2021


[1] See https://www.fljs.org/reducing-online-harms-through-differentiated-duty-care-response-online-harms-white-paper

 

[2] House of Lords Communications and Digital Committee 2021 Free For All? Freedom of Expression in the Digital Age. https://committees.parliament.uk/work/745/freedom-of-expression-online/publications/

[3] I elaborate these arguments in the book Media Freedom, published by Polity Press in September 2021.

[4] Existing standards for broadcasting need to be updated to apply to new regulators: see Committee of Ministers, Recommendation Rec(2000)23 on the independence and functions of regulatory authorities for the broadcasting sector, 20 December 2000; see also: https://rm.coe.int/the-independence-of-media-regulatory-authorities-in-europe/168097e504

 

[5] EU Digital Services Act 2020. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

 

[6] See the Council of Europe Principles on Media and Communication Governance here.