Written evidence submitted by Full Fact

 

 

DCMS Sub-Committee on Online Harms and Disinformation

The Government’s approach to tackling harmful online content outlined in its draft Online Safety Bill

Summary

        The shift from online ‘harms’ to online ‘safety’ has reduced the scope of the draft Bill, so that the problems of misinformation and disinformation will not be effectively addressed. This should be revisited so that the Bill does tackle the harms to our society and democracy set out in the government’s counter-disinformation strategy.

        The Bill should include election integrity, and a UK Critical Election Incident Public Protocol should be established to depoliticise a key area where a general election may be vulnerable to interference and to defend electoral systems and processes.

        Censorship-by-proxy is a problem that needs to be addressed in the Bill, including by adding a requirement for the government to publish details of all efforts it makes to influence internet company decisions.

        The powers given to the Secretary of State in the draft Bill are too extensive and need to be narrowed to ensure separation of roles, and the role of parliament should be strengthened to increase its oversight.

        There are problems with the definition of, and process for determining, what counts as “content that is harmful”; instead the legislation should clearly identify the specific harms it seeks to address and the remedies it requires for them.

        The Bill should enable independent scrutiny of the algorithms of in-scope internet companies, to verify algorithms and certify them as being low risk in relation to direct harm to individuals and discrimination, and as having no disproportionate impact on freedom of expression.

        There are key omissions from the draft Bill, including government action in relation to internet companies and what the government sees as harm, and democratic integrity.

        The Bill should provide for promoting authoritative information and sources, as well as a requirement for Tier 1 internet companies to include news content, so that internet users are exposed to news as part of a healthy society.

        The role of the public should be enhanced under the Online Safety Bill, including citizen voice and participation.

        Media literacy should be strengthened from how it is presently set out in the draft Bill, as a key part of citizen-supporting methods of tackling the problems in our information environment.

 

About Full Fact

 

  1. Full Fact fights bad information. We’re a team of independent fact checkers, technologists, researchers, and policy specialists who find, expose and counter the harm it does.
  2. Bad information damages public debate, risks public health, and erodes public trust. So we tackle it in four ways. We check claims made by politicians, public institutions, in the media and online, and we ask people to correct the record where possible to reduce the spread of specific claims. We campaign for systems changes to help make bad information rarer and less harmful, and we advocate for higher standards.
  3. Full Fact is a registered charity. We're funded by individual donations, charitable trusts, and by other funders. We receive funding from both Facebook and Google. Details of our funding can be found on our website.[1]

 

  1. Full Fact has long called for the government to take action in this area and the online misinformation that has come with the pandemic has made this need even more evident. We cannot go on relying on the internet companies to make decisions without independent scrutiny and transparency. We welcome that the process of pre-legislative scrutiny of the draft Online Safety Bill is finally going ahead, and that the DCMS Sub-Committee on Online Harms and Disinformation is undertaking an inquiry: good legislation and regulation could make a significant difference in tackling dangerous online misinformation.
  2. Full Fact’s expertise covers misinformation and public debate. Some areas of the Online Safety Bill, including specific harms, fall beyond our areas of expertise (e.g. illegal harms around protecting children and relating to terrorism), and this submission reflects this.
  3. Full Fact is a member of Ofcom’s Making Sense of Media Advisory Panel which brings together experts to debate and inform the development of Ofcom’s media literacy research and policy work.
  4. Full Fact is a member organisation of the Counter-Disinformation Policy Forum, which is convened by the Department for Digital, Culture, Media & Sport (DCMS) and brings together stakeholders from internet platforms, civil society, academia and government to limit the spread and harmful effects of misinformation and disinformation.
  5. This submission is in addition to a Full Fact written submission to the Joint Committee on the Draft Online Safety Bill, also made in September 2021.

How has the shifting focus between ‘online harms’ and ‘online safety’ influenced the development of the new regime and draft Bill?

 

  1. The effect of this change is to reduce the scope of the draft Bill, so that the problems of misinformation and disinformation will not be effectively addressed, despite the government saying that these “have the potential to cause significant harm to both individuals and society… to influence elections, stoke racial divisions and abuse, and incite violence and rioting.”[2]
  2. The government’s own RESIST counter-disinformation strategy states that: “When the information environment is deliberately confused this can: threaten public safety; fracture community cohesion; reduce trust in institutions and the media; undermine public acceptance of science’s role in informing policy development and implementation; damage our economic prosperity and our global influence; and undermine the integrity of government, the constitution and our democratic processes.”[3] The draft Bill’s narrow focus on harm to individuals will fail to address all of these harms to our society and our democracy. In fact, all the draft Bill presently does in this area is establish an advisory committee.
  3. The shift from Online Harms to Online Safety has gone alongside a shift from developing proportionate responses to clearly identified harms, to an approach with far less democratic oversight than we would expect, and far more emphasis on the control of content that ordinary internet users can see and share.
  4. The failure to establish an open, democratic, transparent process for tackling the harms caused by misinformation and disinformation leaves an oversight vacuum and invites overreach as well as risking over-moderation by platforms. The draft Bill’s proposal of an advisory committee is utterly inadequate.
  5. Meanwhile, it is clear that the government has come to think that its role includes identifying particular bits of legal content on the internet that should not remain online, and pressuring internet companies to remove them.
  6. This government enthusiasm for censorship-by-proxy has been a marked feature of its response to the pandemic. A government press release stated that “Up to 70 incidents a week, often false narratives containing multiple misleading claims, are being identified and resolved.”[4] It did not define ‘resolved’. A subsequent story briefed by the government to the BBC said: “The culture secretary is to order social media companies to be more aggressive in their response to conspiracy theories linking 5G networks to the coronavirus pandemic.”[5] Of course we accept that measures taken with good intentions during an emergency are never likely to be perfect. But instead of establishing open democratic transparent methods for responding to harmful false information in future, this draft Bill’s silence on misinformation and disinformation risks locking in censorship-by-proxy as the new normal (see Addressing censorship-by-proxy by direct action by government below) and relying on the good faith and judgement of staff currently in post, rather than limiting potential overreach by less scrupulous future decision-makers.

Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill/what should it be?

 

  1. We see three problems with the definition and process for determining what counts as “content that is harmful” set out in clauses 45 and 46.
  2. Underlying them all is the central mistake that the draft Bill makes of trying to define harm in the abstract and then hand off to Ofcom and internet companies the ‘details’ of how to identify and tackle it. What is needed instead is legislation that clearly identifies the specific harms it seeks to address and the remedies it wishes to require, as the draft Bill does try to do in respect of specific kinds of illegal content. That legislation would of course need to be regularly revisited by parliament. The idea of legislating for online safety once and for all is hubristic and ignores the constantly evolving nature of content, etiquette and design of networks online. It would be far better to accept that this is a new area of law that will need to gradually develop with ongoing parliamentary oversight. For context: the UK has passed an immigration act once every 2.5 years this century. The Communications Act is now 18 years old.
  3. The three problems we see with the definition and process are as follows.
  4. First, as with most of the draft Bill, the definition of harm gives too much power to Ministers. It is impossible to imagine legislation giving Ministers such sweeping powers to designate harmful content offline that, say, newspaper publishers were then required to restrict. The case has not been made that this power is necessary or proportionate. At the very least, this power deserves far more parliamentary scrutiny than the draft Bill allows for. While a time-limited power of this sort might have some value in responding to emergencies, we see no reason why these far-reaching choices should be made by Ministers and set out in unamendable regulations with such limited opportunity for parliamentary debate.
  5. Secondly, the deliberately ambiguous threshold of “significant” harm can and will be interpreted very differently according to the interests of those involved. The word “significant” can be used to mean anything from “not trivial” to “having serious consequences”. Internet companies have every incentive to use the strongest interpretation so as to reduce the work they have to do to meet their obligations. At the very least, this ambiguity will hinder the regulator’s work and credibility by leaving them at constant risk of legal challenge for overstepping these blurry boundaries.
  6. Thirdly, the draft Bill’s definition of harm will make it harder to address the real social harms that the government itself says come from disinformation (“threaten public safety; fracture community cohesion; reduce trust in institutions and the media; undermine public acceptance of science’s role in informing policy development and implementation; damage our economic prosperity and our global influence; and undermine the integrity of government, the constitution and our democratic processes.”). It will either require contorted and convoluted attempts to explain how, for example, election interference harms individual people psychologically, or simply leave these harms unaddressed. It is notable that the only part of the draft Bill that in any way responds to the government’s own concerns on disinformation, Section 112 (“Secretary of State directions in special circumstances”), does not use the definition of harm used elsewhere but instead refers broadly to threats to “the health and safety of the public, or to national security”. Perhaps this shows that the focus on harms to individuals is too narrow to address the harms the government itself is worried about.

Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?

Independent scrutiny of the algorithms of in-scope internet companies

  1. No. In fact it will encourage them to use systems that will have serious damaging unintended consequences which will be largely hidden from scrutiny. Content moderation at internet scale has to be done by machines. Contrary to the claims of the internet companies, they cannot do this accurately. When we do get glimpses behind the curtain, the reality is shocking.
  2. We have seen internet companies deploying algorithms that filter out posts referring to the place “Plymouth Hoe”;[6] take down posts from a police force warning about Covid-19 scams;[7] and even ban the internet company’s own page after mistaking it for an Australian news site.[8]
  3. Perversely, in passing this draft Bill parliament would be requiring internet companies to rely even more heavily on technology that we already know cannot do the job. Worse than that, it doesn’t provide for meaningful scrutiny of this technology or how it is used. We expect the result to be significant infringement of individuals’ freedom of expression not by design but by accident.
  4. Content moderation algorithms can do real good if they work well, and if they malfunction they can cause real harm. In this they are like many other safety-critical technologies. However, unlike many safety-critical technologies, the safety consequences of deploying a particular content moderation algorithm are not always obvious. Whether an aeroplane is safe is ultimately visible for all to see, despite all the expertise that goes into its engineering. A qualified person can and must test whether an electrical system is safe. The effects of content moderation algorithms are far harder to understand, yet their design is subject to no external scrutiny at all. It is no coincidence that some algorithms deployed by internet companies to their billions of users have been shown to directly discriminate on the grounds of race.[9]
  5. The Online Safety Bill should seek to address these unintended effects by requiring independent testing of safety-critical content moderation algorithms (in many ways, just as other safety-critical industries require independent inspections). Putting this in place and institutionalising it under the auspices of Ofcom as regulator would present challenges, including how to avoid creating barriers to entry to the industry and how to ensure it is not excessively costly to operate or to fix the problems identified.
  6. Ultimately, though, unless there is some sort of independent process to verify algorithms and certify them as being low risk in relation to direct harm to individuals and discrimination, and as having no disproportionate impact on freedom of expression, these harms will be perpetuated.

What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?

 

  1. There are a number of key omissions from the draft Bill, including government action in relation to internet companies and what the government sees as harm, and democratic integrity. There are also a number of areas where further gaps may emerge because the Online Safety Bill leaves much of how the new regulatory regime will work to secondary legislation or to Ofcom to develop in the future. Whilst this is understandable in some ways, much of the detail of how the legislation will work in practice should be made clearer before the Bill is passed.
  2. While leaving detail to later may be practical in terms of flexibility to amend in changing contexts, if some of that detail is not provided when the Bill’s measures are being debated it will be hard to judge how well the intended regime is likely to work in practice. To make the Bill as effective as possible, DCMS will need to set out far more detail in draft form at the earliest opportunity and before opportunities to amend the draft Bill close.

Democratic harms and election integrity

 

  1. On democratic integrity, the government appears resistant to tackling societal or collective harms without a definitive clear line to direct individual harm. Yet the government is aware that such threats and risks are real. The government has indicated that foreign state disinformation campaigns during UK elections will be out of scope of the Online Safety Bill.
  2. We would urge revisiting this area in relation to elections, and not just its relationship with other harms to individuals. We do not believe it is right that the national security and other implications of disinformation campaigns during UK elections are out of the scope of the Bill. Indeed, the Prime Minister assured Parliament earlier this year that the Online Safety Bill presented to the House this year will contain sufficient powers to tackle collective online harms, including threats to our democracy.[10]
  3. The Bill should be strengthened and amended to improve democracy and address harms to democracy, including protecting against harmful misinformation in elections.

The need for a UK Critical Election Incident Public Protocol

 

  1. There may come a time during an election when the public needs to be warned about a specific threat identified by the security services, but at the moment the decision would be up to the government of the day, which would be put in a difficult position and is likely to be seen as conflicted.
  2. In Canada, this problem has been addressed by setting out a public protocol—the Critical Election Incident Public Protocol (CEIPP)—for handling such situations, depoliticising a key area where a general election may be vulnerable to interference and helping to protect and defend electoral systems and processes.
  3. The UK Government should develop and publish a protocol, independent of elected politicians, for alerting the public to incidents or campaigns that threaten the UK’s ability to have a free and fair election. Ideally, the Elections Bill should include a provision requiring such a protocol to be agreed. It is important that the Elections Bill does, as the government has stated, work alongside measures in the Online Safety Bill and Counter-State Threats Bill ‘to protect our globally respected UK democracy from evolving threats’. Coherence is required across different regulations and associated practices, including on known and foreseeable risks.
  4. The draft Online Safety Bill contains a provision (Clause 112, Secretary of State directions in special circumstances) enabling the Secretary of State to give Ofcom directions when they consider there is a threat to the health or safety of the public, or to national security. This clause, which requires significant attention in parliamentary scrutiny and wider debate on the Online Safety Bill, does not explicitly mention elections.
  5. The present draft Online Safety Bill clause 112 is largely focused on directing Ofcom to prioritise action to respond to such a specific threat through its media literacy functions and requiring certain internet companies to publicly report on what they are doing to respond to such a threat.
  6. Given that the Online Safety Bill is about Ofcom and regulation to prevent harm emerging from the platforms of internet companies in its scope, if democratic harms were folded into the Bill, as we believe they should be, this could then interlock with the work of other regulators and actors to address harmful misinformation and disinformation or other incidents threatening free and fair elections. A sensible precautionary provision is required to ensure that the public can be informed, through a predictable and trusted process, of any threats which can be effectively mitigated through public information.
  7. The reality at the moment is that the decision about whether to warn the UK public of a threat to our elections is as likely to be taken in California as in Westminster. An election is possible at any moment. If conducted under current rules, or indeed under those the present Elections Bill and draft Online Safety Bill envisage, it would be vulnerable to a serious incident with no protocol in place.

Addressing censorship-by-proxy by direct action by government

 

  1. The government monitors the internet and sends internet companies examples of content it believes is legal or violates their terms of service. The government can and does seek to limit speech online by lobbying internet companies to change their terms of service and then reporting content for violating those terms. This censorship-by-proxy is how the government responded to 5G misinformation, for example, when it should have responded earlier by publishing high-quality public health information disproving certain false claims about 5G.
  2. The draft Online Safety Bill does not reflect this reality, and it should: otherwise the regulation of in-scope companies, the role of Ofcom and the integrity of the system could be undercut undemocratically and without proper oversight.
  3. The Online Safety Bill should include some form of reporting requirement for the government to publish details of all efforts it makes to influence internet company decisions about specific items of content, specified accounts or their terms of service. This could recognise the necessity for a limited time delay in the case of content considered national security sensitive, perhaps subject to review by the Joint Committee on Human Rights (JCHR) / Intelligence and Security Committee of Parliament (ISC).
  4. The Online Safety Bill should prohibit any attempt by the government to influence in-scope companies except within specified grounds and mechanisms.
  5. We know that the government has in the past year played a role in encouraging the internet companies to take action on the most harmful misinformation in relation to Covid-19. While the government has an important convening role on tackling harmful misinformation, this process must be transparent and open to scrutiny.
  6. We need to move beyond the present situation where a minister can summon internet companies and call on them to remove certain content from the internet, with no parliamentary process for overseeing that in a democratic way. That is a dangerous gap, and it is Parliament’s responsibility to correct it in the Online Safety Bill.

Promoting authoritative information and sources

 

  1. Addressing online harms effectively is often not just about addressing bad information directly, it is also predicated on the supply and dissemination of good information. Despite efforts from some companies such as YouTube to promote ‘authoritative news’, high-quality news content and robust public information still struggles to get anywhere near the same level of traction as misinformation, as multiple studies have shown.[11] However, during the pandemic internet companies set themselves a high bar for promoting authoritative information, with most pointing users towards WHO and public health sources in well-designed news feed panels and redirecting users to robust sources in search results. The pandemic has shown that identifying authoritative news and information sources can be relatively straightforward, but promotion of robust, non-partisan information is not the norm outside of pandemic-related public health (and some countries’ elections).
  2. Parliament has previously recognised the need for news as part of a healthy society—it’s a required part of radio and television output, for example. As the relative share of attention in legacy media declines, and as audiences fragment, we recognise the erosion of the shared reality that comes from shared access to news. That has consequences for our democracy and society more generally. We believe that parliament could consider whether a similar requirement to include news content should now be applied to Tier 1 internet companies so that internet users are exposed to news in a similar way that broadcast audiences are. It is far better to pre-empt problems of misinformation by making good information readily available than to respond later with measures that restrict freedom of expression, and this may be one way of shifting the balance towards proportionate measures.

Are there any contested inclusions, tensions or contradictions in the draft Bill that need to be more carefully considered before the final Bill is put to Parliament?

 

  1. Well-meaning but poorly drafted measures could prove detrimental to individuals and society for years to come. Full Fact believes there are a number of such areas that should be very carefully considered before the final Bill is put to Parliament. This includes provisions relating to democratic speech, journalistic content, Secretary of State powers, transparency requirements and enhancing the role of the public in the success of the new regime. 

Democratic Speech

 

  1. The definition of “content of democratic importance” is not sufficiently clear in relation to what ‘is or appears to be specifically intended to contribute to democratic political debate’, and this has the potential for serious and unintended consequences. Whilst the explanatory notes offer two kinds of examples, this raises questions as to what would or would not count as such content beyond them. The definition of “a live political issue” is also not clear, and neither is it clear who determines that definition.

Journalistic Content

 

  1. We declare an interest in that Full Fact is a recognised news publisher within the meaning of the draft Bill.
  2. There are two main problems with the government’s proposed exemption for news publishers and journalists.
  3. First, the underlying principle is flawed. If the draft Bill really requires special protections to make journalism possible under its rules, then its restrictions on ordinary internet users go too far. Journalists should not need or have privileged freedom of expression compared to their audiences (as distinct from privileged access to information or protection from interference with newsgathering, which they do sometimes need and the law provides for).
  4. Secondly, it won’t work on its own terms and accidentally creates a loophole so big that some of the most reckless and harmful deliberate disinformation campaigns could sail through it. The criteria for being a recognised news publisher essentially boil down to setting a standards code, handling complaints, and having an identifiable publisher. It would be trivial for a malicious disinformation campaign to comply with those requirements. During the pandemic, Full Fact has seen the harm that can be caused by sites purporting to be legitimate news, and these sham sites cannot be excluded from the proposed regime.

Secretary of State Powers

 

  1. Full Fact believes that the powers envisaged in the draft Bill for the Secretary of State require close examination as they are too extensive and need to be narrowed in a number of areas.
  2. For example, the Secretary of State’s power to direct OFCOM to modify its codes of practice to bring them in line with ‘government policy’ (Clause 33(1)(a)) should be dropped. Ofcom’s independence as regulator should not be undermined by any provision in the Bill.
  3. Whilst many powers for a Secretary of State might be expected in an Act such as this, changes are warranted to ensure separation of roles and to strengthen the role of parliament, increasing its oversight and wider role, including in relation to how the framework and regime develop into the future.
  4. The Bill should allow the regulator the space to take decisions based on the available evidence. A case in point is the areas referred to in Clause 12: Ofcom should have sufficient powers to address threats to public safety, public security and national security.

Transparency requirements

 

  1. None of the internet companies are sufficiently transparent on the action they have taken to prevent misinformation online, including Covid-19 misinformation. Because internet companies can silently and secretly shape public debate, we need transparency requirements to be clear on the face of the Bill, so we can better understand what choices in-scope companies are making and why.
  2. What is needed is real-time information on suspected misinformation from the internet companies; independent scrutiny of the use of AI by these companies and its unintended consequences (see ‘Independent scrutiny of the algorithms of in-scope internet companies’); and real-time information on the content moderation actions taken by companies and their effects. It is not practical to ask for information only annually, given the speed at which misinformation evolves and the speed at which information crises occur. If there is only a snapshot of the problem months later, then the harm will already have occurred: retrospective reports do not provide timely information when it is needed by stakeholders to address online harms.
  3. We also argue above (under censorship-by-proxy) that transparency is needed on the relationship between government and internet companies when it comes to content moderation.
  4. We live in an age of real time information and a credible regulator must have the ability to direct those it regulates to provide information at specified times, in specified forms, and subject to specified verification.
  5. Full Fact therefore welcomes that the draft Bill does give Ofcom powers to obtain real-time information from internet companies, which is vital for future rules to work. Relying largely on an annual transparency report from tech companies would be inadequate.
  6. Simply providing information to Ofcom privately is not enough. As set out below under ‘Enhancing the role of the public’, it is insufficiently clear what will be published and made available to whom, beyond what might be made available to Ofcom, and there should be provision to maximise the public transparency of information so that it is available to all, including citizens, organisations and other actors working to reduce harms.
  7. The ongoing controversies concerning academic access to information about what is happening on internet platforms also underline how much further action is required.[12]
  8. Clause 101 (“OFCOM’s report about researchers’ access to information”) fails to resolve this concern. It will simply and predictably report on how the internet companies’ selective releases of information distort debate and policy in this area. The Bill needs to compel public transparency, not just report in two years on the lack of it.

Enhancing the role of the public under the Online Safety Bill and new regime

 

  1. Full Fact believes the draft Online Safety Bill and resulting regime would be more effective if the legislation and associated arrangements strengthened the role of the public in various ways.
  2. The process throughout must be inclusive and diverse with citizens and civil society participation both as the legislation is developed and in the resulting regulatory regime.
  3. Significant and continuous debate in Parliament and beyond with the public is needed on these measures given the potential wide-ranging implications and to ensure the resulting regime reflects the concerns of the public, for example in relation to harmful misinformation (see below).
  4. As above, under Transparency requirements, there should be provision to ensure public transparency to citizens by default. Whilst Ofcom should have the legal powers to access information it needs, the draft Bill is not clear on the extent of accountability to the public when it comes to information required.

More and better media literacy than set out in the draft Bill

   

  1. Given that the Online Safety Bill is in purpose and scope a law about how the regulator regulates internet companies and also how the regulator goes about media literacy, the new regime could be a huge opportunity for transforming media literacy in the UK in the digital era. However, the ambition is presently not sufficiently clear.
  2. The draft Online Safety Bill gives Ofcom powers to require transparency from service providers on what they are doing to improve the media literacy of their users and how they are evaluating the effectiveness of such action, and Ofcom can provide guidance on those efforts. Some additional transparency on what platforms are doing, together with this guidance, does not amount to a massive step change and could mean just more of the same as now: platforms reporting a lot of activity without much evidence that such efforts are solutions commensurate with the problems.
  3. The draft Online Safety Bill requires that Ofcom ‘carry out, commission or encourage educational initiatives designed to improve the media literacy of members of the public’. Again, whether through its own action or through what it leverages and inspires from others, will what Ofcom does be of the scale needed? Will it play its full part? The pandemic has reminded us again that providing good information proactively is the best way to limit the damage bad information can do. With audiences fragmenting, our public bodies need to gear up to do this at a much larger scale than in the past.
  4. Ofcom does very good work on media literacy. Yet it is limited in impact: its research on the state of play and what works is very useful, but Ofcom’s direct and indirect action needs to translate into accelerated progress—real world difference—and that means resources and initiatives must add up to enough to ‘move the dial’.
  5. Ofcom’s media literacy activity is presently focused on generating an evidence base of UK adults’ and children’s understanding and use of electronic media and sharing that evidence base within the regulator itself and with external stakeholders. It needs to be clearer to what extent Ofcom’s research into people’s media literacy needs plays a role in shaping public policy, or provides other organisations and agencies with evidence that informs what they do in their initiatives. That would also make it easier to envisage what will or needs to be different in the new regime of the eventual Online Safety Act. 
  6. Ofcom research must go beyond the state of play to more of what works, setting out a concrete action agenda that emerges from the evidence on interventions. For example, if Ofcom’s research released in April this year tells us that 24% of UK adults did not consider the potential trustworthiness of online information at all, this is important to know. But what is more important is, for example, what the internet platforms it will soon regulate could do to change that, alongside action by others.
  7. There are many areas in the media literacy provisions requiring scrutiny, deliberation and changes on the face of the Bill. Overall, commitments to ‘promote’ and ‘improve’ media literacy do not give a clear notion of the outcomes being sought: what level of improvement, and on what key measures, remains unclear. Without clear ambition from government and authorities such as the regulator, it is not clear what the shared project is. Taking the same Ofcom research figure above, that 24% of UK adults did not consider the potential trustworthiness of online information, what might be the target ambition? Should we not be aiming for practically all adults to consider it, bringing that figure down to something around 3%? By when? And what is the collective assessment of what it might take to help people to do that?
  8. It is difficult to envisage internet platforms playing a full part in media literacy without some shared sense of destination at a national level: of what media literacy levels are being worked towards. The Online Safety Bill and the annual plans of the Online Media Literacy Strategy could change that. At present there is very little substance on what Ofcom’s role will be in practice on preparing guidance ‘about the evaluation of educational initiatives’; ‘about the evaluation, by providers of regulated services, of any actions taken by them in relation to those services to improve the media literacy of members of the public’; ‘about the evaluation, by persons [i.e. the platforms] developing or using technologies and systems’; and ‘of the effectiveness of those technologies and systems in improving the media literacy of members of the public.’ 
  9. This Bill in its present draft is not ambitious enough on media literacy in the digital era. Ofcom could have low ambition or lack the leverage to improve this nation’s media, digital and misinformation literacy. Whilst much of this will not need to be in legislation, some changes are warranted to make sure real progress is made over time.
  10. Strengthening media literacy in this Bill would be part of the freedom-of-expression-respecting, citizen-supporting methods of tackling the problems in our information environment: whether navigating the everyday misinformation that comes with a democracy or more harmful misinformation.

Citizen voice and participation

  1. Citizens should have an ongoing voice in shaping the new regime so that it can be effective. A number of civil society groups are increasingly engaging with supporters affected by, and/or concerned about, harms online, and it is encouraging that some parliamentarians and committees are also reaching out directly to the public and to those representing the many groups impacted. There is more to do in this regard as the legislation is developed. The complexity of the draft Online Safety Bill makes that a very real challenge.
  2. There is increasing evidence that the public overwhelmingly want a better online environment. The public see the internet companies as bearing most blame for the spread of false or misleading information and want both those companies and politicians to take responsibility for acting to address it.
  3. UK parliamentarians now have the opportunity with the Online Safety Bill to fulfil this expectation and hope that harmful misinformation will be addressed, and to make sure any approach to tackling misinformation is risk-based and proportionate.
  4. In doing so, the Government and parliament should recognise that the voice and participation of the public is critical to success. Avenues to realise this should be strengthened through dialogue on what is in the Bill, how the regime evolves, and the accountabilities that need to flow to people as users of services and as citizens.
  5. One example concerns the draft Bill’s provision for Ofcom to set up an expert advisory group on mis/disinformation. Whilst thoroughly inadequate in itself to address the harms that can result from misinformation, this could be a useful body, especially if it has strong representation from civil society groups, and it is positive that representatives of UK users of regulated services, as well as experts, will be included. This is one area where the provisions could be strengthened so that there is a wider forum to ensure citizens affected by harmful misinformation are active participants in the system and how it operates.
  6. On one dimension, freedom of expression, ordinary citizens should not have to worry about their own voice online as a result of the resulting law and regulation. This and other important principles need to come to the fore. 
  7. If the draft Online Safety Bill can be changed for the better, as it absolutely must be in the months ahead, the prospect of a better online environment for the people of the UK can become more of a reality, with all the benefits that can follow—from improved public debate to a healthier society and democracy.

 

September 2021

 


[1] https://fullfact.org/about/funding/

[2] Online Media Literacy Strategy, July 2021 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1004233/DCMS_Media_Literacy_Report_Roll_Out_Accessible_PDF.pdf

[3] RESIST Counter Disinformation Toolkit, 2019 https://gcs.civilservice.gov.uk/publications/resist-counter-disinformation-toolkit/ 

[4] Government cracks down on spread of false coronavirus information online, Mar 2020 https://www.gov.uk/government/news/government-cracks-down-on-spread-of-false-coronavirus-information-online

[5] ‘Coronavirus: Tech firms summoned over 'crackpot' 5G conspiracies’ BBC News, Apr 2020 https://www.bbc.co.uk/news/technology-52172570

[6] https://www.itv.com/news/westcountry/2021-01-27/facebook-removes-posts-featuring-plymouth-hoe-for-being-offensive

[7] We are aware of this directly from the police force concerned.

[8] Kevin Nguyen of ABC News compiled a list of many pages blocked when Facebook attempted to filter out news content in Australia. Along with Facebook’s own page, it included a state government, a domestic charity, and a Fire Department https://twitter.com/cog_ink/status/1362173998696566790

[9] https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

[10] Integrated Review debate, House of Commons, 16 March 2021 https://hansard.parliament.uk/commons/2021-03-16/debates/52D67D49-A516-4598-AC69-68E8938731D9/IntegratedReview#contribution-76EFB229-E887-4B38-BBBD-09B5E13FA760

[11] https://www.socialmediatoday.com/news/new-study-shows-that-misinformation-sees-significantly-more-engagement-than/555286/; https://www.wired.com/story/right-wing-fake-news-more-engagement-facebook/

[12] https://www.theverge.com/2021/8/4/22609020/facebook-bans-academic-researchers-ad-transparency-misinformation-nyu-ad-observatory-plug-in