Guardian News & Media—written evidence (FEO0093)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

About Guardian News & Media

 

Guardian News & Media (GNM) is the publisher of theguardian.com and the Guardian and Observer newspapers, both of which have received global acclaim for investigations, including the Paradise Papers and Panama Papers, the 2018 Windrush scandal and Cambridge Analytica. GNM is part of Guardian Media Group (GMG), one of the UK's leading commercial media organisations and a British-owned, independent, news media business. As well as being the UK’s largest quality news brand, the Guardian has pioneered a highly distinctive, open approach to publishing on the web and it has achieved significant global audience growth over the past 20 years. Our endowment fund and portfolio of other holdings exist to support the Guardian’s journalism by providing financial returns.

 

Executive summary

 

        The internet has fundamentally changed the way we communicate in society today, affecting how people express themselves and interact online. Balancing free expression with the power and suggestive capability of interconnected global networks is a source of tension in societies across the world.

 

        The legal framework that governs free speech online does not currently encourage consistency in the regulation of speech acts by online platforms, nor in the terms on which key hosting and technology services are supplied to those platforms.

 

        The international legal basis for free expression does provide limited exceptions to that right. The UK Parliament has already legislated to create a number of criminal statutes that seek to balance and protect the right to free expression.

 

        We welcome the government’s online harms proposals, but we have concerns about the same regulatory framework being applied to illegal and legal harms.

 

        The business models of digital platforms rely on the virality of their content, and algorithm-based features are designed to prioritise content with high engagement, rather than high-quality, accurate information.

 

        As the government seeks to nullify the negative externalities caused by the content and activity facilitated by online platforms with strategic market status (SMS), it should focus first on implementing powers that enable regulators to introduce competition and accountability into the online advertising market.

 

        The government should: prioritise the publication of a white paper to finalise the detail required to create the digital markets unit in legislation; begin the process of designating online platforms with strategic market status; and use the parliamentary time available to pass the legislation required to establish the Competition and Markets Authority’s (CMA) proposed digital markets unit, in parallel with legislation to regulate online harms.

 

        While those on the far right have attempted to weaponize the concept of free speech, evidence suggests that individuals from a BAME background, LGBTQ people and other minority groups, face the most significant harassment on social media platforms.

 

        We note that repeated studies by academics and media organisations have failed to identify any censoring of debate or 'liberal bias' within news feeds or search results.

 

        Where journalism from trusted news sources is distributed through search and social platforms, those platforms should not be put in the position of determining when a piece of journalism should be taken down. Instead, the independent online harms regulator should ensure that online platforms have processes in place so that:

 

     Platform terms of use should clearly state a presumption that content distributed by news publishers - according to an agreed definition - should not be subject to obligations placed on online platforms as a result of online harms legislation.

 

     Where platforms are notified of a potential breach of terms of service in relation to journalism, or to user generated comments connected to a piece of journalism published by a “relevant news publisher”, there should be a process in place to notify that publication. This notification should accord with clear transparent timelines and processes and be subject to a special category of recording by the proposed independent regulator.

 

        The online harms framework should contain sufficient checks and balances on the face of the bill, including safeguards for the right to freedom of expression and freedom from undue political interference.

 

     Individuals should be able to appeal to an independent regulator where a decision to take down content, delete accounts, or curb online activity, is overzealous or wrong.

 

        It is vital that in seeking to make online platforms accountable for the content and activity that they host, the government does not use this process as a way to deploy an enlarged system of government surveillance on the lawful activity of citizens.

 

        We agree that the government has a role in ensuring that there is a co-ordinated media literacy strategy, through a focus on the school syllabus and potential funding for providers, resources and training.

 

 

Introduction

 

Freedom of expression, which includes comment, is essential to the maintenance of a free society. As the Council of Europe has noted, “Freedom of expression constitutes one of the essential foundations of [a democratic] society, one of the basic conditions for its progress and for the development of every man.”[1] The right to freedom of expression is not without limits, but the right to hold and express an opinion is critical to the progression of society. “Subject to paragraph 2 of Article 10 [of the European Convention on Human Rights], freedom of speech is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’.” (Handyside v. the United Kingdom judgment of 7 December 1976, § 49).

 

In the context of freedom of expression of the news media, GNM has consistently opposed attempts to impose additional limits to freedom of expression on publishers. We submitted to the Leveson inquiry that “[s]ubject to existing and relatively clear constraints imposed by the criminal and civil law, attempts to interpose an additional layer of censorial regulation on such articles in an attempt to ensure that they stay within the bounds of ‘acceptable’ societal standards should be avoided. Similarly, imposing conditions of accuracy on opinion or comment is a very slippery slope and, moreover, an approach which is squarely at odds with legal authority.”

 

Within the scope of this Committee’s inquiry is the interplay between social media and freedom of expression. The Committee’s call for evidence for this inquiry[2] notes that the founders of Twitter and Facebook believe their products to be the modern equivalent of the ‘public square’. As the Committee notes, they are in fact private companies, operating for commercial ends. They have no commitment to the moral, spiritual or physical state of the commonwealth, only a fiduciary responsibility to maximise profit on behalf of shareholders.

 

Recent decisions by both companies[3][4] to permanently suspend the account of the outgoing President of the United States, after years of consistent breaches of their policies, should be seen within that context of commercial self-interest.[5] Or as Guardian columnist Marina Hyde noted, these actions were “not so much a case of shutting the stable door after the horse has bolted as doping the horse, whipping it into a frenzy, encouraging it to bolt, fostering a world in which humans are subjugated by horses, monetising every snort and whinny, [and] allowing the very existence of ‘humans’ and ‘horses’ to become just one of a bunch of competing opinions.”[6] As the academic Emily Bell has noted, ‘Social media companies’ mistake has been to assume that unregulated speech from powerful people is a hallmark of democracy rather than a threat to it. In 2015, the Trump campaign released a video calling for a “total and complete shut down of Muslims” entering the US. Although it was in clear violation of Facebook’s hate speech policies, chief executive Mark Zuckerberg decided to leave the video up, with the justification that the speech of a leading candidate was “newsworthy”.’[7]

 

There are cultural differences between the US, with its first amendment, on the one hand, and European and other societies, which have histories of moderating speech by trying to balance competing rights. In the US, the lax regulatory climate over the last several decades, which facilitated the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere, is epitomised by section 230 of the Communications Decency Act (CDA), which effectively provides immunity for those hosting third party comment. It is now argued, on a bi-partisan basis in the US[8], that section 230 of the CDA, introduced with the best of intentions in a free speech context, has allowed the public space to become so unregulated and hostile that it should now be repealed or modified. That this debate is complex is illustrated by the fact that President Trump is amongst those now calling for its repeal[9].

 

In a democracy, there are important questions about who makes decisions about which speech is acceptable, and on what basis. The decisions by Twitter and Facebook to permanently suspend the account of the outgoing President of the United States (POTUS)[10], after he had lost power, combined with the decisions by Google, Apple and Amazon Web Services (AWS) to remove key retail access and key services from the prominent right-wing social media app Parler[11], after the app was implicated as a vector through which the events of January 6th were organised, raise more questions than they answer about decisions regarding free expression online. While there is understandable relief[12] that networks used to incite violence and hatred of fellow US citizens have been taken offline by those platforms, the moves to silence every user of the Parler app could be argued to be imprecise and disproportionate. In terms of the consistency of decision making, Facebook ignored repeated breaches of platform terms of service for years while the former POTUS was in office. Facebook finally moved to silence the former POTUS only when it became clear that his democratic mandate had gone and that a new administration was incoming, moving perhaps to demonstrate to the US Congress that it can take responsibility for speech acts committed on its platform, ahead of what is likely to be a heated debate about the future of s230 of the CDA, platform regulation, and legal liability.

 

When considering the role that the platforms play in safeguarding the right to free expression, it is important to note that a number of these platforms also have an explicit commercial self-interest in keeping the new US administration onside. In addition to providing services to US consumers, these companies now vie with each other to win multi-billion dollar contracts with the US government.[13] [14] They also rely on the US government to lobby on their behalf with foreign governments, where those governments seek to develop and legislate tax laws[15] or licensing obligations[16] that would bring the behaviour and practices of online platforms in line with established local norms.

 

While both Twitter[17] and Facebook[18] did, albeit belatedly, seek to enforce their terms of service against the account of the POTUS, the move by Amazon Web Services (AWS) to remove services from Parler appears to have come without warning. In legal correspondence submitted by Parler as part of a lawsuit challenging AWS’s decision to terminate its services, Parler alleges that during “the final call between Parler and AWS before the latter pulled the plug, AWS officials told Parler officials that there was nothing Parler could do to get its service back.”[19] Parler states that the decision by AWS to cut services to Parler was based on the assertion that Parler was “used to incite, organise, and coordinate the January 6th attack on the U.S. Capitol.”[20] Yet Parler alleges that AWS provided no evidence to support this accusation, relying only on what Parler describes as “unsupported speculation from reporters.”[21] Given that reporting on the use of Parler to incite the January 6th attack was the basis for removing services, it is not clear why AWS did not also terminate services provided[22] to Facebook group companies[23]. The alleged abrupt, unevidenced and inconsistent nature of the process that underpinned the decision by AWS to remove services from Parler suggests that this was a purely political decision, which had a disproportionate impact on the speech rights of users of Parler. It suggests too that there is little consistency in the application of terms of service.

 

As Matt Stoller, fellow of the Open Markets Institute, has written: “what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence…. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.”[24] The framework of laws that governs the operation of apps that host free speech acts does not place sufficient responsibilities on platform executives to act against content that should arguably be deemed illegal, while at the same time enabling the platforms and technologies that are critical to the functioning and findability of those apps to remove services from those apps without clear routes to redress.

 

These questions of the power of dominant online platforms, and the exercise of that power to silence elements of society, really matter. For many people, dominant online platforms have become the internet, or at the very least the principal gateways to it. Data from UK regulator Ofcom[25] shows that half of adults in the UK now use social media to keep up with the latest news. Likewise, in the US, Facebook is the most common social source of news, with 52 percent of respondents getting news there[26]. According to the University of Canberra’s 2020 Digital News Report[27], 39% of Australians use Facebook for general news, and 49% use Facebook for news about Covid-19. This means that dominant online platforms now play a key role in our information ecosystem.

 

Yet it is also key to be aware that it is the interplay between the viral, networked functionality of social media platforms, content published by the mass media, and skilful media manipulation by elected politicians, that is resulting in the descent of democracy into the scenes of domestic terrorism that we see playing out in the United States today. Findings of a recent paper by Yochai Benkler and others[28] suggest that while social media platforms are often the key vector through which speech that motivates illegal and hateful acts is recommended and consumed, the content itself often comes from mainstream sources.[29] This raises questions about the culpability of mainstream news sources themselves, and about the content that they are prepared to publish in pursuit of profit.[30] As James Murdoch noted recently, those “outlets that propagate lies to their audience have unleashed insidious and uncontrollable forces that will be with us for years.”[31]

 

It is important to note that even the strongest historical proponents of the concept of free speech recognised that there were limits to that right. As one commentary on On Liberty notes, the philosopher John Stuart Mill recognised that “action cannot be as free as speech. He immediately provides the example of speech in front of an angry mob that could incite violence. Mill contends that such speech should not count as free speech but is action, and when harmful should be regulated.”[32] There are those on the right who would defend the substance of comments made by the former POTUS - literally in front of a mob - and the right for those comments to be promulgated across social media platforms. But academics have noted how claims of free speech by those on the far right are part of an attempt to weaponize the concept of free speech for their own political ends. Joan W. Scott, professor emerita at the Institute for Advanced Study, in Princeton, has noted, “These days, free speech is the mantra of the right, its weapon in the new culture war. The invocation of free speech has collapsed an important distinction between the First Amendment right of free speech that we all enjoy and the principle of academic freedom that refers to teachers and the knowledge they produce and convey. The right’s reference to free speech sweeps away the guarantees of academic freedom, dismissing as so many violations of the Constitution the thoughtful, critical articulation of ideas; the demonstration of proof based on rigorous examination of evidence; the distinction between true and false, between careful and sloppy work; the exercise of reasoned judgment. To the right, free speech means an entitlement to express one’s opinion, however unfounded, however ungrounded, and it extends to every venue, every institution.”[33]

 

It is clear that balancing free expression with the power and suggestive capability of interconnected global networks, which allow the largely unedited and uncontrolled broadcast of speech in real-time, is a source of tension in societies across the world. What has become starkly evident in recent months, is that this tension is as pronounced, and as potentially detrimental to the functioning of well-established democracies, as it is to nascent democracies or undemocratic states.

 

The legal basis for the protection of free expression

 

The legal basis for protecting free expression is provided in many international conventions. In addition to Article 10 of the European Convention on Human Rights, it is also included in Article 19 (1) of the International Covenant on Civil and Political Rights (ICCPR)[34] which states that the protection of the right to hold opinions should be without interference.[35] The Covenant sets out that “No person may be subject to the impairment of any rights under the Covenant on the basis of his or her actual, perceived or supposed opinions. All forms of opinion are protected, including opinions of a political, scientific, historic, moral or religious nature. It is incompatible with paragraph 1 to criminalize the holding of an opinion.”

 

But even the Covenant contains qualifications on that right, including on the basis of ‘respect for the rights or reputations of others’, and in order to protect “national security or of public order (ordre public), or of public health or morals.” The Covenant notes that any “such limitations must be understood in the light of universality of human rights and the principle of non-discrimination. Restrictions must be “necessary” for a legitimate purpose. Restrictions must not be overbroad.”

 

Commentary on the Covenant provided by the United Nations suggests that regulatory systems “should take into account the differences between the print and broadcast sectors and the internet, while also noting the manner in which various media converge… The penalization of a media outlet, publishers or journalist solely for being critical of the government or the political social system espoused by the government can never be considered to be a necessary restriction of freedom of expression.”

 

The Covenant notes that any restrictions on “websites, blogs or any other internet-based, electronic or other such information dissemination system, including systems to support such communication, such as internet service providers or search engines, are only permissible to the extent that they are compatible with paragraph 3… Permissible restrictions generally should be content-specific; generic bans on the operation of certain sites and systems are not compatible with paragraph 3.”

 

In addition, the Covenant notes that “it would be impermissible for any such laws to discriminate in favour of or against one or certain religions or belief systems, or their adherents over another, or religious believers over non-believers. Nor would it be permissible for such prohibitions to be used to prevent or punish criticism of religious leaders or commentary on religious doctrine and tenets of faith.”

 

“Laws that penalize the expression of opinions about historical facts are incompatible with the obligations that the Covenant imposes on States parties in relation to the respect for freedom of opinion and expression. The Covenant does not permit general prohibition of expressions of an erroneous opinion or an incorrect interpretation of past events. Restrictions on the right of freedom of opinion should never be imposed and, with regard to freedom of expression, they should not go beyond what is permitted in paragraph 3 or required under article 20.”[36]

 

Article 20 of the Covenant further attempts to balance freedom of expression, stating that “Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

 

UK laws against hate speech

 

In relation to the parameters of free speech in the UK, it should be noted that, unlike the US, the UK does not give its citizens a first amendment-style right to free speech. Indeed, the UK Parliament has already legislated to create a number of criminal statutes that seek to balance and protect the Article 20 rights referred to above. The Crown Prosecution Service (CPS) also has guidelines in place in relation to racist and religious hate crime[37], homophobic, biphobic and transphobic hate crime[38], and hate crime perpetrated against disabled people[39].

 

The human rights organisation Liberty provides a useful summary of speech offences in the UK[40], noting that these “offences can be counterproductive (giving extreme groups publicity in the event of any trial), can have a chilling effect on legitimate debate and peaceful protest and have been extended in an ad hoc and piecemeal way. There have been very few prosecutions under these offences, and there is therefore an urgent need to review their efficacy and impact.”[41]

 

In recent years, there has been an increasing number of prosecutions of people commenting on social media platforms under these provisions. Many concerns have been raised about the appropriateness and proportionality of such prosecutions, resulting in the publication of the CPS code of practice.[42]

 

However, the exponential increase in the use of social media in recent years, and the violence of the threats that have been directed particularly at women such as Andrea Jenkyns[43], Stella Creasy[44] and campaigner Caroline Criado Perez[45], have reaffirmed that some sort of criminal sanction is necessary for such cases. The difficulties involved in balancing these various rights, freedoms and responsibilities are also recognised by the CPS’ publication of another code of practice.[46]

 

In addition to a range of legal protections, the UK press has also, via the IPSO Editors’ Code of Practice and alternative industry codes, added a further level of self-regulation in this area[47].

 

What distinguishes news publishers from online platforms is the fact that news media publishers have primary legal liability for the content they publish. Combined with cross-industry codes of practice, and the transparent placement of advertising within news environments - both in print and online editions - this creates a framework of norms and standards that places multiple pressures on news publishers to publish material that accords with professional standards. Online platforms with SMS are under no such legal, self-regulatory or commercial pressures.

 

Tech platform regulation

 

Global tech platforms do not have transparent editorial codes, nor do they have established cultural or public interest norms which their users understand. Legally they are exempt from most primary publishing liabilities thanks to exemptions in the EU eCommerce Directive and in the US under s230 of the Communications Decency Act (CDA). The absence of legal liability for speech acts on these platforms has undoubtedly contributed to the rapid user and advertising growth these platforms have experienced.

 

The business model of social media platforms is to create engaged user bases through which personal data is processed in order to create intelligence, which is then used to target advertising to citizens on any device, at any time of the day, wherever they are engaged, whether on or off that social media platform. This activity enables social media platforms to merge multiple data sets to enable perfect sight of user data, and facilitates expansion into new markets (whether in terms of geography or business model) with relative ease.[48] It is not a business model that relies on the publication of high quality content, rather it relies on the continued ability of those platforms to capture and retain the attention of the user and the network of friends and associations with whom the user communicates on a regular basis.

 

The absence of primary legal liability means that content regulation on social media platforms has generally been conducted only according to ad hoc community guidelines[49], often enforced by contractors working in poor conditions[50]. Facebook’s moderation guidelines were private, until they were revealed by the Guardian in 2018.[51] The only legal liability that seems to limit their activities in any way is unlawful hate speech[52] [53], and even that liability is fairly restricted.[54][55]

 

Over recent years it has become clear, largely through journalistic reporting rather than regulatory action, that the executives who own and manage the platforms are themselves either confused about their legal and ethical role under the current regulatory framework[56], or are prepared only to enact and enforce those policies where it is politically (and by extension, commercially) advantageous for them to do so. These shortcomings were recently highlighted in internal Facebook messages, posted during key run-off elections to represent Georgia in the US Senate, which appeared to show that senior Facebook executives had no idea whether political advertising that contained misinformation should be restricted or allowed to propagate to millions of voters through its systems.[57]

 

Online platforms appear unclear whether existing regulation allows them to intervene to remove content, or whether by intervening, this automatically means that they take on editorial responsibilities. It is clear that, in the wake of a series of scandals about the consequences of their inaction, senior executives at some of the biggest online platforms have very recently discovered an appetite for new laws that clarify their role and responsibilities online.[58][59] It is clear too, that the threat of regulation is driving more proactive steps being taken by the platforms themselves.[60]

 

Such an ad hoc approach to content distributed on Facebook group company platforms is made possible by a business model that completely disconnects the advertiser from the content that their advertising appears beside on social media platforms. The current lack of legal obligations for platforms to pre-vet content prior to publication enables online platforms with strategic market status to publish such huge volumes of content, which in turn enables the creation of a huge volume of advertising inventory, which enables those platforms to rely on an almost limitless base of potential advertisers, large and small.

 

In response to a recent advertiser boycott of Facebook[61], due to the volume and nature of hateful speech acts on its platform, it is understood that the Facebook CEO told staff that they’re “not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.[62] On 18th December 2020, it was announced that Unilever, one of the world’s biggest advertisers, was to resume marketing across Facebook platforms.[63]

 

Speech regulation in the context of the medium through which it is delivered

 

As GNM noted in its submission to the Leveson inquiry, legal precedent has developed to foster differential levels of content regulation depending on the medium through which that content is discovered. In R (ProLife Alliance) v BBC [2004], Lord Hoffmann said that the ‘power of the medium is the reason why television and radio broadcasts have been required to conform to standards of taste and decency which, in the case of any other medium, would nowadays be thought to be an unwarranted restriction on freedom of expression. The enforcement of such standards is a familiar feature of the cultural life of this country. And this fact has given rise to public expectations. The Broadcasting Standards Commission puts the point with great clarity in paragraph 2 of its Code on Standards (Codes of Guidance, June 1998): ‘There is an implied contract between the viewer, the listener and the broadcaster about the terms of admission to the home. The most frequent reason for viewers or listeners finding a particular item offensive is that it flouts their expectation of that contract—expectations about what sort of material should be broadcast at a certain time of day, on a particular channel and within a certain type of programme, or indeed whether it should be broadcast at all.’[64]

 

Given the growing centrality of a small number of social media platforms to the lives of the vast majority of citizens, in delivering personalised services that are built to push content that satisfies the wants, needs and political views of the individual user, it is arguable that social media platforms have a more widespread, and much deeper relationship with users and their networks of friends and family, than any television broadcaster has ever had. It is certainly the object of Facebook’s marketing teams to suggest that the level of engagement that Facebook has with its users is akin to that of the medium of television[65]. That being the case, it appears logical that key aspects of regulation should flow across from the world of broadcast content to the social media context.

 

In the rest of this submission, we respond to specific questions posed in the Committee’s call for evidence.

 

  1. Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

According to the EU Human Rights Guidelines on Freedom of Expression Online and Offline[66], the global and open nature of the internet is providing citizens with new opportunities for exchanging information and opinions. The obligations of States under international human rights law, in particular the right to freedom of expression, the right to privacy and the protection of personal data, extend to the online sphere in the same way as they apply offline.

 

As noted above, the UK Parliament has legislated to create a number of criminal statutes that seek to balance and protect the Article 20 rights referred to above. The CPS also has guidelines in place in relation to racist and religious hate crime, homophobic, biphobic and transphobic hate crime, and hate crime perpetrated against disabled people. The creation of these guidelines reflects the fact that it is women[67], individuals from a BAME background, LGBTQ people and other minority groups - those who are not well-off personalities and do not have a newspaper column or talk show - who face the most significant harassment on social media platforms.[68] There are, therefore, existing laws in place to deal with the harassment of UK citizens online. There is, however, a valid question about whether the reporting and policing of crimes committed on social media platforms is sufficiently joined up and taken seriously[69].

 

It is in the context of a lack of enforcement of existing laws that we are concerned about proposals in the recent online harms white paper to shift the emphasis in how social media platforms are policed. The online harms white paper not only seeks to regulate harms that have been judged illegal by the UK Parliament, but also seeks to regulate content that is legal but deemed harmful by the government. The lack of clarity on obligations to regulate legal but harmful content, backed by potential fines and other sanctions if platforms fail to do so, places significant new legal responsibilities on online platforms. If executed with imprecision, there is a danger that these obligations could lead to the chilling of legitimate freedom of expression on these platforms.

 

As noted above, the UK Parliament has experience of passing laws that restrict speech, whilst also meeting long standing legal obligations in relation to protecting free expression. On that basis, we do not believe that the same regulatory framework should be applied to illegal and legal harms. Legal but harmful content includes harms that are subjective and vague such as ‘disinformation’ and ‘intimidation’. As outlined above, the introduction of action against subjective harms raises potential issues in relation to the right to free expression.

 

The white paper includes obligations on platforms within scope to combat the “evolving threat of disinformation and misinformation”[70]. Judgements as to which individual pieces of content represent disinformation or misinformation require careful, subjective assessment by a human being, in order to understand the detail and nuance of any speech act. The scale of sharing of user generated content across companies owned by one platform, Facebook, means that in the face of pressure to change their policies in relation to a range of harmful content and behaviour on their platforms, they will seek to use machine learning and algorithms to make what are highly subjective decisions.[71] Such machine driven decisions are unlikely to be able to make nuanced judgements in the way that a human editor can, and could lead to the overblocking of user content. This has the potential to lead to a significant chilling of free expression. If a harm is too vague to be defined by policymakers, it is not reasonable to impose an obligation on platforms to make decisions as to whether a piece of content or activity reaches an unclear threshold. This aspect of the proposed online harms legislation should be interrogated in detail as the bill moves through Parliament.

 

In terms of wider claims that freedom of expression is under threat online, the assertion that right-wing points of view are being suppressed online has become a hallmark of the populist movement in the United States[72] and, to a lesser extent, the United Kingdom. As we noted in a recent submission to the Committee’s recently concluded inquiry into the ‘Future of Journalism’, our own desktop research shows that repeated studies by academics and media organisations have failed to identify any censoring of debate or 'liberal bias' within news feeds or search engine results pages (SERPs).[73]

 

Despite multiple reviews concluding that allegations that Facebook censors right wing news sources are bogus, such allegations persist[74]. The evidence is, rather, overwhelmingly clear in the opposite direction: third party data on news engagements on Facebook[75] consistently shows that right wing sources are the most read on the platform. In terms of search engines and search rankings, some UK news publishers have raised concerns suggesting their prominence has been subjectively downgraded within Google search on the basis of a right wing political outlook.

 

In our previous submission to the Committee’s inquiry into the ‘Future of Journalism’[76], we explained that Google ranks websites according to a range of objective technical criteria, rather than content. We noted too, that the Daily Mail, which has raised concerns with the Committee about supposed subjective bias, performs poorly across those technical criteria. The technical tools that Google uses to analyse and rate the performance of dailymail.co.uk are publicly available to staff at the Daily Mail via any Chrome browser[77]. The metrics that Google uses to rate the performance of a website are within the control of the Daily Mail, and could be remedied by the Daily Mail if this was deemed an area of improvement in which they chose to invest. The fact that the Daily Mail site rates poorly on key Google audit metrics, compared to other news media publishers operating in the UK market, is likely to have an impact on its appearance in SERPs.

 

It is more likely, therefore, that the differential prominence of news sources on different platforms is less the result of supposed political bias, and more the result of the distribution strategy pursued by those publications. A strategy designed to ensure prominence on social media is very different from a distribution strategy that seeks to drive prominence and engagement through SERPs. There is no one size fits all approach to the distribution of journalism online, no one model that can deliver equivalent success on all search and social platforms.

 

 

  2. How should good digital citizenship be promoted? How can education help?

 

GNM agrees with the Committee that “media literacy is crucial. It means more than identifying ‘fake news’; it is about understanding journalistic processes and their value, how news is presented online and how it is funded”.[78] We also agree with both the DCMS select committee report on fake news and the European Commission’s high-level group on fake news that an increase in news literacy has the potential to be good for decision making in society, and positive for commercial organisations that create high-quality journalism.

 

We note the recommendation of this Committee’s recent report on ‘The future of journalism’ that “Government’s upcoming media literacy strategy should include coordination between the Department for Education, Ofsted and Ofcom on how to better integrate critical thinking and media literacy into the school curriculum.”[79] The government has a role in ensuring that there is a co-ordinated media literacy strategy, through a focus on the school syllabus and potential funding for providers, resources and training.

 

Through the Scott Trust, the independent charitable Guardian Foundation empowers children and young people to engage with the news. The charity envisions a world in which all people can tell their stories, access the truth and hold power to account. In the last 12 months, NewsWise, the Guardian Foundation’s award-winning free news literacy project run in partnership with the National Literacy Trust and the PSHE Association, educated 2,563 children aged seven to 11 in 47 primary schools in disadvantaged areas across the UK. After taking part in the programme, twice as many pupils were able to tell whether a news story was real or fake (from 32.7% to 67.2%) and pupils were more than twice as likely to feel able to tell if a news source was trustworthy (33.3% to 82.8%).

 

In response to school closures as a result of coronavirus, NewsWise created an online family zone with activities, links, tips and advice to help families learn more about the news together, a series of teacher training webinars, and the Happy News Project to help primary pupils transition back to school with a focus on wellbeing, uplifting stories, teamwork, speaking and news writing. The programme was also recognised in Nesta’s 2019 “Democracy Pioneers” awards, winning a £10,000 prize for work to improve people’s understanding and experience of democracy in the UK.

 

Until the pandemic hit, the Guardian Education Centre ran inspirational in-person news media workshops designed for schools, universities, teachers and families. During the pandemic, the Guardian Foundation transitioned its workshops to digital and delivered virtual learning to almost 300 people during the academic year, providing resources, activities and ideas to teach and engage young people with news and journalism.

 

It is also essential that judges are fully cognisant of social media and how such platforms operate, to ensure that this knowledge can be used to improve the quality of judicial decision making.

 

  3. Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

As stated in GMG’s response to the online harms white paper[80], it is possible for the UK Parliament to pass laws that restrict speech, whilst also respecting obligations in relation to protecting free expression. On that basis, we do not believe that the same regulatory framework should be applied to illegal and legal harms. Legal but harmful content includes harms that are subjective and vague such as ‘disinformation’ and ‘intimidation’. As outlined above, the introduction of action against subjective harms raises potential issues in relation to the right to free expression. While it is welcome that the government has been clear that news media organisations will not be subject to online harms legislation, there is a danger that the inclusion of vague terms could impact on reputable news sources, other organisations and individuals, depending on how they are applied and interpreted by online platforms in scope of the legislation. The detail of these proposals will require careful scrutiny through the Parliamentary process.

 

Ultimately, if the government and Parliament believes that an activity is sufficiently harmful to criminalise that activity, it should do so through primary legislation. If a harm is too vague to be defined by policymakers, it is not reasonable to impose an obligation on platforms to make decisions as to whether a piece of content or activity reaches an unclear threshold.

 

  4. Should online platforms be under a legal duty to protect freedom of expression?

 

In relation to the dissemination of journalism via online platforms, GNM welcomed Ministerial commitments to exclude news publishers from the scope of online harms legislation where they are already governed by successful self-regulatory procedures. The government reaffirmed this commitment in December 2020, in their response to the online harms white paper, setting out that content and “articles produced and published by news websites on their own sites, and below-the-line comments published on these sites, will not be in scope of legislation.” In addition, we were pleased that the legislation will also “include robust protections for journalistic content shared on in-scope services.”

 

Where journalism is distributed through search and social platforms, news publishers are keen to ensure that the online harms framework does not result in search and social platforms becoming de facto regulators of the news industry. It would be a retrograde step if search and social platforms were put in the position of determining when a piece of journalism should be taken down, which would severely restrict UK citizens’ access to news and information. GNM believes that to prevent search and social platforms being put in this position, the independent online harms regulator should ensure that online platforms have processes in place so that:

 

        Platform terms of use should clearly state a presumption that content distributed by news publishers - according to an agreed definition - should not be subject to obligations placed on online platforms by online harms legislation.

 

        Where platforms are notified of a potential breach of terms of service in relation to journalism, or to user generated comments connected to a piece of journalism published by a “relevant news publisher”, there should be a process in place to notify that publication. This notification should accord with clear transparent timelines and processes and be subject to a special category of recording by the proposed independent regulator.

 

GNM has recent experience of environmental coverage being targeted by fact checkers for removal from Facebook, based on claims that the article did not provide certain context[81] for the specific points made in the article. In reality, the level of context that the complainant in question was requesting is well beyond that which would normally be provided in a journalistic context.

 

The ad hoc process by which fact-checking and content takedowns take place misses the nuance of the judgements made when publishing a story. It enables fact-checkers to condense peripheral concerns into a top line accusation suggesting that the Guardian is misleading readers or producing clickbait on the environment. The current system enables fact checking organisations to post articles claiming issues with a legitimate story, without putting any of the claims in that post to the journalist or the news publisher more broadly. The impact of these claims can be significant: the article may appear lower in the Facebook News Feed and may be labelled as misinformation. The effect of such accusations on public perceptions of our journalism, and the trustworthiness of mainstream science journalism more broadly, is a significant concern. The onus is on GNM to contact the fact-checking organisation in order to correct the labelling of articles by Facebook, or to dispute a rating. Fact checking organisations can, therefore, wield a lot of power over the credibility of individual news articles and, over time, news sources themselves. Yet the basis on which an organisation is given official fact-checking status by Facebook is unclear. Facebook has, for example, designated right-wing sites such as the ‘Daily Caller’ as fact-checkers, empowered to make these sorts of decisions.[82]

 

It is clear that processes of fact-checking and content takedown on online platforms with strategic market status, must be backed by accountable processes and ‘real world’ details like a postal address and with named individual platform employees involved rather than simply automated systems. The independent regulator, proposed in the online harms legislation, could play a role where news publishers believe that the appeals body of the platform has made a substantive error in its case review.

 

In relation to individual speech acts made by platform users, it is vital that online search and social platforms should back up decisions about the enforcement of their terms of use with a clear, fair and well-governed system of appeal, redress or customer service, as appropriate.

 

 

  5. What model of legal liability for content is most appropriate for online platforms?

 

In relation to illegal harms, we agree that platforms should have clear obligations to take steps and build systems that counter those harms. That includes using technology and systems to prevent those harms from occurring, but as outlined earlier in this submission, there remains a role for human moderators - working in good conditions - to make judgement calls about whether to take down (or leave up) content. The fact that human moderation of content, in order to assess it against clear terms of service, would require significant resource investment by those platforms is irrelevant. It is worth noting that the CMA has found that in the UK alone, “Google earned £1.7 billion more profit in 2018 than the benchmark level of profits. For Facebook, the comparable figure for 2018 was £650 million.”[83] The central focus of changes to liability, and the creation of regulatory systems, should be to shift the burden of responsibility from the reporting of content by users, to platforms treating the prevention of illegal harms as a central focus to which they apply all available proactive strategies.

 

Harassment, cyber-stalking and hate crime represent a subset of illegal activity that all raise important free speech issues but are, as a matter of practicality and law, difficult to differentiate from legal but harmful activities such as cyberbullying, trolling and extremist rhetoric and material. Special care must be taken by policymakers to ensure that there are clear definitions and dividing lines between illegal and legal categories of harm. Parliament should play a role in closely scrutinising the definitions of illegal and legal but harmful content, to ensure that there is a strict divide in place between how these different harms are treated.

 

To regulate illegal harms, the online harms framework is right to place clear legal obligations on platforms to design products and services that comply with a series of state-backed codes of practice. This may be the intention of the term ‘duty of care’ within the online harms white paper. The government should be mindful, however, that this term can be confused and conflated with the tortious duty of care. Such a confusion could give rise to a new cause of action, which could enable individuals to bring claims in negligence against internet companies for hosting content to which an individual objects. This would set the bar to claims exceptionally low, as everyone has a different, subjective view, on the content that they find objectionable. It is not clear if the creation of a new tort is the government’s intent, when it states, in the recent white paper, that the legislation will “provide evidence and set standards which may increase the effectiveness of individuals’ existing legal remedies.”[84] If this is the government’s intention, this would be a huge legal change, and would have huge implications for freedom of expression online.

 

One key issue that falls for resolution is whether and how social media platforms - many of which are ultimately based in the US - should be accountable legally in the national jurisdictions in which they operate. For many years, social media companies in the UK have ducked and dived to avoid operating under the same civil law liabilities that traditional news publishers do - which has allowed them to remain unaccountable in a legal context, but which has also given them a commercially competitive advantage. They can operate under their own terms of business, which gives them a freedom with which media organisations like the Guardian will never be able to compete.

 

The online harms framework should contain sufficient checks and balances on the face of the bill, including safeguards for freedom of expression, and should be independent from undue political interference. Any state-backed regime of content takedown must be complemented by a parallel process to ensure that individuals can appeal where the decision (e.g. to take down content, delete accounts, or curb online activity) is overzealous or wrong.

 

  6. To what extent should users be allowed anonymity online?

 

It is increasingly difficult to know whether the actors in online debates are who they say they are, or whether they have clear and transparent motives. Research by the Oxford Internet Institute’s (OII) computational propaganda unit has looked at how so-called “cyber troops” work to disrupt, distort and dissuade citizens from using online platforms to debate key policy issues.

 

The report by the OII found that cyber troops are a “pervasive and global phenomenon. Many different countries employ significant numbers of people and resources to manage and manipulate public opinion online, sometimes targeting domestic audiences and sometimes targeting foreign publics.” Furthermore, the viral nature of platform algorithms that are used to engage users within the walls of these platforms can be used by small numbers of individuals - and organisations presenting themselves as individuals - to strategically share content that they wish to see dominate share of voice on a platform. The effect is a disproportionate impact on the presentation of the popularity and credibility of particular viewpoints on a given issue. The rise of Ben Shapiro and the Daily Wire on Facebook, aided by “a coordinated network of right-wing Facebook pages, all run by the same owner, that share Daily Wire posts ten or more times a day”[85], demonstrates how these viral networks can be used to promote an ideological world view on the world’s most powerful social media platform.

 

A recent note by Enders Analysis summarised this phenomenon, stating that “a disproportionate share of engagement is concentrated on a very small group of users, able to quickly distribute content through likes and shares to large audiences, particularly if they also have a large list of friends… Because Facebook groups are able to promote content outside their page, small groups of highly engaged users, either acting on their own out of political reasons or working for a political organization, can play an outsized role in increasing the relevancy of news and opinion posts on the Facebook News Feed. False amplification can lead to real social amplification… Analysis by Buzzfeed revealed that a core group of 559 users were responsible for almost 1 in 10 likes on the Britain First Facebook page, out of a total of 350,000 users who liked at least one post over a six-week period. Illustrating the international networks that Facebook both mirrors and enables, a third of these users live outside the UK.”

 

  7. How can technology be used to help protect the freedom of expression?

 

It is vital that the government makes clear that the internet, as a medium, has the potential to facilitate vital activities in a democracy, notably the right to access news and information, the right to freedom of expression, and the right to privacy. Technology can be very helpful in the pursuit of these goals, for example, by enabling filters that cut out spam from our below the line comment threads[86], and by combating attempts to disrupt key democratic processes[87]. It can also be used commercially, to prevent certain forms of bad advertising from being served on websites and digital services.

 

The balance to strike, when considering the appropriate use of technology, is how technology is used to spot potentially illegal speech acts and to filter those acts for review by a human, who may have a greater capacity to see context and make subjective judgements. There is also an economic aspect to this - some of this work is obviously labour intensive - but as we note in response to question 5, the CMA has recently found that Google and Facebook are currently operating with huge profit margins[88], far above the benchmark level of profits that the regulator would expect to see in a market in which there was healthy competition.

 

Any regulatory framework must set standards in relation to harms that are clearly defined as illegal, and should not become a framework that enables a watchdog to punish platforms on the basis of subjective views of harm. Similarly, it is vital that in seeking to make online platforms accountable for the content and activity that they host, the government does not use this process as a way to deploy an enlarged system of government surveillance on the lawful activity of citizens.

 

While encrypted communications have the potential to enhance user privacy and prevent surveillance by government agencies, they also have the potential to fragment and isolate individuals. The move to more closed forms of encrypted social media chat apps such as WhatsApp, Snapchat and Parler can, perhaps, be seen as a reclaiming of privacy, away from the public sharing required on Facebook; an establishment of close connections with like-minded people. But at the same time, the closeness of a self-selected group invites more groupthink than ever before and, because these apps are private, they are separated off from the public space, from the civic sphere. Rather than opening our eyes to difference and creating tolerance in the world, social media is creating silos, private spaces, in which we’re separated from each other, in which our views become fixed, immune to the opinions of everyone else.

 

As well as raising competition concerns, plans by Facebook to encrypt all communications across its suite of messaging apps[89] raise questions about the degree to which any external regulator could seek information, or enforce obligations, in relation to an online harms framework. As has been noted this week, encryption measures aimed at ‘keeping people safe’ can also reduce the efficacy of measures aimed at reducing the prevalence of illegal acts such as child exploitation.[90] It is essential that the encryption of these apps, which are among the most used apps globally, is not used as a way to circumvent legal obligations in relation to the distribution of illegal content, or to prevent either academic or journalistic scrutiny of malicious activity on those platforms.

 

  1. How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

While GMG welcomes attempts in the online harms white paper to subject platforms to a greater degree of responsibility and accountability for the content and activity that they host, it is vital that the government also recognises that the business models at the heart of dominant platforms should be an equal focus when trying to drive behavioural change.

 

Successive parliamentary committees have reached similar conclusions: the Home Affairs Select Committee found, in August 2016, that platforms were “consciously failing” to tackle incitement and extremism on the web, and the Digital, Culture, Media and Sport Committee judged, in February 2019, that “big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights”. Yet it is the current structure of the digital advertising market that underpins the revenues of these businesses, and which is responsible for the negative externalities that the online harms white paper is attempting to tackle through external regulation.

 

As noted above, the online advertising market, in its current form, does not incentivise investment in the dissemination of accurate news and high-quality content. It rewards products and services that create attention, emotion and outrage and facilitate opportunities for advertisers to follow audiences across the web at the lowest possible price. It is a market that is opaque by design, preventing advertisers from knowing the nature of the content that their advertising funds.

 

This creates what the academic and author Shoshana Zuboff terms ‘radical indifference’ to content and activity that is hosted by dominant search and social platforms. As Zuboff writes, “Facebook doesn’t care about disinformation, or mental health, or any of the other issues on Zuckerberg’s list of resolutions. Users are not customers, nor are they “the product.” They are merely free sources of raw material. Zuckerberg, Sandberg, and the company’s other top executives are not radically indifferent because they’re evil but because they’re surveillance capitalists, bound by unprecedented economic imperatives to extract behavioral data in order to predict our futures for others’ gain.”[91]

 

Unlike online platforms with SMS, most UK news organisations are subject to three tiers of regulation: primary liability under the law; compliance with an editorial code of practice; and commercial pressure through the transparent and open placement of digital advertising. This latter form of regulation - which forms the basis of the existing system of self-regulation of advertising overseen by the Advertising Standards Authority - is under-estimated in terms of its impact in aligning the brands of advertisers with the nature and quality of the content and journalism that the advertising in question helps to fund. The degree to which advertiser pressure no longer compels dominant platforms to enforce the rules that govern the publication and promulgation of hateful content was evidenced during the recent advertiser boycott of Facebook group companies. As noted above, Facebook CEO Mark Zuckerberg has said that Facebook is “not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.[92]

 

In tandem with efforts to make platforms more accountable for the content and activity that they facilitate, the government is right to focus parliamentary time and resources on the creation of a new digital markets unit (DMU), under the auspices of the CMA[93]. This new body, which builds on the CMA’s online platforms study, will be empowered, through a new statutory code of practice, to reform a digital advertising market that is opaque by design. The injection of greater transparency into the online advertising market could provide brands with more insight and control over where their advertising appears, giving them more leverage to demand that dominant search and social companies proactively invest in the safety, security and moderation of content and activity - and thereby significantly improve the quality of the public square.

 

The lack of competition among search, social and sharing platforms means that consumers, advertisers and publishers are locked into using dominant providers, who therefore have little incentive to invest in high content standards, such as clear, transparent and accountable systems of content moderation. Again, the pro-competitive approach that the government has agreed to implement through the creation of the DMU could promote greater competition in the digital economy, encourage data portability and enable greater consumer switching between digital services.

 

It is vital that, as the government seeks to nullify the negative externalities caused by the content and activity facilitated by online platforms with SMS, it focuses on the underlying drivers of the harms perpetrated on those platforms. The danger is that, in seeking to tackle these issues through the regulation of speech, there are unintended consequences for the ability of citizens to consume a wide variety of news and information, and for individual citizens to express themselves online. An overly zealous approach to the regulation of speech could have a negative impact not only on the business models of organisations that do invest in high-quality journalism, but on the rights of individual citizens to participate in democracy.

 

Our strong preference, therefore, would be for the government to: prioritise the publication of a white paper to finalise the detail required to underpin the creation of the digital markets unit in legislation; begin the process of designating online platforms with strategic market status now, rather than waiting until the digital markets unit is established; and use the parliamentary time available to pass the legislation required to establish the CMA’s digital markets unit in parallel with legislation to regulate online harms.

 

 

  1. How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

The algorithms that underpin many of the search and social platforms widely used by citizens in the UK are not designed to further societal aims, informed decision making, or high-quality debate among citizens. These algorithms are designed by listed businesses to meet the commercial objectives of shareholders. Algorithms also influence public debate and opinion about key policy issues - a reality that is coming into sharper focus as journalists and academics probe more deeply into the use of highly targeted advertising and viral content served on those platforms.

 

Policymakers are right to seek greater transparency as to how those commercial objectives are achieved through algorithm design. They are right, too, to consider requiring considered changes to those algorithms - especially in relation to online platforms with SMS - so that they better serve the public interest.

 

As we note in response to question 1, in the US[94], and to some extent in the UK[95], populist politicians have suggested that dominant search and social platforms are somehow biased against news sources on the basis of political outlook. We have seen no compelling evidence to back up these claims.

 

 

  1. How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

User-facing platforms should back up decisions about the enforcement of their terms of use with a clear, fair, transparent and well-governed system of appeal, redress or customer service, as appropriate. This must be supported by accountable processes and ‘real world’ details, such as a postal address, and by human involvement rather than purely automated systems. The independent regulator could play a role where users believe that the platform’s appeals body has made a substantive error in its case review.
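
As a purely illustrative sketch, the simplified example below shows the kind of auditable appeal record - with a named human reviewer, timestamps and a recorded rationale - that such a system might retain and that an independent regulator could later inspect. The structure, states and field names are hypothetical.

    # Illustrative only: a hypothetical, simplified record of a moderation appeal,
    # retaining an auditable trail of who reviewed it, when, and why it was decided.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class AppealStatus(Enum):
        RECEIVED = "received"
        UNDER_HUMAN_REVIEW = "under_human_review"
        UPHELD = "upheld"          # the original moderation decision stands
        OVERTURNED = "overturned"  # the decision is reversed

    @dataclass
    class Appeal:
        appeal_id: str
        user_id: str
        content_id: str
        reason: str
        status: AppealStatus = AppealStatus.RECEIVED
        history: list[str] = field(default_factory=list)

        def log(self, event: str) -> None:
            self.history.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

        def assign_reviewer(self, reviewer: str) -> None:
            self.status = AppealStatus.UNDER_HUMAN_REVIEW
            self.log(f"assigned to human reviewer {reviewer}")

        def decide(self, outcome: AppealStatus, rationale: str) -> None:
            self.status = outcome
            self.log(f"decision: {outcome.value} - {rationale}")

    if __name__ == "__main__":
        appeal = Appeal("ap-001", "user-42", "post-7", "Post removed in error")
        appeal.assign_reviewer("reviewer-3")
        appeal.decide(AppealStatus.OVERTURNED, "content did not breach the terms of use")
        print(appeal.status, appeal.history)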

 

The Government’s online harms process provides the opportunity to place proportionate obligations on dominant search and social companies to take greater responsibility for the content and activity that they facilitate. In particular, the independent regulator should ensure that:

 

        First, platforms communicate clear and transparent terms of use to users, setting out the policies and procedures in place to moderate legal but harmful content.

 

        Second, platforms publish a clear policy setting out the appeals processes that all users can access in order to report, and appeal against, specific instances where they feel that platforms have acted contrary to their stated policies.

  1. To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?

 

See our response to question 7.

 

  1. Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

 

In December, the European Commission unveiled a new European Democracy Action Plan[96], alongside the proposed Digital Services Act, designed to empower citizens and build more resilient democracies across the EU. The plan sets out measures to promote free and fair elections, strengthen media freedom and counter disinformation, as well as measures to improve the safety of journalists and an initiative to protect them from strategic lawsuits against public participation (SLAPPs). In addition, the Commission will steer efforts to overhaul the existing Code of Practice on Disinformation, strengthening requirements for online platforms and introducing more rigorous monitoring and oversight.

 

The final judgment in the Delfi AS v. Estonia case serves as an instructive example of how freedom of expression online is balanced against other rights. It concerned a possible clash between two competing human rights: the freedom of the applicant internet news portal to impart information (protected by Article 10 of the Convention) and the right to privacy of the victims of the unlawful speech of its users (protected by Article 8 of the Convention). The European Court of Human Rights (ECtHR) ruled that Estonia did not breach Article 10 of the European Convention on Human Rights (ECHR)[97] when it held the online news outlet, Delfi, liable for defamation on the basis of comments posted beneath its articles. The ECtHR accepted that Estonia had interfered with the outlet’s right to free expression when it imposed civil penalties for the defamatory comments, but found that the interference was justified and proportionate. Human rights groups and large media companies closely followed the case and opposed any departure from the European Union’s E-Commerce Directive. In response to the case, the non-profit Access Now said the ruling “created a worrying precedent that could force websites to censor content. It also creates a perverse incentive for websites to discourage online anonymity and freedom of expression.”[98] By contrast, in Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (2 February 2016), the ECtHR held that there had been a violation of Article 10 because the Hungarian courts, when deciding on the question of liability, had not carried out a proper balancing exercise between the competing rights involved. Notably, the Hungarian authorities had accepted at face value that the comments were unlawful as being injurious to the reputation of the real estate websites; although offensive and vulgar, the comments had not constituted clearly unlawful speech.

 

In recent weeks, the Australian government has proposed legislation to give its e-safety commissioner the power to require tech companies and other online platforms to remove severely harmful, abusive or bullying content within 24 hours or risk being blocked and fined $555,000[99]. A rapid website-blocking power has been added to allow the commissioner to respond to online crisis events, such as the Christchurch terrorist attacks, by requesting that internet service providers block access to terrorist and extremely violent content for a limited period. In addition, the draft laws would enable the government to unmask the identities of anonymous or fake accounts responsible for online abuse or for exchanging illegal content. Similarly, the Network Enforcement Law (NetzDG), which came into force in Germany in early 2018, requires social media companies to carry out a local takedown of 'obviously illegal' content (e.g. a video or a comment) within 24 hours of notification. Other illegal content must generally be blocked within seven days of receiving a complaint. At the time, Reporters Without Borders Germany and other critics warned that the legislation "could massively damage the basic right to freedom of the press and freedom of expression". Reporters Without Borders Germany managing director Christian Mihr said:

 

“With the NetzDG, the federal government has turned private companies into judges of freedom of the press and freedom of information online, without ensuring public control of the deletion process. Such an independent inspection authority is needed, however, to detect over-blocking, i.e. the deletion of legally permissible content. Facebook and Google delete according to their own rules, because they see themselves as private companies and want to enforce a kind of digital domiciliary right. However, their platforms have become part of the modern public sphere, so people must be able to say anything there that does not break the law.”

 

In 2019 the United Nations launched a new strategy to tackle hate speech[100], stating that it is “crucial to deepen progress across the United Nations agenda by helping to prevent armed conflict, atrocity crimes and terrorism, end violence against women and other serious violations of human rights, and promote peaceful, inclusive and just societies.”[101] The plan outlined a number of objectives, including “monitoring and analyzing hate speech”, “addressing root causes, drivers and actors of hate speech” and “using education as a tool for addressing and countering hate speech”. In December, the UN Special Rapporteur on minority issues, Fernand de Varennes, said Facebook’s community standards should be brought into line with the understanding of “hate speech” in the UN Strategy and Plan of Action on Hate Speech. He also said Facebook’s omission of protection for linguistic minorities from hate speech was troubling and contrary to international human rights law.

 

 

January 2021



[1]              https://www.echr.coe.int/Documents/FS_Hate_speech_ENG.pdf

[2]              https://committees.parliament.uk/call-for-evidence/312/freedom-of-expression-online/

[3]              https://www.theguardian.com/us-news/2021/jan/07/donald-trump-twitter-ban-comes-to-end-amid-calls-for-tougher-action

[4]              https://www.theguardian.com/us-news/2021/jan/08/donald-trump-twitter-ban-suspended

[5]              https://www.protocol.com/facebook-ban-trump-democrats-power

[6]              https://www.theguardian.com/commentisfree/2021/jan/08/us-chaos-britain-fox-news-trump-presidency

[7]              https://www.ft.com/content/e62f30dc-edbd-4037-b6d7-8cc0896bf414

[8]              https://www.cnet.com/news/democrats-and-republicans-agree-that-section-230-is-flawed/

[9]              https://www.theguardian.com/us-news/2020/oct/27/section-230-congress-hearing-facebook-twitter-google

[10]              https://www.theguardian.com/us-news/2021/jan/12/blocked-how-the-internet-turned-on-donald-trump

[11]              https://www.theguardian.com/us-news/2021/jan/11/parler-goes-offline-after-amazon-drops-it-due-to-violent-content

[12]              https://www.theguardian.com/us-news/2021/jan/11/opinion-divided-over-trump-being-banned-from-social-media

[13]              https://www.reuters.com/article/us-amazon-com-pentagon/amazon-urges-judge-to-set-aside-10-billion-cloud-contract-award-to-microsoft-idUSKBN28P31I

[14]              https://www.businessinsider.com/microsoft-google-amazon-pentagon-law-enforcement-contracts-2020-7?r=US&IR=T

[15]              https://www.reuters.com/article/us-oecd-tax-france/france-says-u-s-blocking-global-digital-tax-talks-idUSKBN2602ZC

[16]              https://www.theguardian.com/media/2021/jan/19/us-attacks-australias-extraordinary-plan-to-make-google-and-facebook-pay-for-news

[17]              https://www.nytimes.com/2020/11/05/technology/donald-trump-twitter.html

[18]              https://www.washingtonpost.com/technology/2020/11/09/facebook-twitter-election-misinformation-labels/

[19]              https://www.scribd.com/document/490642497/Parler-s-reply-to-Amazon-s-response

[20]              Ibid

[21]              Ibid

[22]              https://www.contino.io/insights/whos-using-aws

[23]              https://washingtonpost.com/technology/2021/01/13/facebook-role-in-capitol-protest/

[24]              https://mattstoller.substack.com/p/a-simple-thing-biden-can-do-to-reset?r=qr92&utm_campaign=post&utm_medium=web&utm_source=copy

[25]              https://www.ofcom.org.uk/about-ofcom/latest/features-and-news/half-of-people-get-news-from-social-media

[26]              https://www.niemanlab.org/2019/10/more-americans-than-ever-are-getting-news-from-social-media-even-as-they-say-social-media-makes-news-worse/

[27]              https://www.canberra.edu.au/research/faculty-research-centres/nmrc/digital-news-report-australia-2020

[28]              https://cyber.harvard.edu/publication/2020/Mail-in-Voter-Fraud-Disinformation-2020

[29]              Benkler et al note that “Trump has perfected the art of harnessing mass media to disseminate and at times reinforce his disinformation campaign by using three core standard practices of professional journalism. These three are: elite institutional focus (if the President says it, it’s news); headline seeking (if it bleeds, it leads); and balance, neutrality, or the avoidance of the appearance of taking a side. He uses the first two in combination to summon coverage at will, and has used them continuously to set the agenda surrounding mail-in voting through a combination of tweets, press conferences, and television interviews on Fox News. He relies on the latter professional practice to keep audiences that are not politically pre-committed and have relatively low political knowledge confused, because it limits the degree to which professional journalists in mass media organizations are willing or able to directly call the voter fraud frame disinformation. The president is, however, not acting alone. Throughout the first six months of the disinformation campaign, the Republican National Committee (RNC) and staff from the Trump campaign appear repeatedly and consistently on message at the same moments, suggesting an institutionalized rather than individual disinformation campaign. The efforts of the president and the Republican Party are supported by the right-wing media ecosystem, primarily Fox News and talk radio functioning in effect as a party press. These reinforce the message, provide the president a platform, and marginalize or attack those Republican leaders or any conservative media personalities who insist that there is no evidence of widespread voter fraud associated with mail-in voting.”

[30]              https://twitter.com/emilybell/status/1346925210415501312

[31]              https://www.theguardian.com/media/2021/jan/16/james-murdoch-says-us-media-lies-unleashed-insidious-forces

[32]              https://inforrm.org/2021/01/15/why-free-speech-needs-a-new-definition-in-the-age-of-the-internet-and-trump-tweets-peter-ives/

[33]              https://www.chronicle.com/article/how-the-right-weaponized-free-speech/

[34]              A multilateral treaty adopted by the United Nations General Assembly on 16 December 1966, and in force from 23 March 1976 - https://treaties.un.org/doc/publication/unts/volume%20999/volume-999-i-14668-english.pdf

[35]              https://www2.ohchr.org/english/bodies/hrc/docs/gc34.pdf

[36]              https://www2.ohchr.org/english/bodies/hrc/docs/CCPR.C.GC.34.CRP.4.doc

[37]              https://www.cps.gov.uk/legal-guidance/racist-and-religious-hate-crime-prosecution-guidance

[38]              https://www.cps.gov.uk/legal-guidance/homophobic-biphobic-and-transphobic-hate-crime-prosecution-guidance

[39]              https://www.cps.gov.uk/legal-guidance/disability-hate-crime-and-other-crimes-against-disabled-people-prosecution-guidance

[40]              https://www.libertyhumanrights.org.uk/human-rights/free-speech-and-protest/speech-offences

[41]              Ibid

[42]              https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media

[43]              https://www.andreajenkyns.co.uk/news/andrea-jenkyns-mp-receives-apology-twitter-abuser

[44]              https://www.theguardian.com/politics/2014/sep/29/twitter-online-intimidation-police-stella-creasy-peter-nunn

[45]              https://www.theguardian.com/uk-news/2014/jan/24/two-jailed-twitter-abuse-feminist-campaigner

[46]              https://www.cps.gov.uk/legal-guidance/media-guidance-prosecutors-assessing-public-interest-cases-affecting-media

[47]              For example clause 1 (iv) of the IPSO Code states that “The Press, while free to editorialise and campaign, must distinguish clearly between comment, conjecture and fact.”

              Clause 3 of the IPSO Code on Harassment, states that:

              i) Journalists must not engage in intimidation, harassment or persistent pursuit.
ii) They must not persist in questioning, telephoning, pursuing or photographing individuals once asked to desist; nor remain on property when asked to leave and must not follow them. If requested, they must identify themselves and whom they represent.
iii) Editors must ensure these principles are observed by those working for them and take care not to use non-compliant material from other sources.

              Clause 12 of the IPSO Code on “Discrimination” states that:

              i) The press must avoid prejudicial or pejorative reference to an individual's, race, colour, religion, sex, gender identity, sexual orientation or to any physical or mental illness or disability.
ii) Details of an individual's race, colour, religion, gender identity, sexual orientation, physical or mental illness or disability must be avoided unless genuinely relevant to the story.

[48]              https://www.telegraph.co.uk/technology/google/9109239/Google-users-ignore-major-privacy-shakeup.html

[49]              https://www.theguardian.com/news/2017/may/21/facebook-moderators-quick-guide-job-challenges

[50]              https://www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content

[51]              https://www.theguardian.com/technology/2018/apr/24/facebook-releases-content-moderation-guidelines-secret-rules

[52]              Delfi AS v. Estonia (2015) ECtHR 64669/09 in the European Court of Human Rights (ECtHR)

[53]              https://strasbourgobservers.com/2015/06/18/delfi-as-v-estonia-grand-chamber-confirms-liability-of-online-news-portal-for-offensive-comments-posted-by-its-readers/

[54]              http://hudoc.echr.coe.int/eng?i=001-160314

[55]              http://blogs.lse.ac.uk/mediapolicyproject/2016/02/19/delfi-revisited-the-mte-index-hu-v-hungary-case

[56]              https://www.theverge.com/interface/2019/4/3/18293293/youtube-extremism-criticism-bloomberg

[57]              https://popular.info/p/facebook-fails-georgia?r=qr92&utm_campaign=post&utm_medium=web&utm_source=copy

[58]              https://www.theguardian.com/technology/2019/mar/30/mark-zuckerberg-calls-for-stronger-regulation-of-internet

[59]              https://www.bloomberg.com/news/articles/2019-05-17/facebook-s-sandberg-says-breaking-it-up-won-t-solve-the-problems

[60]              https://www.bbc.co.uk/news/uk-48740231

[61]              https://www.nytimes.com/2020/08/01/business/media/facebook-boycott.html

[62]              https://www.theguardian.com/technology/2020/jul/02/mark-zuckerberg-advertisers-boycott-facebook-back-soon-enough

[63]              https://edition.cnn.com/2020/12/18/tech/unilever-facebook-twitter-advertising/index.html

[64]              R (ProLife Alliance) v BBC [2004] 1 AC 185, at 227-8 (§21)

[65]              https://www.marketingweek.com/2017/09/05/mark-ritson-facebook-tv/

[66]              https://www.avrupa.info.tr/sites/default/files/2020-01/EU%20Human%20Rights%20Guidelines%20on%20Freedom%20of%20Expression%20Online%20and%20Offline%20%282014%29.pdf

[67]              https://www.theguardian.com/uk-news/2020/may/08/coronavirus-surge-stalking-victims-seeking-help-during-uk-lockdown

[68]              https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-3/

[69]              https://www.theguardian.com/media/2019/apr/24/mps-criticise-tech-giants-for-failure-to-report-criminal-posts-twitter-facebook-google-youtube

[70]              https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response#part-2-what-harmful-content-or-activity-will-the-new-regulatory-framework-apply-to-and-what-action-will-companies-need-to-take

[71]              https://www.nytimes.com/2019/05/17/technology/facebook-ai-schroepfer.html

[72]              https://www.theguardian.com/technology/2018/dec/04/google-facebook-anti-conservative-bias-claims

[73]              See for example, https://news.stanford.edu/2019/11/26/search-media-biased/ & https://www.economist.com/graphic-detail/2019/06/08/google-rewards-reputable-reporting-not-left-wing-politics & https://dl.acm.org/doi/abs/10.1145/3274417 & https://www.researchgate.net/profile/Andreas_Graefe/publication/318256136_Burst_of_the_Filter_Bubble_Effects_of_personalization_on_the_diversity_of_Google_News/links/59e7a4dc0f7e9bc89b5078bd/Burst-of-the-Filter-Bubble-Effects-of-personalization-on-the-diversity-of-Google-News.pdf

[74]              https://www.niemanlab.org/2020/10/two-new-studies-show-again-that-facebook-doesnt-censor-conservatives/

[75]              https://www.newswhip.com/2020/09/top-publishers-facebook-august-2020/

[76]              https://committees.parliament.uk/writtenevidence/12803/html/

[77]              https://developers.google.com/web/tools/lighthouse

[78]              https://publications.parliament.uk/pa/ld5801/ldselect/ldcomuni/176/176.pdf

[79]              https://publications.parliament.uk/pa/ld5801/ldselect/ldcomuni/176/176.pdf

[80]              https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper

[81]              https://climatefeedback.org/evaluation/guardian-article-on-arctic-methane-emissions-lacks-important-context-jonathan-watts/

[82]              https://www.theguardian.com/technology/2019/apr/17/facebook-teams-with-rightwing-daily-caller-in-factchecking-program

[83]              https://www.gov.uk/government/news/new-competition-regime-for-tech-giants-to-give-consumers-more-choice-and-control-over-their-data-and-ensure-businesses-are-fairly-treated

[84]              https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper

[85]              https://www.thedailybeast.com/shadowy-right-wing-network-behind-ben-shapiros-facebook-success

[86]              https://www.theguardian.com/community-faqs

[87]              https://blogs.microsoft.com/on-the-issues/2020/10/12/trickbot-ransomware-cyberthreat-us-elections/

[88]              https://www.gov.uk/government/news/new-competition-regime-for-tech-giants-to-give-consumers-more-choice-and-control-over-their-data-and-ensure-businesses-are-fairly-treated

[89]              https://www.theguardian.com/technology/2019/jan/25/facebook-integrate-instagram-messenger-whatsapp-messaging-platforms

[90]              https://www.theguardian.com/technology/2021/jan/21/facebook-admits-encryption-will-harm-efforts-to-prevent-child-exploitation

[91]              https://www.fastcompany.com/90303274/why-facebook-and-google-wont-change

[92]              https://www.theguardian.com/technology/2020/jul/02/mark-zuckerberg-advertisers-boycott-facebook-back-soon-enough

[93]              https://www.gov.uk/government/publications/government-response-to-the-cma-digital-advertising-market-study

[94]              https://www.theatlantic.com/ideas/archive/2019/07/conservatives-pretend-big-tech-biased-against-them/594916/

[95]              https://www.dailymail.co.uk/news/article-7605265/Google-facing-claims-anti-Brexit-bias-web-searches.html

[96]              https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2250

[97]              https://www.echr.coe.int/documents/research_report_internet_eng.pdf

[98]              https://www.accessnow.org/worrying-setbaceuropean-court-delfi-decision-for-online-free-expression-and-innovation/

[99]              https://www.theguardian.com/society/2020/dec/23/coalition-bill-would-block-online-platforms-still-hosting-harmful-content-24-hours-after-takedown-notices

[100]              https://www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf

[101]              https://www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf