Written evidence submitted by Daphne Keller, Director of the Program on Platform Regulation (OSB0057)

 

I submit these comments as Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. In my capacity as a program director at Stanford, I have studied and published analysis of Internet laws around the world.[1] Before coming to Stanford in 2015, I served as Associate General Counsel to Google.

                           

Although I have been unable to engage closely with the process that produced the most recent proposed UK Online Safety Bill (UKOSB), I have had the opportunity to discuss it with a number of UK government representatives, as well as OFCOM personnel. All of these individuals demonstrated care, intelligence, and expertise in their thinking about the proposed law. Nonetheless, the proposal itself leaves a great deal to be desired. As drafted, it would move important questions out of appropriate public purview, creating both unreasonable demands on OFCOM and expansive powers for the Secretary of State. Because these issues are so consequential for the future of human rights and the Internet in the UK, I believe this choice is ill-advised.

 

1.              Institutional Competence

 

Lawmakers around the world are confronting the same conundrum in platform regulation: Setting clear rules in advance risks establishing a legal framework that is insufficiently adaptive to changing technologies and platform practices. But failing to set clear rules, and relying instead on flexible standards to be interpreted and applied later, will lead to significant legal uncertainty. The harms from legal uncertainty are likely to fall particularly heavily on Internet users (whose rights will be burdened by over-cautious platform removals)[2] and on smaller Internet platforms (which cannot afford to risk costly fines or build expensive internal compliance systems, as their larger competitors can).

 

The UKOSB’s proposed solution, deferring many matters to the future judgment of an expert regulator in conjunction with the Secretary of State, is an understandable one. But it goes too far in displacing courts and legislators themselves from their longstanding responsibilities, including those tied to human rights.

 

a.               Defining categories of legally favored or disfavored expression

 

The UKOSB effectively establishes new legal restrictions on expression through its mechanisms for Category 1 hosting platforms to take action against previously lawful but “harmful” content.[3] At the same time, it creates new categories of legally favored expression, including “content of democratic importance.” While the UKOSB provides lengthy descriptions of some of these terms, their real-world meanings remain to be determined under processes that leave little further room for legislative debate or revision.

 

Words like “harm” or “safety” have of course been used before in other legal contexts, ranging from physical injury torts to the regulation of broadcast or cable television. But even terms familiar from media regulation are inapposite here. Older media rules were designed for a world in which a handful of privileged speakers transmitted content to a largely voiceless audience. They can hardly be suited to govern ordinary people’s daily communications with friends and peers via social media platforms today. Whatever harm-based rules come to govern this ordinary online speech, they will not be rules that we have seen before in law. Yet as the UKOSB stands now, they will also not be the product of full and fair debate by democratically accountable legislative institutions.

 

The idea that currently legal content may cause more harm when distributed or amplified by major platforms, and hence be eligible for additional legal restriction, does merit public attention. As I discussed in a recent article, I am not entirely unsympathetic to it.[4] The UK is to be commended for clearly stating a regulatory goal that too often goes unspoken: the restriction of currently lawful speech. But laying the foundation for such unprecedented rules, while limiting Parliament’s future role to swift up-or-down votes on standards devised by OFCOM or the Secretary of State, effectively cedes responsibility and accountability for a sweeping overhaul of expression and information law.


b.               Resolving disputes about particular expression

 

The UKOSB proposes no adequate mechanism to develop a “common law,” regulatory precedent, or interpretive canon for the new speech rules it will bring into effect. Legally mandated decisions to silence or promote particular posts, essays, videos, songs, and human expression of every kind will be made almost entirely by private platforms. Affected people and publishers will have little or no recourse to OFCOM, courts, or any other accountable public institution to review or correct platforms’ decisions. This lack of case-by-case review, combined with the UKOSB’s overall lopsided incentives for platforms, is a recipe for over-enforcement, harm to human rights, and legal uncertainty for UK Internet users.[5]

 

These entirely foreseeable harms cannot simply be laid at the feet of private Internet platforms. Legislators should be clear about their shared responsibility. Indeed, EU lawmakers were recently rebuked for a similar effort to push legally mandated decision-making into platform hands. In an Opinion delivered to the CJEU, that Court’s Advocate General concluded that “the ‘interference’ with the freedom of expression of users is indeed attributable to the EU legislature. It has instigated that interference.” The legislature, he said, “cannot delegate such a task and at the same time shift all liability to [platform] providers for the resulting interferences with the fundamental rights of users.”[6] This principle of state responsibility applies with equal validity under other human rights instruments, including the European Convention, and to the UK as a signatory.

 

c.               Reconciling tensions between new regulatory obligations and existing intermediary liability law

 

A final issue of institutional competence involves the interplay of the UKOSB and the existing, court-administered law of intermediary liability – the law that defines platforms’ legal responsibility for unlawful content posted by users. As I discussed in two blog posts last year, existing law and novel “duty of care” regulatory proposals create competing pressures, telling platforms to be both active and inactive in policing users’ posts.[7]

When plaintiffs prevail against platforms in intermediary liability cases, it is usually by establishing some version of the claim that the platform knew/should have known about or had control over the illegal content at issue in the case. This standard litigation dispute will get very complicated if [a law like the UKOSB] effectively requires a platform to assert more control over user posts, and gain more knowledge about them.

 

Platforms deciding what proactive efforts to undertake while facing both UKOSB’s regulatory obligations and ordinary liability standards in litigation will be in a “damned if you do, damned if you don’t” situation. But legislative efforts to redress this problem for platforms – by providing that compliance efforts cannot be held against them in court – would create new barriers for plaintiffs attempting to assert their own legal rights. I know of no legal model that successfully resolves this tension. As best I can tell, the UKOSB makes no attempt to do so, leaving a perhaps unanswerable question for courts to muddle through.

 

2.              Technology Notices and Major Public Policy Decisions

 

In addition to moving important substantive decisions about freedom of expression out of legislative and judicial hands, the UKOSB also delegates key decisions about technology, including automated filters for expression and information.[8] This may ultimately have still greater real-world consequences for Internet users’ human rights. The Technology Notice provisions would allow UK legislators to wash their hands of some of the most difficult policy questions confronting the Internet today. And while the UKOSB provides platforms with an ability to appeal problematic Technology Notices, it provides no such protection for the innumerable individuals, businesses, and organizations that may suffer as a result.

 

The idea that automated filtering technologies can accurately identify and remove unlawful content, including terrorist or violent extremist content, is one that has received considerable attention from civil society, academics, and human rights bodies in recent years. European Union lawmakers considered filtering mandates in their 2021 Terrorist Content Regulation, but ultimately rejected them after widespread objections. A small sample of the concerns raised can be found in the following materials.

 

 

 

Notably, these objections do not turn solely on expression and information rights. Rather, they involve Internet users’ rights to privacy; to equal treatment regardless of race, religion, or other attributes; and to due and fair legal process. Many critics also identified the alarming precedent that terrorism-based filtering mandates in Europe would provide for authoritarian regimes around the world.

 

Numerous civil society groups took particular issue with the possibility that state authorities might mandate use of filtering tools provided by the Global Internet Forum to Counter Terrorism (GIFCT). Those concerns bear noting, because GIFCT seems a likely candidate to serve as an “accredited” filter under the UKOSB. Whatever GIFCT’s pros and cons as a voluntarily deployed tool, though, it is by no means a technology that can accurately identify illegal content. That is not even its purpose. It exists to identify content that platforms themselves have flagged for violating their discretionary Terms of Service – not the law. GIFCT filtering works by identifying duplicates or near-duplicates of known images and videos. That technical approach may be reasonable for child sexual abuse material, given such material’s illegality in every context, but it can fail dramatically when applied to terrorist content. Images or videos used by extremists in one context may be re-used for educational, journalistic, and other legitimate purposes elsewhere.[9] YouTube’s removal of the Syrian Archive’s collection of videos evidencing human rights abuses is perhaps the most famous example of this problem. As signatories of the February 2019 civil society letter put it, mandating the use of such tools without robust independent analysis and public debate would be “a gamble with Internet users’ rights” that is “neither necessary nor proportionate as an exercise of state power.”
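By way of illustration only, the sketch below shows the general structure of hash-based duplicate matching of the kind described above. It is a simplified illustration, not GIFCT’s actual implementation: real systems use perceptual hashes designed to catch near-duplicates, whereas this sketch uses an exact cryptographic hash, and the database, function names, and sample content are hypothetical. The structural point is that a fingerprint match carries no information about who posted the content or why.

```python
# Minimal illustrative sketch of hash-based duplicate matching.
# NOT GIFCT's actual implementation; the data and names are hypothetical.
import hashlib

# A database of fingerprints ("hashes") of content that platforms have
# previously flagged under their own Terms of Service (hypothetical data).
flagged_hashes = {
    hashlib.sha256(b"previously flagged extremist video bytes").hexdigest(),
}

def is_duplicate(uploaded_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint.

    Note what this check does NOT see: the uploader, the caption, or the
    purpose (journalism, education, human rights documentation). A hash
    match looks identical in every context.
    """
    return hashlib.sha256(uploaded_bytes).hexdigest() in flagged_hashes

# The same bytes are flagged whether posted by an extremist group or by
# a human rights archive documenting abuses.
video = b"previously flagged extremist video bytes"
print(is_duplicate(video))  # True, regardless of who posts it or why
```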

 

The idea that speech filters may soon be accurate enough to warrant legally mandatory deployment is further undermined by the ongoing fallout from the EU’s 2019 Copyright Directive. Before that law’s enactment, lawmakers in Brussels were lobbied by vendors of filtering technology, including those claiming to offer affordable and effective automated tools to distinguish legal from illegal content. Lawmakers also heard major platforms’ own rosy assessments of in-house automated tools. Once the law had passed, however, an after-the-fact “stakeholder dialogue” confirmed the more pessimistic predictions long offered by numerous experts, including many in the UK.[10] In that setting, vendors disclaimed any representation that “technology can solve this problem in an automated fashion.” Facebook acknowledged that its filters are “not able to take context into account.”[11] Ultimately, the best that could be said was that filters might achieve tolerable outcomes if coupled with review by human moderation teams, which number in the tens of thousands at Facebook or YouTube but lie beyond the financial reach of their smaller competitors.

 

Even the idea that armies of human moderators can fix filters’ mistakes, however, has come in for considerable criticism. One prominent academic predicted that human review would likely provide little more than a “rubber stamp” for machines’ decisions.[12] A more recent study concluded that human oversight policies “legitimize government use of flawed and controversial algorithms without addressing the fundamental issues with these tools,” providing a “false sense of security in adopting algorithms and enabl[ing] vendors and agencies to shirk accountability for algorithmic harms.”[13]

 

UK lawmakers should not repeat the mistakes of their EU counterparts. Independent fact-finding and debate on a topic as consequential as speech filtering should come before legislation, not after.

             

3.              Competition

 

An important emerging theme in platform regulation involves detailed procedural and transparency mandates for platforms engaged in content moderation. I generally applaud this development. As I told an EU Parliament committee earlier this year, however, overly ambitious requirements of this sort for small platforms create potentially serious conflict with policymakers’ competition goals.[14]

 

Broadly speaking, I believe the UKOSB does a good job of referencing and incorporating concerns about size and proportionality. I do not purport to bring the economic expertise needed to say what the right size-based variation in platform obligations should ultimately be. I would urge, however, that any decision-making in this area involve the considered input of competition experts, including the UK’s Competition and Markets Authority.

 

 

 

Submitted 15 September 2021

Daphne Keller

 

20 September 2021

 

 

 



 


[1] See https://cyber.fsi.stanford.edu/people/daphne-keller

[2] For a review of empirical studies on this point, see Daphne Keller, Empirical Evidence of Over-Removal by Internet Companies, February 8, 2021, http://cyberlaw.stanford.edu/blog/2021/02/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws.

[3] If I understand the positions of DCMS and the Home Office correctly, they interpret the UKOSB not to require removal of merely “harmful to adults” content. Written evidence submitted by the Department for Digital, Culture, Media and Sport and the Home Office, Par. 44, https://committees.parliament.uk/writtenevidence/38883/html/. This appears to hinge on the idea that major Internet platforms might simply decline to restrict such content, so long as they make this policy clear to users. Such a decision would drastically conflict with platforms’ own commercial, reputational, and political interests, and as such does not strike me as a realistic possibility.

[4] Daphne Keller, Amplification and Its Discontents, June 8, 2021, https://knightcolumbia.org/content/amplification-and-its-discontents.

[5] Neither the internal appeals mechanism described in Section 15 nor the “super-complaints” mechanism appears to provide a real substitute for individual legal recourse.

[6] Poland v. EU Parliament, Case C-401/19, AG Opinion, Par. 84, July 15, 2021 (assessing copyright filtering requirement).

[7] Daphne Keller, Systemic Duties of Care and Intermediary Liability, May 28, 2020, http://cyberlaw.stanford.edu/blog/2020/05/systemic-duties-care-and-intermediary-liability; Daphne Keller, Broad Consequences of a Systemic Duty of Care for Platforms, June 1, 2020, http://cyberlaw.stanford.edu/blog/2020/06/broad-consequences-systemic-duty-care-platforms.

[8] A second question of enormous policy import involves end-to-end encryption. It appears conceivable that OFCOM’s Technology Notice power could displace legislators on this topic as well. Because this is not my area of specialty, I will not expand upon it, other than to note that the consequences of poor policy choices regarding encryption implicate not only human rights but also national and economic security.

[9] For further discussion see Daphne Keller, Facebook Filters, Fundamental Rights, and the CJEU’s Glawischnig-Piesczek Ruling, GRUR International, June 2020.

 

 

[10] See, e.g., Senftleben et al, The Recommendation on Measures to Safeguard Fundamental Rights and the Open Internet in the Framework of the EU Copyright Reform, October 20, 2017 (listing over fifty signatories), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3054967; Christina Angelopoulos, On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market, January 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3921216; Vint Cerf, Tim Berners-Lee et al, Letter of June 12, 2018, https://www.eff.org/files/2018/06/13/article13letter.pdf.

[11] Paul Keller, Article 17 stakeholder dialogue: What have we learned so far?, January 6, 2020, https://www.communia-association.org/2020/01/21/article-17-stakeholder-dialogue-day-5-depends/.

[12] Ben Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems, Policy & Internet (2019), https://doi.org/10.1002/poi3.198.

[13] Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, September 13, 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3921216.

 

[14] The DSA and DMA - a forward-looking and consumer-centered perspective, May 26, 2021, https://www.europarl.europa.eu/committees/en/the-dsa-and-dma-a-forward-looking-and-co/product-details/20210416WKS03461