{"HashCode":-849872376,"Height":841.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

Associate Professor Damian Tambini – written evidence (FEO0015)

 

House of Lords Communications and Digital Committee Inquiry into Freedom of Expression Online

 

I welcome the invitation to submit evidence to this important and timely inquiry, and I am grateful to the Committee for highlighting the need for clear thinking on freedom of expression in relation to the questions you raise in your call for evidence. I have been researching these issues for decades as an academic and policy expert, and I think this Committee can perform a very useful function in clarifying some of the key principles at stake and setting out suggestions for a policy direction for the United Kingdom outside the EU, but in support of the values of democracy, human rights and the rule of law. In addition to my position as Associate Professor and Distinguished Policy Fellow at the LSE, I am currently an advisor to the International Bar Association panel of experts on media freedom[1] and a member of the Council of Europe Committee of Experts on Media Governance and Reform[2]. I have previously acted as an advisor to DCMS.

 

---

 

Summary

 

This is a timely inquiry. It should build upon and reaffirm the international human rights standards for freedom of expression that the United Kingdom has done so much to build.

 

1.         With the involvement of Ofcom in Online Harms regulation, the new regulator should be given a duty, building on its existing media literacy duties, to annually survey key indicators of digital literacy and skills across all age groups.

 

2.         Regulators should maintain a clear distinction between harmful and illegal content.

 

3.         Government should develop its guidelines on government communication to ensure a sufficient firewall between executive power and the censorship functions of platforms.

 

4.         The Press Royal Charter could be updated to include recognition criteria for self-regulation of platforms.

 

5.         Government should balance the incentives to ensure that the most powerful and influential platforms do more to promote truth-seeking media based on an ethic of fact-checking and trust, and demote anti-social, manipulative or hateful media.

 

6.         Anonymous services should not be outlawed entirely, but their provision could be made conditional on other design features: for example, users should be made aware that they are entering a potentially dangerous and/or anonymous space.

 

7.         The deployment of algorithmic filters by larger platforms should be subject to disclosure requirements. The algorithms themselves, along with data on what is removed and on what basis, should be disclosed to Ofcom. Ofcom should report annually on the operation of filtering by platforms and should have a power to recommend that filtering (and prominence)[3] algorithms be provided by an independent body.

 

8.         The regulator should establish clear industry-wide standards and targets for harm reduction, but engage in a flexible dialogue with platforms themselves about how to achieve them.

 

9.         The committee should consider recommending the structural separation of the recommendation algorithm from the other parts of the business model, as part of a competition settlement.

 

10.    The committee should support content moderation principles, but only as part of a wider settlement, in which self-regulation is subject to independent external audit, as principles alone may be merely symbolic. The operation of content moderation needs to be seen in the context of an overall system of incentives.

 

11.    Regulation should be guided by human rights principles. But the principles as they stand are not sufficient. They need further development and guidance which must involve civil society in legitimate processes.

 

12.    International human rights standards should receive the renewed and unambiguous support of the United Kingdom, and this Committee should clearly indicate to the platforms the relevant standards that they should take into account in developing their content moderation policies.

 

Introduction

 

In public policy debate a number of important confusions and conflicts have arisen recently as regards freedom of expression: for example, between the speech rights of individual users and the speech rights of platforms and their owners; over the extent to which freedom of expression guarantees speech rights of users against private restrictions (for example by platform moderation); and over the extent to which states have a positive obligation to guarantee or promote freedom of expression. There is confusion between freedom of speech as a human right and press or media freedom, which is an institutional right that can be subject to more restrictions and requirements under international law standards than the rights to freedom of opinion and expression. There are conflicts between the international human rights perspective of the Council of Europe and the UN Human Rights Committee (which the United Kingdom has played a leading role in building), which sets out clear principles for how regulation should balance individual speech and the creation of a responsible media[4], and the case law under the First Amendment to the US Constitution, which has no jurisdiction in the UK and promotes a more radical laissez-faire approach to online speech. All of these rather fundamental confusions and conflicts are relevant in contemporary policy debate, as the UK negotiates a new position in terms of its own traditions and laws. US First Amendment ideas of freedom of expression tend to be the default position of the online giants, whose interests such a theory of freedom of expression tends to serve, but the international human rights perspective is proving very useful to those seeking to balance users' interests with the need for innovation.

 

Conflicts and confusions about free expression are increasingly manifest in relation to the cluster of issues that relate to online harms, liability, competition, and tech regulation. These policy challenges combine to constitute a “wicked” policy problem in which the solution to one part impacts the approach to all the other parts, and it is necessary therefore to adopt some clear principles that will guide public policy strategy over an extended period. This committee should therefore attempt to set out some clear and unambiguous principles expressing the UK approach to freedom of expression, and clarify some of the core constitutional principles that are at stake. Resolving these issues of principle should be central to any government that is committed to protecting and renewing democratic institutions for the digital age.

 

My most recent research focuses on the need for a clarification of these key aspects of the freedoms of expression and media freedom, and in this note I will summarise some of the key arguments in my forthcoming book, whilst addressing the questions you raise in your call for evidence. I will focus on those questions where I have more to say, and where the evidence is in my view clearest.

 

  1. Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

There are many threats to freedom of expression online, but they should be understood in the context of wider changes which in the past two decades have hugely extended the ability of internet users to impart and receive ideas, and of the need to protect rights other than freedom of expression. It is important to adopt a system-wide perspective when examining the trade-offs and conflicts between liberty and harm online.

 

Whilst state censorship has been limited under the current liability arrangements in the EU and the US, other forms of censorship and silencing have had a huge impact on freedom of expression online: harassment, online gender-based violence[5] and hate speech affect the ability of individuals to express themselves online and to receive ideas freely. There is evidence that online hatred and harassment affect women and visible minorities in particular, and that intersectional discrimination online therefore constitutes a particularly grave problem[6]. The relationship between content moderation – which both censors and gives voice – and freedom of expression is therefore complex.

 

Online content moderation can protect speech, but it can also restrict speech. Posts can be taken down, but this should occur as the result of a breach of the community guidelines of the relevant platform, and therefore within the scope of the private agreement between platform and user. In legal terms, the restriction of speech by a private actor such as an internet intermediary does not fit neatly into many definitions of breaches of freedom of expression. From a US First Amendment perspective (and that of the main platforms) it is government censorship that speech rights exist to restrict, and therefore platform censorship is not per se a breach of freedom of expression in the US. In the UK and in international human rights law, such as under the European Convention, restriction by private actors may be actionable as a breach of freedom of expression, or states may be seen to have responsibilities to protect the expression rights of users against platform censorship. International human rights law standards emerged later, in response to the collapse of democratic institutions in the twentieth century. These approaches to freedom of expression acknowledge that absolute freedom of expression for the media can be problematic, given the potential for propaganda and censorship also by private actors such as media companies, sometimes in cahoots with the state.

 

When addressing the range of questions raised in your call for evidence, it is therefore important to acknowledge the interrelationships between them. Media systems worldwide are undergoing a rapid phase of change in the longer-term shift from analogue to digital, which has displaced and undermined the institutions – such as the profession of journalism – that previously mediated democratic deliberation. It is important to acknowledge the longer-term concerns that are at stake here, and the extent to which even key concepts like freedom of expression are continuously contested. Thus, rather than focusing on a 'right' as though it pre-exists and its essence can be ascertained with the advice of the best lawyers and judges, the work of the Committee can be a key step in reconstituting and updating this right, both building upon and reaffirming the international human rights standards that the United Kingdom has done so much to build.

 

  2. How should good digital citizenship be promoted? How can education help?

 

Rapid shifts to social media have resulted in a crisis of truth and trust online: whilst there is massive engagement in 'clicktivism' online, much of it is misinformed and manipulated. This reflects a crisis in what the sociologist Jürgen Habermas calls the 'truth-seeking' public sphere: however imperfect, the institutions of the mass media were effective in generating sufficient legitimacy for twentieth-century democracy by institutionalising minimal media ethics. There is much to celebrate in the levels of engagement and voice online, but we are also witnessing a decline of trust in democracy itself, which can only be addressed at a systemic level.

 

Few would disagree that users and their rights are protected by education and critical digital literacy, but the effectiveness of media literacy policy should not be over-stated, and it should not be an excuse for policy inaction. All too often, education and digital literacy form the only point of agreement in policy debate because other issues – such as regulation – seem too hard. The result is that too much is demanded of internet users, whether this is parents managing their children's online lives or adults managing data consent. Digital literacy needs to be better funded and targeted at those who need it, and real support for literacy should be joined up with regulatory oversight and enhancement of competition where this is a realistic objective.

 

Education should not only focus on skills to enable users to be more in control of their online experience. It should enable consumers to make informed choices to switch away from powerful internet platforms with minimal costs to themselves in time, data and security. Media literacy should be 'joined up' with competition and consumer protection policy through research, engagement, audit and transparency reporting. It should also focus on 'what lies behind' the speech and the moderation of speech, including the extent to which it is governed by an ethics of truth-seeking. Onora O'Neill set out the importance of making the media assessable in her lectures on trust nearly 20 years ago, and this principle remains paramount. With the involvement of Ofcom in Online Harms regulation, the new regulator should be given a duty, building on its existing media literacy duties, to annually survey key indicators of digital literacy and skills across all age groups. It should also measure switching rates and attitudes, and provide data on attitudes to harms and safety on an open-access basis to price comparison sites and other researchers.

 

  3. Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should 'lawful but harmful' online content also be regulated?

 

Online behaviour does raise new kinds of harm (examples include revenge porn, electoral manipulation, and new forms of stalking and harassment), and laws have needed to be updated to create new offences where previous definitions do not serve adequately to curtail them. It is also necessary to update the overall arrangements and incentives that apply to platforms' responsibilities for illegal and harmful content. In some cases, online intermediaries should be regulated by a sector regulator in order to reduce harm. This enables a more detailed yet flexible engagement between regulator and regulatee, and it is the focus of the UK Government's Online Harms legislation and of the Digital Services Act which will apply in the EU.

 

From a freedom of expression perspective there are problems with blurring the boundary between the categories of illegal and harmful content. Whilst institutions such as Ofcom can be involved in regulating both, illegal content (for which a platform may be held liable for dissemination) and content that is lawful but viewed as harmful (whose dissemination may breach the terms of service – the contract between user and platform) raise different sets of concerns.[7] Having one procedure for dealing with both categories of harm may result in a chilling of legal speech or a lack of protection for legal rights, so regulators should maintain a clear distinction between harmful and illegal content.

 

  4. Should online platforms be under a legal duty to protect freedom of expression?

 

A new legal duty to protect freedom of expression is likely to confuse matters when combined with other duties.[8] According to the white paper[9], the new online harms regulator should have a duty to promote free expression, but platforms should be under a general duty of care to prevent harm, and harm could include breaches of expressive rights, for example through illegitimate takedown or a lack of respect for user rights. An example would be takedown of material on the basis of criteria that do not appear in community guidelines or the user agreement, or on the basis of vague, sweeping and unaccountable value judgements. Risk-averse blanket removals of content on the basis of complaints, without review to ascertain whether content was in fact illegal or in breach of the user agreement, would also be a breach of the expressive rights of users. If platforms are being asked to increase self-regulation, they should do so on the basis of clear guidelines of sufficient quality that are clearly accessible to users. They should also be operationally independent from government, and government should develop its guidelines on government communication to ensure a sufficient firewall between executive power and the censorship functions of platforms.

 

Whilst the largest platforms should meet a higher standard of moderation, the public interest may also be served by smaller platforms with either a more liberal approach to speech or more protective moderation, as long as they are sufficiently transparent with users. A sweeping general duty applied to all platforms to remain open to all forms of speech could prevent the provision of social media services that some consumers – for example children or minority groups – might need in order to provide safe havens for certain forms of speech, or for voices to emerge free of speech they would consider unwelcome. Whilst society as a whole benefits from free expression, individuals should also be free to develop protected spaces with a high level of speech protection should they so wish.

 

For larger platforms, the appropriate approach is high-grade self-regulation, monitored and enforced by a body independent of government (on the model, for example, of the Royal Charter on the Press, which sets out criteria for recognition of self-regulatory bodies). The Press Royal Charter could be updated to include recognition criteria for self-regulation of platforms.

  5. What model of legal liability for content is most appropriate for online platforms?

 

Liability shields (section 230 of the CDA in the US, or Article 14 of the E-Commerce Directive) have been crucial to the development of the internet. Some have argued that they made the internet[10]. There is an increasing consensus that they should be earned: they should be conditional on ethical self-regulation[11], but such self-regulation must have guaranteed separation from centres of institutional power, and particularly the state. In the context of Brexit, the United Kingdom is now ultimately responsible for setting its own liability framework for online platforms. In doing so, government should expressly set out to balance the incentives to ensure that the most powerful and influential platforms do more to promote truth-seeking media based on an ethic of fact-checking and trust, and demote anti-social, manipulative or hateful media. This can only be done by an independent regulator held to high procedural standards of transparency and openness.

 

  6. To what extent should users be allowed anonymity online?

 

Future regulation, like the last generation of regulation, will be based on a graduated approach to content, with more onerous responsibilities for harm reduction applied to the most powerful and economically dominant platforms for speech. Anonymity is not a binary question: there are degrees of anonymity, ranging from a platform which by design enables users to mask and encrypt, through platforms that incorporate legal or other procedures for revealing real identities, to enforcement of transparent 'real name' policies that are actively policed by platforms using authentication, payment or other systems. Given the graduated approach, it may be possible in a future scenario to imagine the existence of some smaller 'mask and encrypt' platforms that are used by relatively small numbers of users (indeed, in a global internet it may be difficult to prevent this), and there are good arguments, deriving from JS Mill's original argument for the truth value of free speech, to suppose that there is a social benefit in such speech.

 

Thus, as online harms regulation takes shape, anonymity will be one aspect of design that can be taken into account by the regulator, Ofcom, as part of a wider assessment of whether harm reduction has been taken sufficiently into account by the designer of a communications service. Anonymous services should not be outlawed entirely, but their provision could be made conditional on other design features: for example, users should be made aware that they are entering a potentially dangerous and/or anonymous space.

 

  7. How can technology be used to help protect the freedom of expression?

 

Technology alone cannot solve the problem, but the way technology is deployed and designed has a deep impact on rights, and more can be done to encourage tech companies to reflect on the impact of their design choices for users.

 

Freedom of expression relies on an appropriate balance between the right to free expression and justified restriction of expression where the exercise of that right infringes other rights or the rights of others. The European Convention on Human Rights is clear that restrictions should be subject to a three-part test: they must be proportionate, necessary in a democratic society, and prescribed by law. Where possible, a court should determine on a case-by-case basis whether restrictions meet those tests. In practice, and given caveats about private censorship, where proactive monitoring of content by platforms is required, the use of automated monitoring and filtering of large volumes of posts will determine the actual enjoyment of those rights, including whether specific 'free' services are in fact available. Freedom of expression – as the real, practical experience of users, rather than as a legal or philosophical principle – is the product of a complex interaction of socio-technical, market, liability, user-capability and other considerations. As new governance arrangements come into play there is the potential for them to undermine freedom of expression, and it is very important that the development of new "multi-stakeholder", "automated" and "co-regulatory" solutions deployed through technology is subject to stringent transparency requirements and public involvement. This is to an extent foreseen in the Digital Services Act framework of the European Union, and in the proposals of the MSI-REF committee of the Council of Europe[12].

 

Algorithmic transparency is increasingly important, and censorship algorithms are among the most controversial and potentially the most damaging to trust in democracy. The suspicion that platforms censor voices, races or parties out of public debate would be hugely damaging to democracy. The deployment of algorithmic filters by larger platforms should be subject to disclosure requirements. The algorithms themselves, along with data on what is removed and on what basis, should be disclosed to Ofcom. Ofcom should report annually on the operation of filtering by platforms and should have a power to recommend that filtering (and prominence)[13] algorithms be provided by an independent body.

 

  8. How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

Platforms can reduce harms in a wide variety of ways: by nudging users towards more trustworthy content, by clear labelling and warning, and by removing illegal and harmful content. Platforms' current business model rewards engagement rather than any particular social benefit, and the introduction of new duties of care will encourage the development of moderation and design that prevent harm, including harm to freedom of expression. Platforms should be encouraged to adhere to international human rights principles and standards, for example those of the Council of Europe, as they deploy their harm reduction approaches. The regulator should establish clear industry-wide standards and targets for harm reduction, but engage in a flexible dialogue with platforms themselves about how to achieve them.

 

  9. How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

It is right to highlight both the censorship and the promotion of content. Decisions by algorithms not only to restrict or demote content, but also to promote or prioritise it, constitute different forms of restriction of expression rights, as restrictions of rights to "impart and receive" ideas. Transparency can be improved through external audit both of the criteria applied and of the outcomes of algorithmic decisions.

A more radical model would be the structural separation of the recommendation algorithm from the other parts of the business model, as part of a competition settlement. I suggested this to this committee in 2018.

 

 

  10. How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

Content moderation systems can always be improved, both in terms of outcomes for users and in terms of protection of rights (to freedom of expression, but also to reputation, privacy and protection from crime and other forms of harm). The extent to which outcomes can be improved will always depend on resources and the overall balance of risks and rewards. Services will continue to be offered 'free' only so long as they generate net revenue. If platforms are obliged to provide moderation to very high standards of due process, appeal and transparency, it may be that such services can no longer be provided, or that only a small number of players can provide them. This is what a regulated monopoly looks like: policymakers need to be aware that by setting high standards for moderation and self-regulation, they are also raising barriers to entry and entering a long-term negotiation with a powerful intermediary that may choose to withdraw popular services or indeed use its powerful communication resources to criticise such a move.

 

Numerous recommendations have been made for improvements to content moderation. These include the Santa Clara Principles[14], which offer a model for moderation standards. The Global Network Initiative[15] sets out principles whereby private bodies should address rights to freedom of expression. The Committee should support these principles, but only as part of a wider settlement, in which self-regulation is subject to independent external audit, as principles alone may be merely symbolic. The operation of content moderation needs to be seen in the context of an overall system of incentives.

  11. To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users' views about content and its moderation?

 

This depends on how it is done, and public policy should be careful not to put too much unrealistic responsibility on the shoulders of users, who may lack the skills, resources or even the time to switch. There are significant barriers to switching between major platforms[16], so the competition intervention would have to be a radical one. Only by enforcing data portability (available under the GDPR) in user-friendly ways that enable porting of the sunk costs of data, and by counteracting the network externalities that offer such powerful first-mover advantages, could real switching begin to discipline powerful players through competition. Such far-reaching interventions – which themselves shape not only consumer incentives but the very nature of social networks and friendship groups – have never been tried, and they would create deep and problematic political repercussions. Updates to competition law and stronger enforcement are an important part of the solution, but they must be seen in terms of a wider regulatory framework, and not in isolation.

 

And even what regulators call 'ownership separation' – i.e. breaking up a company, a policy option for which UK regulators have argued there is a strong prima facie case[17] – may not result in significant improvements to consumer welfare if the companies thus formed simply repeat the approach of the parent company. Regulation should set out clear incentives not only to improve rates of switching and to make switching better informed, but also to set standards for harm reduction and the protection of users' rights. As the former UN Special Rapporteur on Freedom of Expression argues, this approach should be guided by human rights principles[18]. But the principles as they stand are not sufficient. They need further development and guidance, which must involve civil society in legitimate processes.

 

  12. Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

 

As this is a matter where fundamental rights are engaged, and one which involves platforms offering similar services in a wide range of countries, multilateral action, rather than ad hoc policies in individual countries, should be developed; there are multiple arenas where this important work is taking place.

 

Existing inter-governmental bodies such as the UN Human Rights Committee, the UN Special Rapporteur on freedom of expression, the OSCE Representative on Freedom of the Media and UNESCO have done important work setting out high-level principles. The UK has played a leading role in establishing a developing range of international instruments such as the Universal Declaration of Human Rights and the European Convention on Human Rights, both of which offer a developed framework of international human rights law and provide a good starting point and framework for government policy and for self-regulation in this area. The Council of Europe has developed a number of interrelated standards that increasingly apply to internet intermediaries. At a time when democracies around the world face multiple challenges, such standards offer a legitimate set of institutions around which international policy can be organised. These international human rights standards should receive the renewed and unambiguous support of the United Kingdom, and this Committee should clearly indicate to the platforms the relevant standards that they should take into account in developing their content moderation policies[19]. The FCO Media Freedom panel and the Media Freedom Pledge should be updated with such an aim in mind. Initiatives such as the international grand committee initiated by the DCMS Select Committee could be renewed with more stable terms of reference and a clear mandate to pursue the development of international human rights standards and their application to communications governance.

 

The current policy settlement in both the EU and the UK itself generates strong concerns for freedom of expression. It will require radical changes, which will be objected to on the basis of vague commitments to an undefined "freedom of expression". It is important that the process of coming to a new institutional settlement is itself legitimate, transparent and open. If it is not, it simply will not be trusted, as the debate about trusted flagging of Twitter posts – and indeed the banning of a prominent Twitter user in the US – attests.[20]

 

This Committee is right to raise these issues for debate. Whilst there is no need to reinvent the wheel, there is an opportunity for UK leadership in ensuring a new global agreement among democracies about standards for freedom of expression and the rule of law online. The Committee should reaffirm existing UK and multilateral human rights commitments at a time when they are globally under threat, and it should clarify to platforms that they should observe these, rather than US First Amendment standards, as they approach a global settlement on their governance.

 

 

January 2021

 


 


[1]              https://www.ibanet.org/IBAHRISecretariat.aspx

[2]              https://www.coe.int/en/web/freedom-expression/msi-ref

[3]              Mazzoli and Tambini. (2020). Prioritization Uncovered. https://www.coe.int/en/web/freedom-expression/-/discoverability-of-public-interest-content-online

[4]              See UN HRC General Comment 34 on Article 19: Freedoms of opinion and expression. 2011. https://www2.ohchr.org/english/bodies/hrc/docs/gc34.pdf

[5]              https://webfoundation.org/2020/11/the-impact-of-online-gender-based-violence-on-women-in-public-life/

[6]              The evidence basis of the extent and impact of harassment on expression is patchy, but even research based on data from pre 2016 that finds overall levels of harassment are low, and not directed at women, suggests that women are particularly likely to be discouraged by online harassment and hate from expressing themselves. See: Nadim M, Fladmoe A. Silencing Women? Gender and Online Harassment. Social Science Computer Review. July 2019. doi:10.1177/0894439319865518

                https://journals.sagepub.com/doi/full/10.1177/0894439319865518

[7]              I discuss this further in this article in the Journal of Media Law: https://www.tandfonline.com/doi/full/10.1080/17577632.2019.1666488 see also https://www.fljs.org/content/reducing-online-harms-through-differentiated-duty-care-response-online-harms-white-paper

[8]              Under the European Convention on Human Rights states should ensure private actors do not infringe free expression. According to the Ruggie Principles on business and human rights, all businesses should Protect, Respect and Remedy breaches of human rights.

[9]              https://www.gov.uk/government/consultations/online-harms-white-paper

[10]              See Kosseff, Jeff (2019). The Twenty-Six Words that Created the Internet. See also https://www.wsj.com/articles/the-twenty-six-words-that-created-the-internet-review-protecting-the-providers-11566255518

[11]              See Kornbluh and Goodman (2019). https://www.gmfus.org/publications/section-230-communications-decency-act-and-future-online-speech

[12]              Recommendations will be published in 2021. https://rm.coe.int/tor-e-terms-of-reference-msi-ref/1680998a01

[13]              Mazzoli and Tambini. (2020). Prioritization Uncovered. https://www.coe.int/en/web/freedom-expression/-/discoverability-of-public-interest-content-online

[14]              https://santaclaraprinciples.org/

[15]              https://globalnetworkinitiative.org/

[16]              See Chapters 1 and 2 of Digital Dominance, Oxford University Press 2018. Available open access via the link on this page: https://global.oup.com/academic/product/digital-dominance-9780190845117?lang=en&cc=gb

[17]              See the Competition and Markets Authority Advice from December 2020: “A new pro-competition regime for digital markets”. https://www.gov.uk/cma-cases/digital-markets-taskforce

[18]              https://www.project-syndicate.org/commentary/content-moderation-digital-harms-regulation-by-david-kaye-and-jason-pielemeier-2020-12

[19]              Notably Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries  (Adopted by the Committee of Ministers on 7 March 2018 at the 1309th meeting of the Ministers' Deputies). https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=0900001680790e14; Recommendation CM/Rec(2016)5[1] of the Committee of Ministers to member States on Internet freedom  (Adopted by the Committee of Ministers on 13 April 2016 at the 1253rd meeting of the Ministers’ Deputies).  https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016806415fa; see also Recommendation CM/Rec(2016)1 on protecting and promoting the right to freedom of expression and the right to private life with regard to network neutrality, Recommendation CM/Rec(2015)6 on the free, transboundary flow of information on the Internet, Recommendation CM/Rec(2014)6 on a Guide to human rights for Internet users, Recommendation CM/Rec(2013)1 on gender equality and media, Recommendation CM/Rec(2012)3 on the protection of human rights with regard to search engines, Recommendation CM/Rec(2012)4 on the protection of human rights with regard to social networking services, Recommendation CM/Rec(2011)7 on a new notion of media, Recommendation CM/Rec(2010)13 on the protection of individuals with regard to automatic processing of personal data in the context of profiling, Recommendation CM/Rec(2007)16 on measures to promote the public service value of the Internet, as well as the 2017 Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data, and the 2008 Guidelines for the cooperation between law enforcement and internet service providers against cybercrime.

[20]              The Council of Europe is currently drafting some useful guidance on precisely this point. https://rm.coe.int/publication-content-prioritisation-report/1680a07a57