
 

Simkins LLP – written evidence (FEO0024)

 

House of Lords Communications and Digital Committee inquiry into freedom of expression online

 

This submission has been prepared by members of the Reputation team at Simkins LLP, a media and entertainment law firm based in London. We primarily represent claimants in reputation matters. We have significant experience representing and advising clients with respect to the laws of defamation, misuse of private information, breach of confidence and data protection, as well as injunctive relief applications and online issues. Our reason for submitting evidence is to illustrate the issues faced by claimants and their lawyers regarding the boundaries of free expression online and to set out why, drawing from our experience, we consider that (a) balance is necessary when protecting competing rights online, and (b) the current legal and/or regulatory environment does not provide individuals with sufficient protection against damaging online expression. We have only answered those questions where we believe we can share potentially useful insights based on our experience.

 

1.              Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

We consider that while freedom of expression may be under threat online in certain jurisdictions (or may never truly have existed in them), that is not the case in England and Wales. In some jurisdictions, censorship and other deterrents to online expression clearly have a significant effect on the extent to which individuals can express themselves and access different viewpoints and information. Here, by contrast, the balance is broadly correct and interference is minimal. However, whilst we do not profess to have all the answers, our experience demonstrates that there is a serious issue to be grappled with: social media companies have been unable or unwilling to strike the correct balance between competing fundamental rights, and the remedies available to those affected by such failures have been inconsistent and/or insufficient.

 

Whilst we of course believe in freedom of speech, there is a risk that too much emphasis is placed on the right to freedom of expression, suggesting that it should take precedence over other competing fundamental rights. That would, in our view, be both dangerous and wrong. The law treats these rights as being of equal importance, to be balanced against one another where they conflict. It follows that, in the eyes of the law, freedom of expression should rightly be protected, but it should not be given precedence over competing rights without that balancing exercise first being carried out properly.

 

A lack of suitable controls on freedom of expression – for example, controls to prevent the dissemination of harmful and/or unlawful content – plainly has the potential to cause significant harm to individuals’ rights, including (among others) their health, their privacy and data protection rights, their reputation, and their copyright and other intellectual property rights.

 

Absent any appropriate controls on free speech, there is a risk that individuals who are so minded may spread – by way of example – misinformation (so-called ‘fake news’), harmful ‘deepfakes’ and potentially dangerous conspiracy theories.[1] It has been widely reported that such misinformation has already been deployed to interfere with national democratic elections[2] and, currently, there is a risk that it may detrimentally impact the Covid-19 vaccination programme.[3]

 

In our view, people should broadly be free to publish within the current confines of the law but, at present, the law is neither sufficiently equipped nor quick enough to deal properly with, for example, the dissemination of misinformation. Social media companies must be required to develop better systems and practices: ones that provide a much safer and less hostile environment; ones that attempt to prevent offending material from being published in the first place; and ones that swiftly remove any such material that gets through their systems once it is brought to their attention. The current systems and practices are insufficient in many ways.

 

We appreciate that there is unlikely to be a system that can recognise all types of offending material automatically and preclude it from a service before it is even published, and there is of course the fundamental problem that laws differ between jurisdictions. The key is therefore likely to be a combination of the following:

 

(a)     requiring faster rectification by the social media companies, so that they must deal swiftly (meaning within an hour or so) with a complaint that has been properly brought to their attention, rather than – as they often do when faced with content whose veracity they cannot immediately ascertain – simply erring on the side of freedom of expression;

 

(b)     greater sanctions for serious offences, both for the social media company that fails to rectify the position properly and expeditiously and for the individual(s) who posted the unlawful content; and

 

(c)      much more emphasis being placed on identifying the origin and qualitative value of information, since a fundamental problem online is that false information, conspiracy theories and opinion can be difficult to distinguish from factually correct, well-informed and well-researched information.

 

This is how freedom of expression will ultimately be best protected: by holding social media companies to a much higher standard, and by ensuring that individuals, groups or organisations intentionally posting (or causing to be posted) misinformation and falsities – as well as other information that infringes individuals’ rights, such as private information or personal data – are, so far as possible, prevented from doing so in the first place and, where they nonetheless do so, face proportionate and swift sanction.

There are some important differences between exercising freedom of expression online and offline: online speech has the potential to be permanently and readily available, to proliferate and to reach a much larger audience; it can easily be posted anonymously; its veracity can often be questionable; and it will typically have gone through few or no qualitative controls.

 

3.              Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

In our view, insofar as unlawful online content is concerned, user-generated content is broadly covered adequately in principle, in that it is subject to the existing legislation and case law on (among other things) defamation, privacy, data protection, copyright and harassment. If user-generated content is published on a well-established and legitimate platform, it will also be subject to the platform’s terms of service and relevant code(s) of conduct. However, the real issue is whether individuals have a practical remedy available to them, particularly when confronted with inadequate and often plainly unhelpful responses from the social media companies themselves. Unfortunately, in many cases we have seen, they do not.

 

We do not consider that ‘lawful but harmful’ online content is covered adequately by the existing law. Recent and topical examples of such content include the conspiracy theories linking Covid-19 to 5G, and vaccine misinformation. We consider that regulation should apply to ‘lawful but harmful’ content, perhaps along the lines of Section 2 (Harm and offence) of the Ofcom Broadcasting Code, and/or through imposing positive obligations on platforms to identify such content and (as appropriate) remove it, qualify it with a warning and/or reduce its prominence.

 

It follows that, in practice, we do not consider that the current legal and/or regulatory environment provides individuals with sufficient protection against harmful online publications. For individuals whose rights have been infringed, whether by libellous statements or invasions of their privacy, there is no remedy that can practically be implemented without significant difficulty, cost and delay. Social media platforms have their own mechanisms for dealing with such issues but, in our experience, these are inadequate, lack transparency as to their processes and refer primarily to the platforms’ own terms of service, which may themselves be subject to Californian or other foreign law. Platforms are typically slow to react to unlawful content, if they react at all, and it can be difficult and costly to establish who is behind anonymous postings. Their responses to ‘lawful but harmful’ content are even less adequate, as the proliferation of such material online during the pandemic makes evident.

 

The key issue from a legal perspective is that social media companies and other website hosts seek to avoid responsibility and liability by asserting that they do not themselves ‘publish’ the content. In our view that argument is ultimately unsustainable: without the platform, individuals would not have the ability to publish and reach such vast audiences.

In instances where an internet user cannot be found and/or it is unclear who is responsible for publishing the offending content, social media companies may absolve themselves of liability by removing the material. They also take some legal complaints, such as copyright infringement, more seriously than complaints of misuse of private information or defamation. However, when the identity of a poster is known, they all too often choose to leave the offending material online, directing the complainant to pursue the poster directly to seek redress.

 

When they refuse to take action, there is little practical recourse, and we have recently seen some social media platforms assert that only a court can determine whether something is false and defamatory, so the content should remain online in the meantime. Where the law provides no adequate sanction or incentive for social media companies to act reasonably, responsibly and expeditiously, the result is that freedom of (false) expression is effectively given precedence over individuals’ (and indeed companies’) rights.

 

4.              Should online platforms be under a legal duty to protect freedom of expression?

 

We do not consider that it would be appropriate for freedom of expression to be singled out in a way that could result in it being interpreted as taking precedence over other fundamental rights (in particular Article 8 of the European Convention on Human Rights). It is important to ensure that individuals are able to exercise their freedom of expression online. However, the online space is ripe for violations of individual rights (e.g. privacy, data protection, intellectual property and reputational rights), particularly given the quasi-anonymity often afforded by online platforms. Therefore, platforms should have some legal obligation to protect competing rights equally, and to reach a balance in each individual case. They must not simply be allowed to continue to assert that anything said is of value and should be protected.

 

5.              What model of legal liability for content is most appropriate for online platforms?

 

We consider the most appropriate model to be one whereby legal liability for content shifts squarely to online platforms:

(i)      when they do not act to take unlawful, or lawful but harmful, content down within a reasonable but swift period of time, which may vary depending on the nature of the material and/or the level of harm that might be caused;

(ii)     where removal is not determined to be appropriate, if they do not, upon request, flag the material with a warning and/or legal notice after notification; and

(iii)    if they fail to introduce adequate automated systems to identify unlawful content, prevent it from being published in the first place and remove it promptly if it does get through – particularly where the platform has previously been notified of similar offending content by that user, or where the same content is repeated by multiple different users.

This would help reduce situations in which one unlawful post is picked up by other users and spreads quickly before the original content can be removed.

 

 

6.              To what extent should users be allowed anonymity online?

 

It would certainly make enforcement of legal rights easier if users were not allowed to be anonymous online. One problem with the online world is that it can be very difficult and expensive for a person whose rights have been infringed even to try to limit the damage being caused, often by quasi-anonymous persons or by those with an open disregard for the law.

 

However, insisting on all users being publicly identifiable would be problematic, particularly for those who are based in jurisdictions with questionable regimes, and/or where the risk to the individual is great.

 

We do take the view, however, that consideration should be given to whether platforms could or should be obliged to take steps to verify their users’ identities, perhaps as a condition of signing up or remaining as a user, so that they at least hold that verified information confidentially and can be obliged to provide it to third parties in appropriate circumstances (e.g. to law enforcement bodies or pursuant to a Norwich Pharmacal / third-party disclosure order). That must not mean, however, that social media companies can simply shift the burden onto complainants and require them to seek such a court order while the companies continue to publish the information online. The urgent removal of unlawful material and the identification of the individual who posted it must be treated as separate issues if the intention is truly to protect all legitimate legal rights.

 

Part of the problem at present is that users see the online world as somewhere they can express themselves without consequence and are therefore more likely to abuse freedom of expression and publish more harmful content than they would offline. If the extent of online anonymity were somewhat limited, for instance in the manner suggested above, users may be less likely to post the harmful content in the first place.

 

We are now approximately a quarter of a century on from when the internet began to surge in popularity. We believe that maintaining online rights is incredibly important, and that freedom of speech online is one of the reasons why the internet can be a vastly helpful tool for the general good of society. Balanced against this, however, a ‘Wild West’ environment in which legitimate rights cannot be properly and proportionately enforced must not be allowed to continue indefinitely. Nor can social media companies continue to act in a manner that ultimately appears to place them above the law, and in which, as is now self-evident, they create and/or amplify an environment that polarises the public in so many ways.

 

 

13 January 2021

 


 


[1]              https://www.bbc.co.uk/news/blogs-trending-55355911

[2]              https://www.bbc.co.uk/news/technology-44967650

[3]              https://www.bbc.co.uk/news/55364865