{"HashCode":-849872376,"Height":841.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

DMG Media – written evidence (FEO0055)

 

House of Lords Communications and Digital Committee call for evidence on freedom of expression online

 

  1. This submission is made on behalf of DMG Media, publishers of the MailOnline, metro.co.uk and inews websites, and the Daily Mail, Mail on Sunday, Metro and ‘i’ newspapers.

 

  2. For news publishers, freedom of expression is the air we breathe: it is central to everything we do in supplying the public with reliable, fact-checked, high-quality news. It goes without saying that free speech and reliable news provide the bedrock of any democratic society.

 

  3. However, free speech and reliable news are not synonymous. Indeed, very often they are antipathetic, because there is no freedom of expression if it is limited to the facts and opinions that someone else decrees are accurate, and worthy of being given a voice.

 

  4. It is also the case that everyone is a liberal when they see freedom of expression under threat in a foreign country – far fewer are liberals when it comes to defending freedom of expression in their own country, particularly when it is their own beliefs and assumptions which that freedom has been used to criticise.

 

  5. Our company has been one of the major news publishers in the UK since before the founding of the Daily Mail in 1896, and has always fought to preserve freedom of expression.

 

  6. Ironically, the greatest threat to freedom of expression in recent times, as far as UK news publishers are concerned, came with the Leveson Inquiry, in the aftermath of which otherwise liberal opinion was mobilised in an attempt to impose on the British press a system of regulation which, however disguised, would have put in place all the basic elements of state control.

 

  7. We have always accepted, and supported, the rule of law. But law is clearly defined, and although it places some very limited restrictions on freedom of expression, it is underpinned by the presumption that all forms of expression which are not proscribed are legal and free.

 

  8. We require all our journalists to follow the Editors’ Code of Practice. Again, this places some restrictions on journalists, but at the same time explicitly protects ‘the fundamental right to freedom of expression – such as to inform, to be partisan, to challenge, shock, be satirical and to entertain’[1].

 

  9. The Code also recognises that facts may be disputed and there may be more than one version of the truth. It therefore does not impose an absolute test of accuracy, but requires journalists to ‘take care not to publish inaccurate, misleading or distorted information’[2] [Our emphasis]. The Code does not proscribe any subjects which cannot be discussed, or versions of the truth which must be adhered to.

 

  10. What is not compatible with freedom of expression is a regime which gives a regulatory body the power to determine that facts or opinions which are otherwise legal should nevertheless not be published, and that publication should be prevented by the threat of serious penalties.

 

  11. We appreciate there are those who would argue that this is necessary for the good of society – but that is the argument used by the Chinese government. This is why we have such deep misgivings about the government’s proposed online harms legislation, which would place online platforms under a duty of care, underpinned by draconian penalties, not to publish content which is perfectly legal, but still judged harmful. Who will make those judgments? And how long before the definition of harm is extended to anything which is perceived as challenging the orthodoxies – or the government – of the day?

 

  12. For all these reasons we strongly believe online harms legislation is incompatible with the freedom of expression enjoyed by the British press, whether in print or online, since newspaper licensing was abandoned more than 300 years ago. It is vital that legitimate online news content is exempt from this legislation, not only when it is published on news publishers’ own websites, but also when it is distributed by third parties online, including by platforms which are otherwise in scope.

 

  13. This is the context in which we attempt to answer the questions posed in the committee’s call for evidence. We do not pretend to have expert knowledge of all issues surrounding freedom of expression, and limit our response to those questions where we believe our experience as news publishers might be helpful.

 

Executive summary

 

  14. These are the key points we set out below:

 

- Existing civil and criminal law already addresses seriously harmful online content, but it is inadequately enforced. The police and courts should be given the resources to enforce it, and the public cheaper access to civil remedies.

- Regulating content which is ‘legal but harmful’ is incompatible with freedom of expression. Faced with draconian penalties, platforms will inevitably over-censor.

- Legitimate news content must be exempt from online harms legislation, both on news publishers’ own websites and when it is distributed by third-party platforms which are in scope.

- Online platforms should be placed under a binding legal obligation to protect freedom of expression, with penalties as serious as those for allowing access to harmful content.

- The liability shield should be retained as far as civil law is concerned: removing it would have a chilling effect on investigative journalism.

- There is a strong case for restricting online anonymity, provided individuals’ right to freedom of expression is given greater protection.

- Algorithms must be made transparent and accountable, with enforcement powers for the new Digital Markets Unit.

How should good digital citizenship be promoted? How can education help? (Q.2)

 

  15. There is no doubt that good work can be done in educating the public to recognise online content that is malevolent or fraudulent. But extreme care will have to be taken that this does not become politicised and turn into a system for reinforcing fashionable orthodoxies at the expense of freedom of expression.

 

  16. Sadly, the area of modern British life where freedom of expression is currently under greatest threat is our universities, where academics are regularly denounced for expressing the wrong views, and speakers de-platformed. University media faculties, with a few noble exceptions, tend to be staffed by individuals with strong left-wing views, sometimes coloured by antipathy towards former rivals in journalism. The public should be the judge of what news content is worth reading, not professors of journalism.

 

  17. The same caution must be applied to attempts to ‘rate’ content. There have been numerous projects in this area, none of which has gained wide acceptance. Most appear to be based on American journalistic conventions, and have a strong tendency to rate left-leaning titles above right-leaning ones. We would be happy to provide more information on this if the Committee is interested.

 

 

Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated? (Q.3)

 

  18. There are numerous existing provisions that address unacceptable online user-generated content, subject to the overarching balance of Article 10 (the right to freedom of expression) and Article 8 (the right to privacy) of the European Convention on Human Rights (ECHR), as enshrined in the Human Rights Act:

 

  19. Civil provisions:

 

- defamation (libel and slander);

- misuse of private information and breach of confidence;

- data protection law (the Data Protection Act 2018);

- harassment under the Protection from Harassment Act 1997, which is actionable as a civil claim as well as being a criminal offence;

- malicious falsehood.

  20. Criminal provisions:

 

- section 1 of the Malicious Communications Act 1988 (communications sent with intent to cause distress or anxiety);

- section 127 of the Communications Act 2003 (grossly offensive, indecent, obscene or menacing communications);

- the Protection from Harassment Act 1997;

- the Public Order Act 1986 (threatening or abusive words or behaviour, and stirring up hatred);

- section 33 of the Criminal Justice and Courts Act 2015 (disclosure of private sexual images without consent);

- the Obscene Publications Act 1959;

- the Protection of Children Act 1978 (indecent images of children);

- the Terrorism Act 2006 (encouragement of terrorism).

  21. However the committee is right to ask whether the law is adequately enforced. The answer is that it is not – and the strength of the political pressure for online harms legislation itself demonstrates as much, certainly as far as criminal law is concerned. The three criminal provisions that deal most directly with online harms are:

 

- section 1 of the Malicious Communications Act 1988;

- section 127 of the Communications Act 2003; and

- the Protection from Harassment Act 1997.

  22. The committee will be aware that the Law Commission recently held a consultation on the reform of the law relating to communications offences[3]. This was concerned with offences committed by individuals rather than the media. However we are concerned that without a complete and robust exemption for journalism, the reforms suggested by the Law Commission could have a serious chilling effect, particularly on investigative journalism. We would be happy to share our response to the Law Commission with the committee if it is interested.

 

  23. That aside, the problem with current law is that it is easy for people with money to instruct lawyers to tackle those on the internet who post content which is damaging or threatening to them. It is different for people without money, whose only recourse is the police. The police do not have the resources to deal with the huge scale of online content and the illegal conduct that often occurs. Harassment (for instance) exists as both a criminal offence and a civil cause of action. Very often, people contact the police about online harassment, only to be told that the police do not have the time or the resources to deal with it. More resources and better training for the police would be at least one answer to this problem.

 

  24. Another might be to provide the public with free – or very cheap – access to civil law. The online platforms are awash with money. They could be required to fund an arbitration scheme that would give individuals the means to take action against people who torment them on the internet. This might be greatly preferable to requiring the platforms to use algorithms to censor vast swathes of content, much of which may turn out not to be harmful at all.

 

  25. If something is manifestly harmful then it either is, or should be, illegal. The proposed regulation of content that is “legal but harmful” could very easily give rise to lists of prohibitions driven by censoriousness, moralising, or hypersensitivity. What content is “harmful”? Who would decide that, and by what means? Would it be confined to extreme points of view such as the anti-vax movement and QAnon? Even then, why should people who believe in such things – however irrational – be prohibited from expressing that belief?

 

  26. Asking private companies to police individuals’ communications (including via private channels) and to remove content deemed ‘lawful but harmful’ has clear ramifications for freedom of expression. The threat of huge fines may encourage companies to ‘over-censor’, i.e. to err on the side of caution and remove borderline content. This could lead to opinions which are perceived to be controversial being silenced. It is claimed that ‘harmful’ will be further defined in secondary legislation, but it is unclear how this could be done with anything approaching legal certainty.

 

  27. To give a hypothetical example, there could come a moment when an anti-vaxxer sounds a genuine warning. There are enormous vested interests in the success of Covid vaccines – governments, pharmaceutical companies, the medical establishment, and the public who long for an end to the disease – and the vaccines have been approved in record time. What if certain side effects have not been spotted, or have been ignored? If that were to happen, at present the evidence would very likely begin to appear in posts on user-generated sites like Mumsnet. These are monitored by journalists who, detecting a pattern of adverse outcomes, would begin asking questions of the manufacturers and the medical authorities. But under the online harms regime, online platforms may well face very heavy penalties if they surface content which contradicts the government line that the vaccine is safe, and may therefore set their algorithms to block any such content. And even if there were no blanket ban, how would an algorithm detect the difference between an anti-vaxxer with a bogus message, and a member of the public with a genuine concern, very likely not expressed in precise medical language?
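
To illustrate the difficulty, consider a deliberately naive sketch of the kind of blunt keyword filtering that severe penalties for under-blocking tend to encourage. The filter, its blocked terms and the example posts below are hypothetical illustrations, not any platform’s actual system:

```python
# A deliberately naive keyword filter of the sort a risk-averse platform
# might deploy at scale. The blocked terms and example posts are
# hypothetical illustrations.

BLOCKED_TERMS = {"vaccine", "side effect", "dangerous", "don't trust"}

def is_blocked(post: str) -> bool:
    """Flag a post if it contains any term associated with 'harmful' content."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

conspiracy_post = "Don't trust the vaccine - it is a dangerous plot!"
genuine_report = ("My mother had a worrying side effect two days after "
                  "her vaccine dose. Has anyone else seen this?")

# Both posts trigger the same rule: the filter cannot tell a bogus claim
# from a genuine safety signal expressed in everyday language.
print(is_blocked(conspiracy_post))  # True
print(is_blocked(genuine_report))   # True
```

More sophisticated machine-learning classifiers soften this problem but do not remove it: they are trained on patterns of wording, not on the truth or good faith of the underlying report.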

 

  28. There would also be serious Article 10 issues in prohibiting ‘legal but harmful’ content in relation to the current debate on trans issues. Only last month the High Court overturned the conviction under the Communications Act 2003 of Kate Scottow, who had referred to trans woman Stephanie Hayden as a man and ‘a pig in a wig’. In their ruling the judges said: ‘the freedom only to speak inoffensively is not worth having’.[4]

 

  29. We have been making these points to the government ever since the Online Harms White Paper was first published. We welcome the fact that the Government’s response to the White Paper consultation makes it clear that news websites will not be in scope of the legislation:

 

Content published by a news publisher on its own site (e.g. on a newspaper or broadcaster’s website) will not be in scope of the regulatory framework and user comments on that content will be exempted.[5]

 

  30. But this does not go far enough. Across the industry, just under 45pc of news publishers’ traffic comes directly to their own websites[6]. More than half is referred by online platforms, and would therefore be in scope and subject to algorithmic censorship. That is not compatible with freedom of expression, or Article 10 of the ECHR.

 

  31. To be fair, the government response recognises this and says ‘legislation will include robust protections for journalistic content shared on in-scope services’.[7] However we strongly believe ‘robust protection’ is not sufficient, and we have engaged in discussion with the DCMS on how the news publisher exemption can be extended to include legitimate news content when it is distributed by platforms which are in scope. In collaboration with the News Media Association we have presented to the DCMS detailed proposals for how this could be achieved. We would be very happy to share these with the committee if it is interested.

 

  32. But that still leaves the problem of journalists’ source material, much of which these days is first published as user-generated content, whether in the form of so-called citizen journalism or simply members of the public sharing experiences which have concerned them. As we hope the hypothetical anti-vaxxer example above demonstrates, we fear it will be impossible to regulate such content in a way which is compatible with freedom of expression in an open and democratic society.

 

 

 

Should online platforms be under a legal duty to protect freedom of expression? (Q.4)

 

  33. This question presupposes that online platforms do not do this. In fact they have tended to protect freedom of expression as a default position because they are US companies and – understandably – see everything through the prism of the First Amendment, which is far more robust, and more widely observed, than Article 10 ECHR. Further, they succeed as businesses because they protect freedom of expression: it is in their commercial interest. Arguably, one of the reasons they now face online harms legislation is that they have sometimes been too ready to defend freedom of expression, as for instance when in July 2020 it took Twitter 48 hours to remove anti-Semitic tweets by the grime artist Wiley, despite repeated protests and a boycott[8].

 

  34. However the United Kingdom’s record on press freedom is not as strong as some appear to believe. Section 40 of the Crime and Courts Act 2013, the coercive legislation intended to force the press into state-imposed regulation, remains on the statute book, and the Press Recognition Panel continues to operate.

 

  35. Neither the original Online Harms White Paper nor the government’s official consultation response even mentions Article 10 ECHR, with which all legislation is supposed to be compatible. True, the consultation response acknowledges the concerns that have been raised, with 44 references to freedom of expression, compared to just nine in the original White Paper. But there seems to be a tacit recognition that online harms legislation will curtail freedom of expression. For instance:

 

Alongside tackling harmful content this legislation will protect freedom of expression and uphold media freedom. Companies will be required to have accessible and effective complaints mechanisms so that users can object if they feel their content has been removed unfairly.[9]

 

  36. In other words, the default position will be that the platforms will take down content which is deemed legal but harmful. The public – and journalists, if we do not obtain the complete news publisher exemption we have asked for – will only be able to exercise the right to freedom of expression supposedly guaranteed under Article 10 by using a complaints procedure. How many of our hypothetical Mumsnet users will have the time and resources to do that?

 

  37. The online platforms are not philanthropies – they are money-making machines. For the last 20 years the pursuit of profit has made them supporters of freedom of expression. But if they are confronted by a regime enforced by penalties so draconian that their profits could be seriously threatened, commercial imperatives will throw their operations into reverse, and they will set their algorithms as cautiously as possible. Our strong preference would be that the harmful content the government seeks to ban is clearly defined in law, and the police and the courts given the resources to deal with it.

 

  38. There is a further problem. While Parliament, the government and Ofcom have a direct obligation under Article 10 to protect freedom of expression, Google and Facebook are private companies and therefore do not have the same obligation. Google’s decision on January 5 to remove TalkRadio from YouTube (which Google owns) provides stark evidence of the damage to freedom of expression this could cause.

 

  39. Google initially justified its action (which was later reversed) by saying: ‘We quickly remove flagged content that violate our Community Guidelines, including COVID-19 content that explicitly contradict expert consensus from local health authorities or the World Health Organization (WHO).’[10] In other words, the removal was a direct forerunner of online harms legislation, under which a prohibition on ‘legal but harmful’ content could mean legitimate journalism is silenced because it does not conform to government policy.

 

  40. It also foreshadowed the online harms regime in that it was, we understand, the automated execution of a ‘three strikes and you are out’ rule imposed by Google and managed from the USA. The two previous strikes, both automated, had not been communicated to TalkRadio – as with algorithm changes and digital advertising procedures, Google’s decision-making was arbitrary and secret.

 

  41. The removal also took place despite TalkRadio being an Ofcom-regulated broadcaster and the government having given repeated assurances – of which Google, with its vast PR and lobbying machine, must be aware – that legitimate news publishers would not be in scope of online harms legislation.

 

  42. Google’s cavalier disregard for freedom of expression on this occasion is a timely warning of what will very likely become routine censorship under the online harms regime. It is vital that online harms legislation places online platforms under a binding legal obligation to preserve freedom of expression when drawing up and enforcing their codes of practice. The penalties for failing to do so should be as serious as those for allowing access to harmful content.

 

 

What model of legal liability for content is most appropriate for online platforms? (Q.5)

 

  43. The growth of the internet, and of the online platforms, is founded on section 230 of the US Communications Decency Act 1996. In essence, this means that online platforms do not carry legal liability for content posted on their services by third parties, unless it contravenes criminal law.

 

  44. In Europe, including for now the UK, this is mirrored by the E-Commerce Directive, which creates a ‘liability shield’ for ‘hosting providers’, i.e. platforms featuring user-generated content. Those platforms are not liable for the information uploaded to them by users, provided that:

 

- they do not have actual knowledge of illegal activity or information (and, as regards claims for damages, are not aware of facts or circumstances from which the illegality is apparent); and

- upon obtaining such knowledge or awareness, they act expeditiously to remove the information or to disable access to it.

  45. The upshot is that, currently, online platforms are not treated like publishers – they are not responsible for checking the information they host on behalf of users and cannot be held liable for it unless they have been made aware of its illegality and have then failed to remove it. This is the approach taken in section 5 of the Defamation Act 2013, which provides a mechanism for online platforms to avoid liability for defamation if they follow certain procedures and hand over details of those posting the words complained of. Unfortunately, the process is so cumbersome and time-sensitive that it is not used in practice.

 

  46. The Online Harms Bill would represent a step away from this liability model, by placing an active duty on online platforms to protect users from harm, under threat of very significant fines. The EU is currently debating similar reforms and its proposed Digital Services Act would also increase the responsibilities of online platforms, with fines of up to 6% of global revenue for serious breaches.

 

  47. Section 230 and its international equivalents have come under much criticism in recent years for allowing the online platforms to avoid almost all responsibility for the content they carry, some of which is undeniably harmful. The platforms themselves have seriously weakened the case for section 230 by making editorial decisions, such as Twitter’s permanent ban on Donald Trump, which has been perceived by many as a political gesture.[11]

 

  48. DMG Media firmly believes the platforms should carry full responsibility for compliance with criminal law, and if necessary criminal law should be tightened to ensure it addresses all seriously harmful content.

 

  49. However there are very serious dangers to freedom of expression if the online liability shield is removed altogether. At present it is news publishers, not the online platforms, who are liable for their content under defamation law. We, and other news publishers, vet our content carefully to minimise the risk of defamation actions. However defamation law (and, increasingly, data protection law) is often used by the rich and powerful to intimidate publishers and stifle debate. Sometimes, when publishing investigations concerning wealthy and controversial subjects, news publishers knowingly take a degree of risk, believing it is in the public interest to do so and trusting they will be vindicated in the courts.

 

  50. If the liability shield is removed, the subjects of such investigations will have another target: the platform which is distributing the content. Unlike the news publisher, the platform will have no knowledge of the steps taken to validate the story, nor of the evidence which has been assembled but not published. Nor will a platform have any interest in seeing someone else’s journalism vindicated. Instead it will make a very superficial commercial decision: what will minimise the cost and resources that would go into fighting a legal challenge? The answer inevitably will be to take the story down, and probably to give an undertaking not to allow publication of similar allegations in the future. Worse still, once a platform has established that certain individuals are likely to sue, it will probably reset its algorithms to ensure no critical stories about them ever appear on its services again.

 

  51. This would be seriously chilling for investigative journalism and, if algorithms are reset, might mean that platforms would not surface critical news stories about certain individuals even if they are the result of events covered by absolute privilege, such as Parliamentary debates or court proceedings. We strongly believe news publishers must retain full responsibility for what they publish, even when it is distributed by the platforms. For that reason we do not believe the liability shield should be removed, at least as far as civil law is concerned.

 

 

To what extent should users be allowed anonymity online? (Q.6)

 

  52. There can be no doubt that one of the concerns driving online harms legislation is the readiness of some individuals to post false and abusive comments about others, hiding behind the anonymity allowed by the platforms to escape any consequences.

 

  53. This gives malevolent people an opportunity they have never had before – previously such individuals could shout abuse in the street, but it would be obvious who was doing the shouting long before they felt a policeman’s hand on their collar. In contrast, members of the public who are victims of the online equivalent often struggle to deal with it.

 

  54. Currently, if a member of the public wants to take action against content posted anonymously online, the first step is to notify the hosting platform and request that it remove the content. The likelihood of this happening depends very much on the nature of the content and whether it is clearly unlawful.

 

  55. If you want to take civil action against the individual, it is extremely unlikely that the host platform will voluntarily hand over their details, unless you utilise section 5 of the Defamation Act which, for the reasons described above, is not straightforward. You must go to court and apply for what is known as a ‘Norwich Pharmacal’ order, by showing that you have a good arguable case that a wrong has been committed against you, and that you need information held by the host platform in order to seek redress against the wrongdoer. Platforms will provide the user’s details when served with the order. However, there is no guarantee that those details will be genuine. You could be provided with a false name and an IP address. You will then need to seek an order against the internet service provider (ISP) to get the details they hold. Ultimately, if the individual is using a virtual private network (VPN) or proxy server, even the ISP may not be able to identify them. The practical difficulties presented by anonymity are obvious. Of course, if the content is criminal, the police have much greater powers to identify the perpetrator.

 

  56. There is therefore a strong case for abolishing online anonymity. After all, one of the main constraints on responsible journalism is the knowledge, as an editor, that if you publish false allegations or unfounded abuse you will be held to account. Legitimate news publications have named editors, published business addresses and assets in the UK. They carry legal liability for everything they publish, and can be sued. They can also be subjected to public calumny, whether through criticism in other media, or more formally through debate in Parliament or other public forums, or by being hauled before select committees. Why should the same not apply to individuals who hide behind anonymity to make people’s lives hell on the internet?

 

  57. There are, however, also strong arguments that anonymity helps to preserve freedom of expression online. Individuals may fear reprisals if they publish in their own name. This is obviously the case in repressive societies where criminal or political action may be taken – as for instance by the Saudi government against Jamal Khashoggi[12] or the Chinese government against the citizen journalist Zhang Zhan, who revealed the true impact of Covid in Wuhan.[13] However it is also becoming increasingly dangerous to express unfashionable opinions in democratic societies[14], as was discovered by Maya Forstater, who lost her job as a tax researcher, and her subsequent employment tribunal case, when she tweeted ‘men cannot change into women’[15], and by Eton teacher Will Knowland, who was sacked for refusing to remove a lecture on gender issues from YouTube[16].

 

  58. It is true that protections exist for whistleblowers, but these only apply where the disclosure is made to an appropriate person, i.e. the whistleblower’s employer, a regulator or legal advisers. They do not protect disclosures made publicly online.

 

  59. Stripping people who post material online of their anonymity, or at least making it very much easier for those they might harm to discover their true identity, would force them to take legal and moral responsibility for what they post, in the way professional journalists have to. The committee might think this would go a long way towards removing abuse and harm from the internet, without all the threats to freedom of expression posed by a system of state-enforced commercial censorship, which is essentially what online harms legislation proposes.

 

  60. However if that were to happen, the individual’s right to freedom of expression would need much greater protection under British law than it has at present. Unfortunately Article 10 is not as strong as the US First Amendment, nor is it respected in the same way by the government and the legal system. If online anonymity were to be removed, the right to freedom of expression would have to be guaranteed in a way that protects individuals not only against legal action, but from all the other potential consequences of expressing an opinion, such as loss of their job.

 

 

How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role? (Q.9)

 

  61. Currently there is no transparency of algorithms whatsoever. All publishers have experience of algorithms being changed, sometimes causing very serious commercial damage, without warning, explanation or means of redress. We recorded in our submission to the Committee’s inquiry on the Future of Journalism how in June 2019 Google introduced an algorithm change which cut MailOnline search visibility by 50pc, while improving that of most of our competitors. Three months later the change was reversed, also without warning or explanation.

 

  62. We suspect on that occasion Google’s motive may have been commercial – MailOnline had been particularly successful at utilising header bidding to promote non-Google ad demand, which delivers more revenue to publishers than demand sourced through Google, which uses its market power to depress the prices paid to publishers. The algorithm change coincided with the introduction by Google of its ‘Unified Pricing’ rules, which appear to have been intended to prevent header bidding.

 

  63. However there may also be political motives, whether conscious and overt or the result of unconscious bias. Google search consistently promotes the content of left-leaning news publishers, such as the Guardian and the BBC, over that of conservative-leaning sites such as MailOnline. This was particularly notable during the Brexit debate – a subject in which Mail readers were keenly interested, and on which MailOnline published a very large amount of content, which many on both sides of the debate credited with playing a key role in the outcome. Yet Google’s algorithms overwhelmingly preferred Guardian and BBC content.

 

  64. There has also been evidence during the Covid pandemic that Google and other platforms have been promoting the official line of governments and health authorities at the expense of other points of view[17]. This goes further than search results. The most notable example was Google’s decision to remove TalkRadio from YouTube, as discussed in paragraphs 38-42, for ‘including COVID-19 content that explicitly contradict expert consensus from local health authorities or the World Health Organization (WHO)’. Within 24 hours the decision was reversed. But TalkRadio is owned by News UK, and Michael Gove raised questions about the threat to freedom of expression. How many smaller websites are likely to be silenced because they publish content which challenges official orthodoxy?

 

  65. Google is not alone in exploiting the secrecy of algorithms to skew them for reasons no one can question. We are aware other publishers have problems with Facebook. We believe the apparent readiness of the platforms to use the secrecy of algorithms to covertly further their own commercial and/or political aims is toxic for democracy, as well as hugely damaging for the success of UK digital businesses.

 

  66. We have made this argument in some detail to the Competition and Markets Authority, and were pleased to see that it has been addressed in the Advice of the Digital Markets Taskforce, which recommends that the Digital Markets Unit (DMU) should have powers to enforce transparency of algorithms, including by the imposition of interim measures if necessary.[18]

 

  67. The Digital Markets Unit has yet to be set up, or to formulate its code of practice. Much more work has been done by the Australian Competition and Consumer Commission (ACCC). The ACCC’s main concern so far has been to institute a mandatory arbitration process to force platforms to pay for the news content they use. In doing so it has had to address algorithms, in order to prevent platforms using them to direct users away from Australian news publishers, which the platforms will have to pay, to English-language news publishers overseas, which the platforms could use without paying. Under the ACCC draft Code[19] the platforms would have been required:

 

- to give news publishers advance notice of changes to algorithms likely to materially affect the distribution of their content; and

- to ensure that in presenting content there is no discrimination between news publishers registered to take part in the payment for content scheme, nor discrimination between registered publishers and those not registered to take part in the scheme.[20]

 

  68. Regrettably, no doubt as a consequence of lobbying by the platforms, these recommendations have been diluted in the bill which has been laid before the Australian Parliament. The current version will still be a big improvement on the present situation in the UK, but we would strongly recommend the DMU models its code of practice in this area on the ACCC’s original recommendation.

 

 

To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation? (Q.11)

 

  69. This is another topic on which we have given extensive evidence to the CMA. At present Google and Facebook operate effective monopolies in the markets they dominate: search, digital advertising and social media. The news publishing industry is pluralistic, which is not only highly desirable for maintaining freedom of expression, but a statutory requirement. There is therefore a complete imbalance of power between the platforms and news publishers. Contracts are imposed on a take-it-or-leave-it basis, and operating policies and practices are changed arbitrarily, often without any warning or explanation. Smaller publishers frequently complain they are unable even to speak to anyone at the platforms.

 

  70. The Digital Markets Taskforce has set out in detail how it proposes the Digital Markets Unit should impose a pro-competitive code of practice on the digital platforms[21]. It has our full support.

 

 

Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration? (Q.14)

 

  71. There has been enormous progress since the launch of the Cairncross Review in March 2018. As a global digital publishing company, with editorial and commercial operations in the USA and Australia as well as the UK, DMG Media works with the competition authorities in all three jurisdictions, and has also worked with the EU Commission.

 

  72. Unsurprisingly, different governments have different priorities, and work to different schedules, but the digital platforms are global businesses, and create the same problems in every jurisdiction, so many of the solutions will be the same.

 

  73. It is widely recognised that the commercial problems facing news publishers are a serious threat to freedom of expression, because they threaten the funding of reliable journalism, and the work of the CMA on digital advertising is much admired around the world. We know from our own discussions that it has informed similar work being done by the Department of Justice and state attorneys-general in the US and the ACCC in Australia.

 

  74. The US authorities are using legal measures to enforce competition remedies on the digital platforms, because that is how competition policy is enforced in the US. We will have to wait to see whether the Biden administration pursues this as vigorously as the Trump administration did, but on this issue there is no great divergence between the political parties, so we are optimistic. The ACCC is likely to recommend a regulatory approach more akin to the Digital Markets Unit later this year.

 

  75. The other major area of policy initiatives is making the internet a safer place for the public. In this, Britain’s online harms legislation is much further advanced than anything currently being considered elsewhere. However just before Christmas the EU Commission published two pieces of proposed legislation, the Digital Markets Act and the Digital Services Act. The former is intended to address the problems to be dealt with by the Digital Markets Unit in the UK; the latter is aimed at online harms. We have not yet had an opportunity to study these in detail. We are not at present aware of any similar proposals in Australia or the US.

 

Conclusion

 

  76. We cannot stress too strongly the dangers to freedom of expression inherent in the proposed online harms legislation. We fully accept there is some content online which is in breach of criminal law, and that those responsible should be prosecuted. We also fully accept individuals should have access to civil remedies where online content has caused them personal damage, such as through libel or invasion of privacy, and we support any measures to make those remedies more accessible to ordinary people.

 

  77. What is not compatible with freedom of expression is legislation which requires and empowers commercial monopolies to suppress content because it does not comply with fashionable thinking, or someone deems it offensive, or it challenges government policy. Our free press will no longer be free if it is subject to such a system of control, and for that reason it must be exempt, both with regard to the content it publishes on its own websites, and to that content when it is distributed by online platforms.

 

  78. But that is only part of the argument – freedom of expression is the right of the ordinary citizen, just as much as of the professional journalist. The Committee should consider very carefully whether the online harms apparatus proposed by the government is fundamentally compatible with freedom of expression. ‘Safeguards’ will make little difference if the whole system is built around the premise that subjects of perfectly legitimate interest cannot be discussed unless that discussion complies with norms dictated by a government body (Ofcom) or a commercial monopoly (an online platform).

 

  79. We would conclude by saying there is one piece of legislation which is as relevant to freedom of expression in the digital age as it was in the year it was adopted, 1791 – the First Amendment to the US Constitution. It runs to just 45 words, which are worth quoting here:

 

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

 

  80. We respectfully suggest that, whatever recommendations the Committee makes about freedom of expression online, it considers whether it is not time the Mother of Parliaments made a similarly unequivocal commitment to free speech.

 

 

January 2021


 


[1]              https://www.ipso.co.uk/editors-code-of-practice/

[2]              Ibid

[3]              https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/

[4]              https://www.dailymail.co.uk/news/article-9066069/Woke-folk-beware-Freedom-speech-includes-right-offend-say-judges-landmark-ruling.html

[5]              https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response para 22

[6]              https://assets.publishing.service.gov.uk/media/5efb22fbd3bf7f768fdcdfae/Appendix_S_-_the_relationship_between_large_digital_platforms_and_publishers.pdf p.S6

[7]              Ibid para 23

[8]              https://www.bbc.co.uk/news/technology-53553573

[9]              https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response Joint Ministerial Foreword

[10]              https://www.dailymail.co.uk/news/article-9114039/Covid-UK-YouTube-shuts-TalkRadios-channel-presenters-challenge-lockdown-policy.html

[11]              https://www.dailymail.co.uk/debate/article-9136627/STEPHEN-GLOVER-Twitter-bans-Donald-Trump-voice-terrorists.html

[12]              https://www.bbc.co.uk/news/world-europe-55060088

[13]              https://www.dailymail.co.uk/news/article-9092217/Chinese-citizen-journalist-37-reported-Wuhans-coronavirus-outbreak-jailed-4-years.html

[14] 

[15]              https://www.theguardian.com/society/2019/dec/18/judge-rules-against-charity-worker-who-lost-job-over-transgender-tweets

[16]              https://www.dailymail.co.uk/news/article-9030625/Eton-teacher-sacked-free-speech-row-pictured-attends-appeal-hearing.html

[17]              https://www.dailymail.co.uk/news/article-8092547/NHS-coronavirus-guidance-Google-searches.html

[18]              https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice_--.pdf p.17, 50

[19]              https://www.accc.gov.au/system/files/Exposure%20Draft%20Bill%20-%20TREASURY%20LAWS%20AMENDENT%20%28NEWS%20MEDIA%20AND%20DIGITAL%20PLATFORMS%20MANDATORY%20BARGAINING%20CODE%29%20BILL%202020.pdf p.11-13

[20]              Ibid p.17

[21]              https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice_--.pdf