DMG Media—written evidence (FEO0055)
House of Lords Communications and Digital Committee call for evidence on freedom of expression online
- This submission is made on behalf of DMG Media, publishers of the MailOnline, metro.co.uk and inews websites, and the Daily Mail, Mail on Sunday, Metro and ‘i’ newspapers.
- As news publishers, freedom of expression is the air we breathe, central to everything we do in supplying the public with reliable, fact-checked, high quality news. It goes without saying that free speech and reliable news provide the bedrock of any democratic society.
- However free speech and reliable news are not synonymous. Indeed very often they are antipathetic, because there is no freedom of expression if it is limited to the facts and opinions that someone else decrees are accurate, and worthy of being given a voice.
- It is also the case that everyone is a liberal when they see freedom of expression under threat in a foreign country – far fewer are liberals when it comes to defending freedom of expression in their own country, particularly when it is their own beliefs and assumptions which that freedom has been used to criticise.
- Our company has been one of the major news publishers in the UK since before the founding of the Daily Mail in 1896, and has always fought to preserve freedom of expression.
- Ironically the greatest threat to freedom of expression in recent times, as far as UK news publishers are concerned, came with the Leveson Inquiry, in the aftermath of which otherwise liberal opinion was mobilised to attempt to impose on the British press a system of regulation which, however disguised, would have put in place all the basic elements of state control.
- We have always accepted, and supported, the rule of law. But law is clearly defined, and although it places some very limited restrictions on freedom of expression, it is underpinned by the presumption that all forms of expression which are not proscribed are legal and free.
- We require all our journalists to follow the Editors’ Code of Practice. Again, this places some restrictions on journalists, but at the same time explicitly protects ‘the fundamental right to freedom of expression – such as to inform, to be partisan, to challenge, shock, be satirical and to entertain’.
- The Code also recognises that facts may be disputed and there may be more than one version of the truth. It therefore does not impose an absolute test of accuracy, but requires journalists to ‘take care not to publish inaccurate, misleading or distorted information’ [Our emphasis]. The Code does not proscribe any subjects which cannot be discussed, or versions of the truth which must be adhered to.
- What is not compatible with freedom of expression is a regime which gives a regulatory body the power to determine that facts or opinions which are otherwise legal, should nevertheless not be published, and that publication should be prevented by the threat of serious penalties.
- We appreciate there are those who would argue that this is necessary for the good of society – but that is the argument used by the Chinese government. This is why we have such deep misgivings about the government’s proposed online harms legislation, which would place online platforms under a duty of care, underpinned by draconian penalties, not to publish content which is perfectly legal, but still judged harmful. Who will make those judgments? And how long before the definition of harm is extended to anything which is perceived as challenging the orthodoxies – or the government – of the day?
- For all these reasons we strongly believe online harms legislation is incompatible with the freedom of expression enjoyed by the British press, whether in print or online, since newspaper licensing was abandoned more than 300 years ago. It is vital that legitimate online news content is exempt from this legislation, not only when it is published on news publishers’ own websites, but also when it is distributed by third parties online, including by platforms which are otherwise in scope.
- This is the context in which we attempt to answer the questions posed in the committee’s call for evidence. We do not pretend to have expert knowledge of all issues surrounding freedom of expression, and limit our response to those questions where we believe our experience as news publishers might be helpful.
- These are the key points we set out below:
- There are numerous provisions under both criminal and civil law to deal with harmful activity online. The problem is police lack the resources to prosecute, and ordinary members of the public cannot afford to use cumbersome civil law remedies.
- But the alternative offered by online harms regulation raises very serious risks for freedom of expression in general, and robust, questioning journalism in particular. It is vital that journalism produced by legitimate news publishers is exempt from online harms legislation, both when it is presented on publishers’ own news websites and when it is distributed by online platforms.
- It is also vital that online platforms are placed under a clear legal obligation to protect freedom of expression when drawing up and enforcing online harms codes of practice, with stringent penalties if they do not do so.
- We strongly believe that as far as news content is concerned, legal liability should remain with the publisher which creates it, not with online platforms which distribute it.
- There may be a case for ending online anonymity, but only if the individual’s right to freedom of expression is given much stronger protection under British law.
- There is an urgent need for transparency of algorithms, and to ensure news publishers are given warning and explanation of any changes to them made by platforms.
How should good digital citizenship be promoted? How can education help? (Q.2)
- There is no doubt that good work can be done in educating the public to recognise online content that is malevolent or fraudulent. But extreme care will have to be taken that this does not become politicised, and turn into a system for reinforcing fashionable orthodoxies at the expense of freedom of expression.
- Sadly the area of modern British life where freedom of expression is currently under greatest threat is our universities, where academics are regularly denounced for expressing the wrong views, and speakers de-platformed. University media faculties, with a few noble exceptions, tend to be staffed by individuals with strong left-wing views, sometimes coloured by antipathy towards former rivals in journalism. The public should be the judge of what news content is worth reading, not professors of journalism.
- The same caution must be applied to attempts to ‘rate’ content. There have been numerous projects in this area, none of which has gained wide acceptance. Most appear to be based on American journalistic conventions, and have a strong tendency to rate left-leaning titles above right-leaning ones. We would be happy to provide more information on this if the Committee is interested.
Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated? (Q.3)
- There are numerous existing provisions that address unacceptable online user-generated content, subject to the overarching balance of Article 10 (the right to freedom of expression) and Article 8 (the right to privacy) of the European Convention on Human Rights (ECHR), as enshrined in the Human Rights Act:
- Civil provisions:
- Data Protection (including the “right to be forgotten”, which can require webpages and search results to be amended/deleted);
- Misuse of private information;
- Protection from Harassment Act 1997, section 2.
- Criminal provisions:
- Malicious Communications Act 1988, section 1;
- Protection from Harassment Act 1997, section 2;
- Communications Act 2003, section 127;
- Making a threat to kill, contrary to section 16 Offences Against the Person Act 1861;
- Making a threat to commit criminal damage, contrary to section 2 Criminal Damage Act 1971;
- Controlling or coercive behaviour, contrary to section 76 Serious Crime Act 2015;
- Blackmail, contrary to section 21 Theft Act 1968;
- Juror misconduct, contrary to sections 20A-G Juries Act 1974;
- Contempt of court, contrary to the Contempt of Court Act 1981;
- Publishing material which may lead to the identification of a complainant of a sexual offence, contrary to section 5 Sexual Offences (Amendment) Act 1992;
- Intimidating a witness or juror, contrary to section 51 Criminal Justice and Public Order Act 1994;
- Breach of automatic or discretionary reporting restrictions, contrary to section 49 Children and Young Persons Act 1933 and section 45 Youth Justice and Criminal Evidence Act 1999;
- Breach of a restraining order, contrary to section 5 Protection from Harassment Act 1997;
- Disclosing private sexual images without consent (“revenge pornography”), contrary to section 33 Criminal Justice and Courts Act 2015;
- Causing sexual activity without consent, or causing or inciting a child to engage in sexual activity, or sexual communication with a child contrary to sections 4, 8, 13, 15A Sexual Offences Act 2003;
- Taking, distributing, possessing or publishing indecent photographs of a child, contrary to section 1 Protection of Children Act 1978.
- However the committee is right to ask whether the law is adequately enforced. The answer is that it is not, and the fact that there is strong political pressure for online harms legislation demonstrates that it is not, certainly as far as criminal law is concerned. The three criminal provisions that deal most directly with online harms are:
- Malicious Communications Act 1988, s.1 – the Act criminalises grossly offensive messages, threats or false information that are sent with the intention of causing distress or anxiety;
- Protection from Harassment Act 1997, s.2 – the Act criminalises causing alarm or distress by harassment;
- Communications Act 2003, s.127 – the Act makes it an offence to send a message that is grossly offensive or of an indecent, obscene or menacing character over a public electronic communications network.
- The committee will be aware that the Law Commission recently held a consultation on the reform of the law relating to communications offences. This was concerned with offences committed by individuals rather than the media. However we are concerned that without a complete and robust exemption for journalism, the reforms suggested by the Law Commission could have a serious chilling effect, particularly on investigative journalism. We would be happy to share our response to the Law Commission with the committee if it is interested.
- That aside, the problem with current law is that it is easy for people with money to instruct lawyers to tackle those on the internet who post content which is damaging or threatening to them. It is different for people without money, whose only recourse is the police. The police do not have the resources to deal with the huge scale of online content and the illegal conduct that often occurs. Harassment (for instance) exists as both a criminal offence and a civil cause of action. Very often, people contact the police about online harassment, only to be told that the police do not have the time or the resources to deal with it. More resources and better training for the police would be at least one answer to this problem.
- Another might be to provide the public with free – or very cheap - access to civil law. The online platforms are awash with money. They could be required to fund an arbitration scheme that would give individuals the means to take action against people who torment them on the internet. This might be greatly preferable to requiring the platforms to use the algorithms to censor vast swathes of content, much of which may turn out not to be harmful at all.
- If something is manifestly harmful then it either is, or should be, illegal. The proposed regulation of content that is “legal but harmful” could very easily give rise to lists of prohibitions driven by censoriousness, moralising, or hypersensitivity. What content is “harmful”? Who would decide that, and by what means? Would it be confined to extreme points of view such as the anti-vax movement and QAnon? Even then, why should people who believe in such things – however irrational – be prohibited from expressing that belief?
- Asking private companies to police individuals’ communications (including via private channels) and to remove content deemed ‘lawful but harmful’ has clear ramifications for freedom of expression. The threat of huge fines may encourage companies to ‘over-censor’ i.e. to err on the side of caution and remove borderline content. This could lead to opinions which are perceived to be controversial being silenced. It is claimed that ‘harmful’ will be further defined in secondary legislation, but it is unclear how this could be done with anything approaching legal certainty.
- To give a hypothetical example, there could come a moment when an anti-vaxxer sounds a genuine warning. There are enormous vested interests in the success of Covid vaccines – governments, pharmaceutical companies, the medical establishment, and the public who long for an end to the disease - and they have been approved in record time. What if certain side effects have not been spotted, or have been ignored? If that were to happen, at present the evidence would very likely begin to appear in posts on user-generated sites like Mumsnet. These are monitored by journalists who, detecting a pattern of adverse outcomes, would begin asking questions of the manufacturers and the medical authorities. But under the online harms regime online platforms may well face very heavy penalties if they surface content which contradicts the government line that the vaccine is safe, and may therefore set their algorithms to block any such content. And even if there were no blanket ban, how would an algorithm detect the difference between an anti-vaxxer with a bogus message, and a member of the public with a genuine concern, very likely not expressed in precise medical language?
- There would also be serious Article 10 issues in prohibiting ‘legal but harmful’ content in relation to the current debate on trans issues. Only last month the Court of Appeal overturned the conviction under the 2003 Communications Act of Kate Scottow, who had referred to trans woman Stephanie Hayden as a man and ‘a pig in a wig’. In their ruling the judges said: ‘the freedom only to speak inoffensively is not worth having’.
- We have been making these points to the government ever since the Online Harms White Paper was first published. We welcome the fact that the Government’s response to the White Paper consultation makes it clear that news websites will not be in scope under the legislation:
Content published by a news publisher on its own site (e.g. on a newspaper or broadcaster’s website) will not be in scope of the regulatory framework and user comments on that content will be exempted.
- But this does not go far enough. Across the industry just under 45pc of news publishers’ traffic is generated directly by their own websites. More than half is referred by online platforms, and therefore would be in scope and subject to algorithmic censorship. That is not compatible with freedom of expression, or Article 10 of the ECHR.
- To be fair the government response recognises this and says ‘legislation will include robust protections for journalistic content shared on in-scope services’. However we strongly believe ‘robust protection’ is not sufficient, and have engaged in discussion with the DCMS on how the news publisher exemption can be extended to include legitimate news content when it is distributed by platforms which are in scope. In collaboration with the News Media Association we have presented to the DCMS detailed proposals for how this could be achieved. We would be very happy to share these with the committee if it is interested.
- But that still leaves the problem of journalists’ source material, much of which these days is first published as user-generated content, whether it is in the form of so-called citizen journalism, or is simply members of the public sharing experiences which have concerned them. As we hope the hypothetical anti-vaxxer example above demonstrates, we fear it will be impossible to achieve this in a way which is compatible with freedom of expression in an open and democratic society.
Should online platforms be under a legal duty to protect freedom of expression? (Q.4)
- This question presupposes that online platforms do not do this. In fact they have tended to protect freedom of expression as a default position because they are US companies and – understandably – see everything through the prism of the First Amendment, which is far more robust, and more widely observed, than Article 10 ECHR. Further, they succeed as businesses because they protect freedom of expression: it is in their commercial interest. Arguably, one of the reasons they now face online harms legislation is that they have sometimes been too ready to defend freedom of expression, as for instance when in July 2020 it took Twitter 48 hours to remove anti-Semitic tweets by the grime artist Wiley, despite repeated protests and a boycott.
- However the United Kingdom’s record on press freedom is not as strong as some appear to believe. Section 40 of the Crime and Courts Act, the coercive legislation intended to force the press into state-imposed regulation, remains in statute, and the Press Recognition Panel continues to operate.
- Neither the original Online Harms White Paper nor the government’s official consultation response even mentions Article 10 ECHR, with which all legislation is supposed to be compatible. True, the consultation acknowledges the concerns that have been raised, with 44 references to freedom of expression, compared to just nine in the original White Paper. But there seems to be a tacit recognition that online harms legislation will curtail freedom of expression. For instance:
Alongside tackling harmful content this legislation will protect freedom of expression and uphold media freedom. Companies will be required to have accessible and effective complaints mechanisms so that users can object if they feel their content has been removed unfairly.
- In other words, the default position will be that the platforms will take down content which is deemed legal but harmful. The public – and journalists, if we do not obtain the complete news publisher exemption we have asked for - will only be able to exercise the right to freedom of expression supposedly guaranteed under Article 10 by using a complaints procedure. How many of our hypothetical Mumsnet users will have the time and resources to do that?
- The online platforms are not philanthropies – they are money-making machines. For the last 20 years the pursuit of profit has made them supporters of freedom of expression. But if they are confronted by a regime enforced by penalties so draconian their profits could be seriously threatened, commercial imperatives will throw their operations into reverse, and they will set their algorithms as cautiously as possible. Our strong preference would be that the harmful content the government seeks to ban is clearly defined in law, and the police and the courts given the resources to deal with it.
- There is a further problem. While Parliament, the government and Ofcom have a direct obligation under Article 10 to protect freedom of expression, Google and Facebook are private companies and therefore do not have the same obligation. Google’s decision on January 5 to remove TalkRadio from YouTube (which it owns) provides stark evidence of the damage to freedom of expression this could cause.
- Google initially justified its action (which was later reversed) by saying: ‘We quickly remove flagged content that violate our Community Guidelines, including COVID-19 content that explicitly contradict expert consensus from local health authorities or the World Health Organization (WHO).’ In other words, the removal was a direct forerunner for online harms legislation, under which a prohibition on ‘legal but harmful’ content could mean legitimate journalism is silenced because it does not conform to government policy.
- It also foreshadowed online harms in that it was, we understand, an automated execution of a ‘three strikes and you are out’ rule imposed by Google and managed from the USA. The two previous strikes, both automated, had not been communicated to TalkRadio – as with algorithm changes and digital advertising procedures, Google’s decision-making was arbitrary and secret.
- The removal also took place despite TalkRadio being an Ofcom-regulated broadcaster and the government having given repeated assurances – of which Google, with its vast PR and lobbying machine, must be aware - that legitimate news publishers would not be in scope of online harms legislation.
- Google’s cavalier disregard for freedom of expression on this occasion is a timely warning of what will very likely become routine censorship under the online harms regime. It is vital that online harms legislation places online platforms under a binding legal obligation to preserve freedom of expression when drawing up and enforcing their codes of practice. The penalties for failing to do so should be as serious as those for allowing access to harmful content.
What model of legal liability for content is most appropriate for online platforms? (Q.5)
- The growth of the internet, and of the online platforms, is founded on section 230 of the US 1996 Communications Decency Act. In essence, this means that online platforms do not carry legal liability for content posted on their services by third parties, unless it contravenes criminal law.
- In Europe, including for now the UK, this is mirrored by the E-Commerce Directive, which creates a ‘liability shield’ for ‘hosting providers’ i.e. platforms featuring user-generated content. Those platforms are not liable for the information uploaded to them by users, provided that:
- the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
- the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
- The Directive also prohibits member states from imposing a ‘general monitoring obligation’ i.e. any duty requiring providers to actively check the information they transmit or store to look for any illegal content.
- The upshot is that, currently, online platforms are not treated like publishers – they are not responsible for checking the information they host on behalf of users and cannot be held liable for it unless they have been made aware of its illegality and have then failed to remove it. This is the approach taken in section 5 of the Defamation Act 2013, which provides a mechanism for online platforms to avoid liability for defamation if they follow certain procedures and hand over details of those posting the words complained of. Unfortunately, the process is so cumbersome and time-sensitive that it is not used in practice.
- The Online Harms bill would represent a step away from this liability model, by placing an active duty on online platforms to protect users from harm, under threat of very significant fines. The EU is currently debating similar reforms and its proposed Digital Services Act would also increase the responsibilities on online platforms, with fines of up to 6% of global revenue for serious breaches.
- Section 230 and its international equivalents have come under much criticism in recent years for allowing the online platforms to avoid almost all responsibility for the content they carry, some of which is undeniably harmful. The platforms themselves have seriously weakened section 230 by making editorial decisions, such as Twitter’s action in permanently banning Donald Trump, which has been perceived by many as a political gesture.
- DMG Media firmly believes the platforms should carry full responsibility for compliance with criminal law, and if necessary criminal law should be tightened to ensure it addresses all seriously harmful content.
- However there are very serious dangers to freedom of expression if the online liability shield is removed altogether. At present it is news publishers, not the online platforms, which are liable for their content under defamation law. We, and other news publishers, vet our content carefully to minimise risk of defamation actions. However defamation law (and, increasingly, data protection law) is often used by the rich and powerful to intimidate publishers and stifle debate. Sometimes, when publishing investigations concerning wealthy and controversial subjects, news publishers knowingly take a degree of risk, believing it is in the public interest to do so and trusting they will be vindicated in the courts.
- If the liability shield is removed, the subjects of such investigations will have another target: the platform which is distributing the content. Unlike the news publisher, the platform will have no knowledge of the steps taken to validate the story, nor of the evidence which has been assembled but not published. Nor will a platform have any interest in seeing someone else’s journalism vindicated. Instead it will make a very superficial commercial decision: how to minimise the cost and resources that would go into fighting a legal challenge. The answer inevitably will be to take the story down, and probably to give an undertaking not to allow publication of similar allegations in the future. Worse still, once a platform has established that certain individuals are likely to sue, it will probably reset its algorithms to ensure no critical stories about them ever appear on its services again.
- This would be seriously chilling for investigative journalism and, if algorithms are reset, might mean that platforms would not surface critical news stories about certain individuals even if they are the result of events covered by absolute privilege, such as Parliamentary debates or court proceedings. We strongly believe news publishers must retain full responsibility for what they publish, even when it is distributed by the platforms. For that reason we do not believe the liability shield should be removed, at least as far as civil law is concerned.
To what extent should users be allowed anonymity online? (Q.6)
- There can be no doubt that one of the concerns driving online harms legislation is the readiness of some individuals to post false and abusive comments about others, hiding behind the anonymity allowed by the platforms to escape any consequences.
- This gives malevolent people an opportunity they have never had before – previously such individuals could shout abuse in the street, but it would be obvious who was doing the shouting long before they felt a policeman’s hand on their collar. In contrast, members of the public who are victims of the online equivalent often struggle to deal with it.
- Currently, if a member of the public wants to take action against content posted anonymously online, the first step is to notify the hosting platform and request they remove it. The likelihood of this happening depends very much on the nature of the content and whether it is clearly unlawful.
- If you want to take civil action against the individual it is extremely unlikely that the host platform will voluntarily hand over their details, unless you utilise section 5 of the Defamation Act which, for the reasons described above, is not straightforward. You must go to court and apply for what is known as a ‘Norwich Pharmacal’ order, by showing that you have a good arguable case that a wrong has been committed against you, and that you need information held by the host platform in order to seek redress against the wrong-doer. Platforms will provide the user’s details when served with the order. However, there is no guarantee that those details will be genuine. You could be provided with a false name and an IP address. You will then need to seek an order against the internet service provider (ISP) to get the details they hold. Ultimately, if the individual is using a virtual private network (VPN) or proxy server, even the ISP may not be able to identify them. The practical difficulties presented by anonymity are obvious. Of course, if the content is criminal, the police have much greater powers to identify the perpetrator.
- There is therefore a strong case for abolishing online anonymity. After all, one of the main constraints on responsible journalism is the knowledge, as an editor, that if you publish false allegations or unfounded abuse you will be held to account. Legitimate news publications have named editors, published business addresses and assets in the UK. They carry legal liability for everything they publish, and can be sued. They can also be subjected to public calumny, whether through criticism in other media, or more formally through debate in Parliament or other public forums, or being hauled before select committees. Why should the same not apply to individuals who hide behind anonymity to make people’s lives hell on the internet?
- There are, however, also strong arguments that anonymity helps to preserve freedom of expression online. Individuals may fear reprisals if they publish in their own name. This is obviously the case in repressive societies where criminal or political action may be taken – as for instance by the Saudi government against Jamal Khashoggi or the Chinese government against the citizen journalist Zhang Zhan, who revealed the true impact of Covid in Wuhan. However it is also becoming increasingly dangerous to express unfashionable opinions in democratic societies, as was discovered by Maya Forstater, who lost her job as a tax researcher, and her subsequent employment tribunal case, when she tweeted ‘men cannot change into women’, and by Eton teacher Will Knowland, who was sacked for refusing to remove a lecture on gender issues from YouTube.
- It is true that protections exist for whistleblowers, but these only apply where the disclosure is made to an appropriate person, i.e. their employer, a regulator, or legal advisors. It would not protect disclosures made publicly online.
- Stripping people who post material online of anonymity, or at least making it very much easier for those they might harm to discover their true identity, would force them to take legal and moral responsibility for what they post, in the way professional journalists have to. The committee might think this would go a long way towards removing abuse and harm from the internet, without all the threats to freedom of expression posed by a system of state-enforced commercial censorship, which is essentially what online harms legislation proposes.
- However if that was to happen the individual’s right to freedom of expression would need much greater protection under British law than it has at present. Unfortunately Article 10 is not as strong as the US First Amendment, nor is it respected in the same way by the government and the legal system. If online anonymity was to be removed, the right to freedom of expression would have to be guaranteed in a way that protects individuals not only against legal action, but from all the other potential consequences of expressing an opinion, such as loss of their job.
How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role? (Q.9)
- Currently there is no transparency of algorithms whatsoever. All publishers have experience of algorithms being changed, and sometimes causing very serious commercial damage, without warning, explanation or means of redress. We recorded in our submission to the Committee’s consultation on the Future of Journalism how in June 2019 Google introduced an algorithm change which cut MailOnline search visibility by 50pc, while improving that of most of our competitors. Three months later the change was reversed, also without warning or explanation.
- We suspect on that occasion Google’s motive may have been commercial – MailOnline had been particularly successful at utilising header bidding to promote non-Google ad demand, which delivers more revenue to publishers than that sourced through Google, which uses its market power to depress prices paid to publishers. The algorithm change coincided with the introduction by Google of its ‘Unified Pricing’ rules, which appear to have been intended to prevent header bidding.
- However there may also be political motives, whether conscious and overt, or the result of unconscious bias. Google search consistently promotes the content of left-leaning news publishers, such as the Guardian and the BBC, over conservative-leaning sites such as MailOnline. This was particularly notable during the Brexit debate. Brexit was a subject in which Mail readers were keenly interested, and MailOnline published a very large amount of content on it, which many on both sides of the debate credited with playing a key role in the outcome. Yet Google’s algorithms overwhelmingly preferred Guardian and BBC content.
- There has also been evidence during the Covid pandemic that Google and other platforms have been promoting the official line of governments and health authorities at the expense of other points of view. This goes further than search results. The most notable example was Google’s decision to remove TalkRadio from YouTube, as discussed in paragraphs 38-42, for ‘including COVID-19 content that explicitly contradict expert consensus from local health authorities or the World Health Organization (WHO).’ Within 24 hours the decision was reversed. But TalkRadio is owned by News UK, and Michael Gove raised questions about the threat to freedom of expression. How many smaller websites are likely to be silenced because they publish content which challenges official orthodoxy?
- Google is not alone in exploiting the secrecy of algorithms to skew them for reasons no one can question. We are aware other publishers have had problems with Facebook. We believe the apparent readiness of the platforms to use the secrecy of algorithms to covertly further their own commercial and/or political aims is toxic for democracy, as well as hugely damaging to the success of UK digital businesses.
- We have made this argument in some detail to the Competition and Markets Authority, and were pleased to see that it has been addressed in the Advice of the Digital Markets Taskforce, which recommends the Digital Markets Unit (DMU) should have powers to enforce transparency of algorithms, including by the imposition of interim measures if necessary.
- The Digital Markets Unit has yet to be set up, or to formulate its code of practice. Much more work has been done by the Australian Competition and Consumer Commission (ACCC). The ACCC’s main concern so far has been to institute a mandatory arbitration process to force platforms to pay for the news content they use. In doing so it has had to address algorithms, in order to prevent platforms using them to direct users away from Australian news publishers, which the platforms will have to pay, to English-language news publishers overseas, which the platforms could use without paying. Under the ACCC draft Code the platforms would have been required:
- to give 28 days’ notice of any algorithm changes, or other changes of policy or practice, likely to have a significant effect on the ranking of news publishers registered under the proposed mandatory payment for content scheme (unless there is an urgent public interest for the change, in which case notice must be given within 48 hours after it takes effect);
- to ensure the notice describes the change and the effect it is likely to have, and to inform the news publisher how to minimise any negative effect the change might have;
- and to ensure that, in relation to crawling, indexing, ranking, displaying or presenting content, there is no discrimination between news publishers registered to take part in the payment for content scheme, nor between registered publishers and those not registered to take part in the scheme.
- Regrettably, no doubt as a consequence of lobbying by the platforms, these recommendations have been diluted in the bill which has been laid before the Australian Parliament. The current version will still be a big improvement on the present situation in the UK, but we would strongly recommend the DMU models its code of practice in this area on the ACCC’s original recommendation.
To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation? (Q.11)
- This is another topic on which we have given extensive evidence to the CMA. At present Google and Facebook operate effective monopolies in the markets they dominate: search, digital advertising and social media. The news publishing industry is pluralistic, which is not only highly desirable to maintain freedom of expression, but a statutory requirement. There is therefore a complete imbalance of power between the platforms and news publishers. Contracts are imposed on a take-it-or-leave-it basis, and operating policies and practices are changed arbitrarily, often without any warning or explanation. Smaller publishers frequently complain they are unable even to speak to anyone at the platforms.
- The Digital Markets Taskforce has set out in detail how it proposes the Digital Markets Unit should impose a pro-competitive code of practice on the digital platforms. It has our full support.
Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration? (Q.14)
- There has been enormous progress since the launch of the Cairncross Review in March 2018. As a global digital publishing company, with editorial and commercial operations in the USA and Australia as well as the UK, DMG Media works with the competition authorities in all three jurisdictions, and has also worked with the EU Commission.
- Unsurprisingly, different governments have different priorities, and work to different schedules, but the digital platforms are global businesses, and create the same problems in every jurisdiction, so many of the solutions will be the same.
- It is widely recognised that the commercial problems facing news publishers are a serious threat to freedom of expression, because they threaten the funding of reliable journalism, and the work of the CMA on digital advertising is much admired around the world. We know from our own discussions that it has informed similar work being done by the Department of Justice and States Attorneys-General in the US and the ACCC in Australia.
- The US authorities are using legal measures to enforce competition remedies on the digital platforms, because that is how competition policy is enforced in the US. We will have to wait to see whether the Biden administration pursues this as vigorously as the Trump administration did, but on this issue there is no great divergence between the political parties, so we are optimistic. The ACCC is likely to recommend a regulatory approach more akin to the Digital Markets Unit later this year.
- The other major area of policy initiatives is making the internet a safer place for the public. In this, Britain’s online harms legislation is much further advanced than anything currently being considered elsewhere. However just before Christmas the EU Commission published two pieces of proposed legislation, the Digital Markets Act and the Digital Services Act. The former is intended to address the problems to be dealt with by the Digital Markets Unit in the UK; the latter is aimed at online harms. We have not yet had an opportunity to study these in detail. We are not at present aware of any similar proposals in Australia or the US.
- We cannot stress too strongly the dangers to freedom of expression inherent in the proposed online harms legislation. We fully accept there is some content online which is in breach of criminal law, and those responsible should be prosecuted. We also fully accept individuals should have access to civil remedies where online content has caused them personal damage, such as through libel or invasion of privacy, and support any measures to make those remedies more accessible to ordinary people.
- What is not compatible with freedom of expression is legislation which requires and empowers commercial monopolies to suppress content because it does not comply with fashionable thinking, or someone deems it offensive, or it challenges government policy. Our free press will no longer be free if it is subject to such a system of control, and for that reason it must be exempt, both with regard to the content it publishes on its own websites, and to that content when it is distributed by online platforms.
- But that is only part of the argument – freedom of expression is the right of the ordinary citizen, just as much as of the professional journalist. The Committee should consider very carefully whether the online harms apparatus proposed by the government is fundamentally compatible with freedom of expression. ‘Safeguards’ will make little difference if the whole system is built around the premise that subjects of perfectly legitimate interest cannot be discussed unless that discussion complies with norms dictated by a government body (Ofcom) or a commercial monopoly (an online platform).
- We would conclude by saying there is one piece of legislation which is as relevant to freedom of expression in the digital age as it was in the year it was adopted, 1791 - the First Amendment to the US constitution. It runs to just 45 words, which are worth quoting here:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
- We respectfully suggest that, whatever recommendations the Committee makes about freedom of expression online, it considers whether it is not time the Mother of Parliaments made a similarly unequivocal commitment to free speech.
 https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response para 22
 https://assets.publishing.service.gov.uk/media/5efb22fbd3bf7f768fdcdfae/Appendix_S_-_the_relationship_between_large_digital_platforms_and_publishers.pdf p.S6
 Ibid. para 23
 https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response Joint Ministerial Foreword
 https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice_--.pdf p.17, 50
 https://www.accc.gov.au/system/files/Exposure%20Draft%20Bill%20-%20TREASURY%20LAWS%20AMENDENT%20%28NEWS%20MEDIA%20AND%20DIGITAL%20PLATFORMS%20MANDATORY%20BARGAINING%20CODE%29%20BILL%202020.pdf p.11-13
 Ibid p.17
 https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice_--.pdf