Written evidence submitted by Twitter

 

 

DCMS Sub-Committee: Call for evidence on online safety and online harms

 

We believe deeply in, and advocate for, freedom of expression and open dialogue - but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up. With this in mind, we welcome the government’s focus on online safety, and this Committee’s work to consider the draft Online Safety Bill.

 

As debate around the world focuses on how to solve public policy challenges related to the technology industry, our approach to regulation and public policy issues is centered on protecting the Open Internet. We define the Open Internet as a global and singular internet that is open to all and promotes diversity, competition, and innovation.

 

We believe that the Open Internet has driven unprecedented economic, social and technological progress, and while not without significant challenges, it has also led to greater access to information and greater opportunities to speak that are now core to an open society.

 

We support smart regulation that is forward-thinking, understanding that a one-size-fits-all approach fails to consider the diversity of the online environment and poses a threat to innovation. We have welcomed the opportunity to participate in the Online Safety Bill consultation process over recent years.

 

Development of the draft Bill

 

Our view is that regulatory frameworks that look at system-wide processes, as opposed to individual pieces of content, will better reflect the diversity of our online environment and the challenges of scale that modern communications services involve - and we are therefore supportive of the government’s original stated commitment to this approach.

 

Similarly, we welcome Ofcom’s designation as the regulator for Online Harms. As we stated in our submission to the White Paper back in 2019, we think that Ofcom is the most appropriate and qualified body to be designated as the independent regulatory authority.

 

We do, however, think that the Bill in its present form fails to achieve a key objective: providing clarity to UK internet users - and providers - on what speech is and is not allowed online. What’s more, as Ellen Judson, senior researcher at Demos, has stated: “This Bill is a jigsaw: not only internally complex, but so much of what it means for the world relies on things that don't exist yet - secondary legislation, Codes of Practice - that can't even exist until the Bill is finalised, which makes scrutiny difficult.”

 

Fundamentally, the consequence of this approach is confusion for internet users about what speech will and will not be permitted, a significant lack of clarity for service providers, and delays to implementation while we await Ofcom or secondary legislation to fill in these critical details.

 

Safe platform design

 

We believe the Bill could more effectively incentivise safe platform design in two ways.

 

Firstly, safe design need not only involve how content is removed. Giving people more control over the content they see is important to strike a balance between content moderation and personal choice, particularly given that there are many areas of legal speech that some people find offensive or objectionable, but that also have a critical role in public debate. One long-term goal should be to empower people to have control over the algorithms they interact with - and ultimately the ability to choose between algorithms. For example, in 2018 Twitter introduced the ability to turn off our home timeline ranking algorithm, returning people to a reverse-chronological order of Tweets, as the illustrative sketch below shows.
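
To make this concrete, the short sketch below illustrates the design pattern in Python. It is a minimal illustration only - the Tweet fields and the relevance score are assumptions made for this example, not our production implementation:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Tweet:
        text: str
        created_at: datetime
        relevance_score: float  # stand-in for an algorithmic ranking signal (assumed)

    def build_timeline(tweets: list, use_ranking: bool) -> list:
        # With ranking on, order Tweets by the algorithm's relevance score;
        # with ranking off, fall back to a reverse-chronological timeline.
        if use_ranking:
            return sorted(tweets, key=lambda t: t.relevance_score, reverse=True)
        return sorted(tweets, key=lambda t: t.created_at, reverse=True)

The essential design choice is that the ordering logic is exposed as a user-facing setting rather than fixed by the service.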

 

Secondly, we were disappointed that the work on transparency appears limited to formal Transparency Reports. At Twitter, transparency is embodied in our open APIs, our information operations archive, and our disclosures in the Twitter Transparency Center. Over the past decade, tens of thousands of researchers have accessed Twitter data made available via our APIs. Most recently, we have offered a dedicated Covid-19 endpoint to empower public health research, and a new academic platform to encourage cutting-edge research using Twitter data. Our archive of state-linked information operations is a unique resource and offers experts, researchers and the public insight into these activities. This Bill is an opportunity to set out a clearer framework for such disclosures.
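
By way of illustration, the sketch below shows the kind of programmatic access researchers rely on: a query to the v2 recent-search endpoint. It is a minimal example that assumes a valid bearer token issued through our developer platform; pagination and error handling are omitted for brevity:

    import requests

    BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder - issued via developer.twitter.com

    def search_recent_tweets(query: str, max_results: int = 10) -> list:
        # Fetch recent public Tweets matching the query from the v2 API.
        url = "https://api.twitter.com/2/tweets/search/recent"
        headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
        params = {"query": query, "max_results": max_results}
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        return response.json().get("data", [])

    # Example: recent English-language Tweets about Covid-19, excluding Retweets.
    for tweet in search_recent_tweets("covid-19 lang:en -is:retweet"):
        print(tweet["id"], tweet["text"])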

 

This transparency is one of the reasons you hear more about research featuring Twitter as core to its methodology - we empower it. In the long term, we believe greater openness across the industry would be invaluable in delivering the transparency and accountability we all want to see. What’s more, the Bill’s transparency requirements appear limited to individual companies. Further openness from public bodies and government - such as CTIRU or the Cross-Whitehall Counter Disinformation Unit - about the requests they are sending to technology companies would help build our collective understanding of, and trust in, the overall ecosystem.

 

Omissions from the draft Bill

 

There are a number of areas of the Bill where we feel more detail would be beneficial.

 

Firstly, we need far more clarity on powers and enforcement penalties, such as ‘business disruption measures.’ People around the world have been blocked from accessing Twitter and other services by multiple governments under the false guise of ‘online safety,’ impeding people’s rights to access information online. The government should be mindful of setting a precedent - if the UK wants to lead the online debate globally, it must also set the highest standards of clarity, transparency and due process in its own legislation.

 

As the Carnegie UK Trust have set out: “To meet the UK’s international commitments on free speech, there should be a separation of powers between the Executive and a communications regulator. The draft Bill takes too many powers for the Secretary of State. These should be reduced, removing in particular the Secretary of State’s power to direct Ofcom to modify its codes of practice to bring them in line with government policy.”

 

Clear guardrails must be put in place, and full assessments of potential unintended consequences should be undertaken before regulatory action is pursued. This should also include consideration of the impact of decisions on smaller companies, and of the resources they require to comply with regulation.

 

Secondly, a properly resourced and prioritised digital literacy policy is also essential. While the government’s recently announced Media Literacy Strategy is very welcome, it does not go far enough - and there must be greater integration with the objectives of the Online Safety Bill.

 

Tensions in the draft Bill

 

Firstly, we would welcome further work on reconciling the objectives of the Online Safety Bill with the government’s aim of promoting fair competition in the technology sector. We support the government’s work to better coordinate digital policy, including through the Plan for Digital Regulation. Competition, however, is absolutely critical for our industry to thrive; we believe that the Open Internet is at risk of becoming less open as it becomes less competitive and people have less choice. Globally, we are urging regulators to factor into their decisions a test of whether proposed measures, such as those in the Online Safety Bill, further enhance the dominance of existing players, or set insurmountable compliance barriers and costs for smaller companies and new market entrants.

 

Secondly, the proposed exemption for ‘content of democratic importance’ in particular requires far greater clarity; and the proposed carveout for journalists, though well-intentioned, may have unintended consequences. As currently drafted, the Bill introduces uncertainty by including these categories of ‘protected’ speech without defining them in any detail. Leaving it to secondary legislation and Ofcom to resolve these issues means critical decisions about what speech is permitted and protected are not made by Parliament through primary legislation - undermining democratic oversight and accountability on key issues of free expression. For instance, would this create a loophole whereby people suspended from Twitter could challenge their suspension if they ran for election or established a political party?

 

Journalism is the lifeblood of Twitter - we believe in it, we advocate for it, and we seek to protect it. What’s more, we recognise that sometimes it may be in the public interest to allow people to view Tweets that would otherwise be taken down, and we have developed policies and processes accordingly. The challenge in translating this to regulation is the absence of a clear definition of what constitutes ‘journalistic content.’ Every day we see Tweets with screenshots of newspaper front pages, links to blogs, updates from journalists and firsthand accounts of developing events. Crucially, there are accounts we have suspended for breaking our rules whose holders have described themselves as ‘journalists.’ Similarly, we have previously seen examples where journalistic content has included visible links to terrorist material, such as that produced by ISIS. Indeed, after the Christchurch mosque shootings, a number of news organisations broadcast the attacker’s videos in full. Is the expectation that services should not remove this content? The lack of detail around these provisions risks significant confusion and potentially undermines the overall objectives of the Bill.

 

If Parliament wishes to establish a category of content based on democratic importance or journalistic content, it seems only right that Parliament should define what that is. Without such a definition, it risks confusion not just for news publishers and services like ours, but for the people using them. These are vital questions - and ones that cannot be avoided or passed to a regulator, or to private services, to resolve - not least because of potential ramifications beyond this legislation, and broader issues affecting the freedom of the press.

 

Next steps

 

As the Bill proceeds with pre-legislative scrutiny, we are calling for:

 

        Clarity in the Bill on exactly what ‘legal but harmful’ content is expected to be removed or protected. This should include clear, robust definitions of what constitutes ‘content of democratic importance’ and ‘journalistic content’ that provide the public, service providers and the regulator with clear guidance on Parliament’s intent;

        A clear framework to ensure that the competitiveness of the online ecosystem is not damaged and that barriers to entry for new services are not insurmountable;

        Consideration of a wider range of platform design interventions to deliver online safety, such as greater control over, and choice between, algorithms, and the importance of open standards;

        A sustained role for the public to engage in the development of this regulation, especially young people - including through social media itself. Public feedback processes we have run, for instance, have provided vital input on our policies from a broad range of stakeholders - a standalone user’s perspective expressed in a Tweet is just as valid as a formal submission. This year, nearly 49,000 people from around the globe took time to share their feedback on how content from world leaders, for instance, should be handled on our service.

 

We have already shared with government concerns about both the expertise required for, and the technical feasibility of, some of these proposals. This will be even more important as the Bill is finalised and the regulation comes into force.

 

We believe it is critical during the pre-legislative process that independent technology specialists can advise on the technical feasibility of proposed solutions. Reconciling well-intentioned objectives with both practical challenges and policy tradeoffs is especially important within the realms of technology law and policy, as we have seen with previous legislation. Leveraging strong technical expertise within the process can help resolve some of these tensions more rapidly, avoid implementation problems further down the line, and protect against vendors over-selling the potential and feasibility of their products and services while embedding proprietary standards and tools.