Written evidence submitted by EFL, Kick It Out, The Football Association, The Premier League (OSB0007)

 

INTRODUCTION

 

These challenging times have underlined the importance of, and passion for, football across our nation. We witnessed the highly anticipated return of the Premier League and EFL in the summer of 2020, and this was followed by the opportunity for fans to return to stadiums with the recent UEFA EURO 2020 Championships. The summer tournament brought a renewed sense of optimism and joy to the UK after a difficult 18 months, with the country rallying behind the players and the England team reaching their first final in a major tournament for decades. This sense of hope and cheer was brought to a shuddering halt after the final whistle, when three young England players – who had bravely stepped up for their country – were subjected to sickening racist and discriminatory abuse online. Sadly, for our sport, this was nothing new.

 

Football, from the grassroots to the elite, is impacted by vile online hate and discrimination. The victims are not only football players, but also their families, referees, coaches, administrators, pundits, fans and others in the game. The abuse is not virtual: it is real. It is directed at real people who are real victims. We are shocked and saddened at the impact this has on each and every victim, their families - and indeed wider society.

 

Some commentators suggest that football players and others in the game should simply shut their social media accounts if they don’t like the abuse. We reject this argument wholeheartedly. These individuals are victims and should not be hounded off the great communication media of our times because of the colour of their skin, their gender, their sexual orientation or any other protected characteristic. Putting the onus on victims to address the problem is wrong. Social media companies must create a safe space for all their users.

 

Similarly, the key solution proposed by the social media companies thus far has been to educate football players in how to use the blocking tools on their platforms. In our view, this is akin to a pub landlord asking his or her customers to cover their ears while someone is yelling vile abuse at them, rather than asking the abuser to leave the premises and banning them from entering again.

 

Sadly, this is reflective of the lack of progress made in tackling online discriminatory abuse in football. If the social media companies are unwilling or unable to get their house in order, we believe that the time has come for the Government to regulate the industry. The football authorities very much welcome the Online Safety Bill and are keen to work hand-in-glove with the Government and parliamentarians on its contents.

 

ONLINE DISCRIMINATORY ABUSE IN FOOTBALL

 

As the nation’s number one team sport with 18 million fans, 14 million participants and over 100,000 grassroots teams, football has an incredible power to bring people together, pull down barriers and act as a force for good.

 

However, the football authorities have been concerned for some time now about the rising levels of online discriminatory abuse being directed at footballers and others in the game. The language used is debasing, and often threatening and illegal. Similarly, emojis and memes are used to peddle legal but harmful messages. These written and pictorial messages cause distress to the recipients and the vast majority of people who abhor discrimination of any kind.

 

 

The football authorities have been in discussions with Twitter, Facebook and other social media companies for several years now. However, sadly, there is no real sign of significant proactive change that addresses the problem. It is proving very difficult to ensure that social media companies prevent or take down offensive content before it is seen, that online abusers are prevented from deleting and re-registering accounts, or that authorities have sufficient information and evidence to take prosecutions forward. Individuals can abuse others online anonymously, which means that online hate and discriminatory abuse have no real-world consequences for perpetrators or social media companies.

 

In February 2021, the football authorities sent an open letter to Twitter and Facebook, imploring them to bring a halt to online discriminatory abuse, and asking them to implement four steps for change:

 

1. Messages and posts should be filtered and blocked before being sent or posted if they contain racist or discriminatory material;

2. They should operate robust, transparent, and swift measures to take down abusive material if it does get into circulation;

3. All users should be subject to an improved verification process that allows for accurate identification of the person behind the account (with that information shared only if required by law enforcement). Steps should also be taken to stop a user who has sent abuse previously from re-registering an account; and

4. Their platforms should actively and expeditiously assist the investigating authorities in identifying the originators of illegal discriminatory material.

 

Unfortunately, little progress has been made.

 

In order to highlight our cause and push for change, the football authorities – alongside a host of other sports, organisations and individuals – carried out a boycott of social media over the first May Bank Holiday weekend in 2021:

https://www.thefa.com/news/2021/may/12/update-on-social-media-boycott-20210512

 

 

THE WIDER ROLE BEING TAKEN ON BY THE FOOTBALL AUTHORITIES

 

Aside from the actions and interventions outlined above, the football authorities have also implemented a number of other measures and commitments to ensure that we are doing what we can, in a coordinated way, to tackle this issue.

 

These measures have included:

 

 

A system for monitoring and reporting online discriminatory abuse, which was expanded across the professional game in 2021. This system has enabled the football authorities to work with players, managers and their families who receive discriminatory abuse, and to share this information and evidence with the police, the Crown Prosecution Service (in the UK) and other relevant authorities internationally to take legal action wherever possible. This is not the responsibility of football, but it is a role that inadequate protection has forced us to adopt.

 

Football has collectively pulled together to drive action and has been proactive about striving to tackle online abuse. But, fundamentally, only a robust Online Safety Bill and genuine commitment from social media companies will eradicate this problem.

 

LACK OF ACTION

 

Social media companies have vast technological skills and financial resources at their disposal. It is our contention that they have the capacity to resolve the issue of online discriminatory abuse if they genuinely want to do so.

 

Indeed, they have used their technology and resources to address challenges in comparable areas. For example, they have invested for many years in machine learning technologies to enable them to identify and remove material infringing copyright so that it is taken down in minutes. The difference between protecting customers’ copyright and protecting users from discrimination is that there is money involved in copyright protection. There is no money involved in tackling discrimination.

 

We believe that this is simply a question of priority - not resources or capability. Unfortunately, that prioritisation has come at the expense of creating a safe space for some users, particularly those who might be vulnerable or who should be protected from discrimination under the Equality Act 2010.

 

Social media companies benefit from an exemption from liability for content on their platforms, and we believe that this is the root cause of the problem of online harms. This originated with the Communications Decency Act 1996 in the USA, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. This position was actively created to support an industry in its infancy, and there are similar provisions in other countries. The impact is that social media companies are, in effect, not liable for anything published via their media - unlike mainstream and traditional media. The landscape has now changed and the industry has matured, but the legislation and the protections have not. Given the volume of users and the mechanisms in place to interact with strangers, the duty to protect users should now be paramount.

 

We note that Facebook now has 2.8 billion active users, Instagram has 1.4 billion users and Twitter has 400 million users,[1] and Facebook’s average advertising revenue per user in 2020 was over $32.[2] This is clearly no longer a fledgling industry. As stated above, given the turnover and revenues involved, these organisations have the financial and human resources to proactively tackle discriminatory abuse if they choose to do so.


Disappointingly, social media companies have not taken up the mantle, and the result has been that the policing of online hate is largely left to local law enforcement agencies, who lack the resources to monitor and investigate offences effectively. This has created a lawless battleground of anonymous and faceless hate, exacerbated by the scarcity of successful prosecutions - far too few to act as a deterrent. It has also fed a culture of impunity, emboldening “keyboard warriors” who can say and do whatever they like, confident in the knowledge that the prospects of any real-world consequences are remote.

 

This is not freedom of speech - this is online anarchy. In the real world, freedom of speech has never been an absolute right; it has always been qualified by, for example, criminal and libel laws. However, in the online context, hate speech can currently be expressed anonymously, with limited ability to identify perpetrators and hold them accountable for their actions.

 

This lack of accountability and lack of effective sanctions creates a dangerous culture where discriminatory and hateful abuse has become normalised behaviour and moves from the online world to the real world. As we emerge from lockdowns and tentatively return to our previous activities, we have already heard anecdotal evidence this football season of an increase in such abuse at grassroots games, as well as some incidents of homophobic chanting and booing of anti-racist protests at professional matches.

 

USER IDENTIFICATION

 

We are particularly keen to address some of the points made by the social media companies in relation to anonymity and pseudonymity. Ultimately, the football authorities want perpetrators of online discrimination and abuse to be identified so that appropriate enforcement action can be taken, and appropriate sanctions imposed. There are a number of ways in which this could be done, including identity verification, layering of user permissions, cooling-off periods, escalating identification requirements and the use of default settings. However, the strength of these mechanisms lies not in exploring them in isolation, but in combining them. Nothing should be off the table, and the focus should be on the clear objective of discouraging and sanctioning discriminatory abuse and creating an online environment that is safe for all.

 

We believe that it is necessary to address two counterarguments head-on:

 

1. Social media companies will often present the user identification issue as a stark binary choice between identifying everyone or identifying nobody. In reality, there is a huge variety of options between those two extremes. There could be a sliding scale of identity verification, linked to the level of access granted to platform features and to interactions with unconnected users. For example, anecdotally, we understand that a lot of abuse comes from new, “burner” or “respawn” accounts – and so the permissions granted to such accounts could be limited until identification information is provided, or until the expiry of a “cooling off” period during which interactions are subject to heightened monitoring for abuse.

 

In addition, social media platforms claim that their community standards are more stringent than the requirements of the criminal law. If that is so, those standards will be breached at a lower threshold of misbehaviour than the criminal law requires. That lower threshold could be used as a trigger, when there is an attempt to create a discriminatory post, to require identification information to be entered and to limit user permissions. We are clear that the verification data should only be shared with others if required by law enforcement to identify the person behind an account, so that there is a level of protection in place for the minority of users who need to maintain some anonymity online. We believe that the current system favours the protection of the minority of users who require anonymity or pseudonymity over the protection of the vast majority. The consequence is that abusers are afforded more protection than victims. We need to ask ourselves which is more important: the abuser’s right to freedom of speech, or the victim’s right to freedom from harassment and discrimination.

 

2. We are aware of a recent data set from Twitter which suggested that 99% of abusive account owners are identifiable, and that user identification would therefore not have helped. We are sceptical of this data set and would need to see more details. The data related to a single professional game in the 2020-21 football season – a season that included over 2,000 professional games. We believe that policy should be built on a more complete and robust data set than one game. The data also conflicts with data gathered by other sources across football and, indeed, with data shared by law enforcement. What this inconsistency highlights is the pressing need for robust and dynamic data transparency that goes beyond surface incident data to root causes, in order to enable more relevant, granular solutions. This should be a key function of Ofcom.

 

 

THE ONLINE SAFETY BILL

 

The football authorities welcome the Online Safety Bill and would like to see the legislation expedited and on the statute books as soon as possible. We are keen to work hand-in-glove with the Government and parliamentarians on this important legislation in the coming months.

 

We believe that the success of the legislation rests on Ofcom having sufficient resources, technological understanding and powers to hold the social media companies to account. This appears to be supported in the current draft of the Bill, which provides Ofcom with an adequate statutory basis to exercise its regulatory duties, grants it appropriate powers to obtain funding, and gives it the ability to enforce sanctions. We are calling for all these points to be maintained in the Bill as it progresses. We would also like Ofcom to be given the powers to require social media platforms to mitigate the sharing, spreading and amplification of harmful content through design features.

 

RECOMMENDED CHANGES TO THE ONLINE SAFETY BILL

 

The football authorities believe that the draft Online Safety Bill can be strengthened in certain areas, and we are keen to work with the Government and parliamentarians to ensure these proposals are thoroughly considered:

 

1. Protection for groups identified as at risk under existing legislation:

 

Under the auspices of the Equality Act 2010, Parliament has afforded certain groups statutory protection from discrimination. The Act defines a set of personal characteristics and creates protections in certain activities – for instance, in work, education, as a consumer or when using public services. We are calling on the Government to ensure that the same protection is afforded to these groups online. In other words, the Online Safety Bill should give effect to Parliament’s evident intent in the Equality Act 2010.

 

2. Ofcom should be given powers on ‘legal but harmful’ content:

 

Most online abuse falls into the grey area of legal but harmful – for example, the content might not be squarely racist, but it is still sufficiently triggering to be harmful and damaging. Where discriminatory abuse online is illegal, it is under-reported and under-enforced. At present, the Online Safety Bill appears to give Ofcom enforcement powers on illegal harms - but on legal harms (i.e. "content that is harmful to adults") there is no duty of care and no tools for Ofcom to intervene. Rather, companies are expected to write Terms and Conditions which are unenforced by a regulator (s.46) and which we have seen are repeatedly insufficient - as is currently the case with the community guidelines that many social media platforms already have in place. A reasonable threshold of harm should also be set at a level no less protective than the definition currently used by Ofcom under the Communications Act 2003. In setting this threshold, a useful test would be to review the abuse that followed the Euro 2020 Final - noting the collective condemnation by the UK public - and to ask honestly whether the Bill as currently drafted would have captured each and every piece of that abuse. The public reaction to the abuse experienced by the players suggests that the Bill should encompass these instances and that Ofcom should have enforcement powers to intervene.

 

3. The reach of anonymous accounts should be part of the Codes of Practice:

 

One of our most significant concerns with social media is the lack of real-world consequences for online hate. “Keyboard warriors” can say what they want, largely with impunity. We believe that more needs to be done in the Online Safety Bill to help identify perpetrators.

 

Social media companies can verify their users’ identities, and indeed choose to do so when verifying the authenticity of the accounts of public figures. Some form of verification could be required for all accounts, with verification data only shared if required by law enforcement. Verification should not be a binary on/off choice: it could and should operate on a sliding scale, with accounts providing the minimum level of verification given the lowest level of engagement and reach by default.

 

Limiting the reach of those providing partial verification data could be done by making it a requirement under Risk Assessment (s.7) and again under “Codes of practice” (s.29). Alternatively, it could be included in Ofcom's power to "identify characteristics of different kinds of regulated services that are relevant to such risks of harm and assess the impact of those kinds of characteristics on such risks” (s.61). Here, Ofcom could provide advice on how regulated services should seek to tackle the harm caused by anonymous accounts, including ensuring that repeat offenders are unable to access and utilise platform services by registering additional accounts.

 

4. Discrimination and hate speech should be the subject of specific codes of practice:

 

Discrimination and hate speech should be the subject of specific codes of practice (as is the case for child sexual exploitation and terrorism content) – in order to signify the seriousness of this abuse, and to clarify the standards and best practice that social media platforms will be expected to adopt (s.29). Codes of practice should also clearly set out the minimum standards of moderation that relate to the nature, size and reach of the operator.

 

5. Discrimination and hate speech should be ‘priority illegal content’:

 

Discrimination and hate speech should be categorised as “priority illegal content” in the Online Safety Bill in order to put an increased obligation on service providers to take positive actions to minimise the presence of such content on their platforms (s.5(2) and s.9).

 

6. Obligation on providers to manage the risks of content which is harmful to adults:

 

We believe that there should be a specific obligation on providers to specify in their terms of service how they will mitigate and manage the risks of content which is harmful to adults (including racism and/or hate speech) - mirroring the obligation in the current draft Bill in relation to services and children (s.5(5) and s.11). This obligation should also require operators to optimise the use of their algorithms so that the system itself is managed, not just the content it produces. It should also specifically address how operators deal with the amplification of abuse, not just individual abusive messages. The legislation must recognise and ensure that protections can be extended as the risk of harm increases, acknowledging that those with large public profiles may be more vulnerable to abuse.

 

7. The Secretary of State should have enforcement powers:

 

We believe that there should be a statutory power to enable the Secretary of State for Digital, Culture, Media and Sport to clearly specify content that is harmful to adults in secondary legislation (by way of an open list, rather than closed, and including discriminatory abuse and hate speech), as well as mandatory minimum steps that should be taken to "deal with" such harmful content (s.5(5) and s.11).

 

8. Comments on news publishers’ platforms should be included within the scope of the Bill:

 

Exemptions in relation to news publisher-operated social media platforms and comments made in response to newspaper articles create a loophole whereby discrimination and hate speech are left out of the scope of the Bill - and leave an online environment where discriminatory comments and emojis can continue to be published, normalised and shared. We believe that this exemption should be removed, at least in relation to discrimination and hate speech (s.39).

 

9. Transparency reporting requirements should be defined by the Bill:

 

We would like increased certainty in the Online Safety Bill in relation to the transparency reports that services will be required to submit and/or publish. We note that, by comparison, the Modern Slavery Act (MSA) sets out six suggested categories of information, and the past five years have shown that even those proposed (but not mandatory) categories have been insufficient to encourage robust reporting. The Online Safety Bill should be specific in setting out compulsory minimum levels and categories of information, in a way that is consistent with the planned reforms to the MSA (s.49). In addition, the Bill should define and give Ofcom the power to share reporting information with third parties where there is a perceived or implied risk to a group of users, in order to enable effective policy change. Reporting needs to drive more relevant and granular policy interventions. For example, how effective would a football banning order for online abuse be if transparency reporting showed that 70% of abusers are overseas and never attend English football games? We are clear that this would require a different policy intervention.

 

10. Social media companies should be required to assist the authorities with their criminal investigations:

 

Part of the safety duties in relation to illegal content (s.9) should be to ensure that illegal content is always passed expeditiously to the appropriate authorities. This should set the parameters by which the provider co-operates with the Government, regulatory bodies and law enforcement and inform the level of reporting required in the transparency reporting (s.49).

 

STRENGTHENING THE PROSECUTION SYSTEM

 

Finally, whilst we believe that the Online Safety Bill can support an effective framework to ensure that social media companies create a safe space, we are concerned that the lack of effective prosecution also needs to be addressed. There are gaps in the existing UK prosecutorial system that need to be plugged.

 

Discriminatory material which is posted online should be thoroughly and expeditiously dealt with by the appropriate authorities.

 

To do this, we recommend that the Home Office consider including football as a specific priority in a well-resourced Hate Crime Unit. Local police forces across the country and the Crown Prosecution Service would work hand-in-hand with football, the public authorities and social media companies to provide a proactive, joined-up approach to addressing online discrimination on the basis of any of the protected characteristics specified under the Equality Act 2010.

 

This would send a strong message about the intentions and direction of public policy, resource allocation and priorities in addressing this important issue - with the national game helping to change attitudes and behaviours across the whole of society.

 

We hope that this is an area of work that the Committee will also recommend the Government should pursue.

 

 

September 2021

 


[1] https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/

[2] https://www.statista.com/statistics/234056/facebooks-average-advertising-revenue-per-user/