POS0007

Written evidence submitted by Dr Mihaela Popa-Wyatt - The University of Manchester

All views contained within this submission are attributable to the author and do not necessarily reflect those of the University of Manchester. This evidence submission is supported by Policy@Manchester.

Preparedness for online safety regulation

Speech is a public act and therefore has public costs. Speech that is widely transmitted is, in many ways, closer to publication, a domain in which limits on speech are already accepted.

Current law attempts to regulate acts of hate speech through the criminal courts, but the volume of toxic speech online makes this infeasible: prosecution through the courts simply does not scale to the sheer quantity of harmful speech.

While there is a large body of speech that already breaches the criminal law, there is a much larger body of toxic speech that is clearly harmful to individuals or to society more broadly, and which should not be widely retransmitted. This includes utterances such as non-specific death threats, expressions of a desire that a person be raped or fall ill with cancer, and acts of collective bullying and intimidation. There are many well-known cases that document these harms.

My contention is that it is the scale (volume, transmissibility and accessibility) of toxic speech, rather than its mere existence, that is the problem online speech regulation needs to address.

Policy proposal to enhance preparedness and effectiveness of regulation

My research indicates that we should regulate the platforms rather than the individuals who speak, except in cases that clearly fall outside free speech protections, for example incitement to violence. Regulating the publisher of public speech is the direction in which the Online Safety Act has begun to move.

The core of the policy proposal is that the entity that profits from toxic speech is polluting the public space and should therefore pay for that pollution through a tax burden proportional to the extent of the transmission of toxic speech.

The simple aim is to incentivise platforms to reduce the extent of retransmission of toxic speech and to provide funds to "clean up" the consequences of that pollution. This is analogous to taxes on tobacco and alcohol. In particular, providers (online platforms, internet service providers) should be subject to a levy on revenue or profit proportional to the aggregate quantity of harmful speech they disseminate, following the model of the Digital Services Tax introduced in April 2020. This proposal does not replace the use of the criminal law, which should remain available for the prosecution of extreme acts, such as incitement to violence.
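As a purely illustrative sketch (the symbols below are assumptions introduced for exposition, not drawn from any existing statute or from the Digital Services Tax), such a levy might take the form

    T = τ · R · (H / V)

where T is the annual levy, R the provider's in-scope revenue or profit, H the audited volume of toxic speech disseminated (for example, impressions of posts graded toxic), V the total volume disseminated, and τ a rate set by the regulator. On this form, the levy rises with the share of a platform's output that is toxic, which is precisely the incentive the proposal aims to create.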

How to operationalise a tax-based solution

One analogy to draw on is the use of third-party auditors to certify company accounts. Another is the use of third parties to inspect and certify that a company meets a particular cross-industry standard. A specific operationalisation might be sketched as follows:

A standards organisation produces a standard for measuring the publication and reach of speech made online. Legislators pass laws requiring social media platforms to present, as part of their annual returns, measurements of the extent and grading of toxic speech published on their platforms. The measurement standards should separately cover each kind of toxic speech, such as hate speech, misinformation and disinformation.

Certified third-party auditors are paid by social media platforms to audit their output. The standards do not necessarily require the measurement to be a complete record of every speech act. Instead, they could require statistically sound methods for sample-based auditing, much as financial audits do not examine every single transaction. The standards should allow third-party auditors to operationalise measurement through automated methods: AI-based detection and grading of toxic speech that matches human assessments is now technically feasible. The certified audits would be presented to the tax authorities and would also be used by accountants to determine the tax burden in accordance with regulation.
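To make the sample-based element concrete, the following Python sketch estimates the platform-wide share of toxic posts from a random sample and reports a confidence interval. It is a minimal illustration under stated assumptions: the audit_sample function, the stub classifier and the 3% toxicity rate are hypothetical, standing in for a calibrated AI detector and a real sampling standard.

    import math
    import random

    def audit_sample(posts, classify, sample_size=1000, z=1.96):
        # Audit a uniform random sample rather than every item, much as
        # financial audits inspect a sample of transactions.
        sample = random.sample(posts, min(sample_size, len(posts)))
        # `classify` stands in for an AI-based detector, calibrated against
        # human assessments, that grades a post as toxic under the standard.
        toxic = sum(1 for post in sample if classify(post))
        p_hat = toxic / len(sample)
        # Half-width of a normal-approximation 95% confidence interval.
        margin = z * math.sqrt(p_hat * (1 - p_hat) / len(sample))
        return p_hat, margin

    # Illustrative run on synthetic data with a stub classifier.
    posts = ["example post"] * 100000
    p_hat, margin = audit_sample(posts, classify=lambda p: random.random() < 0.03)
    print(f"Estimated toxic share: {p_hat:.2%} +/- {margin:.2%}")

A real standard would additionally have to fix the sampling frame, any stratification (for example by reach), and the required calibration of the automated detector against human raters.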

An essential element in ensuring rigorous measurement is that third-party assessors would be hired and paid by the social media platforms, but regulated by, and legally responsible to, the tax authorities. Penalties for non-adherence to these measurement standards (the speech-measurement equivalent of accounting principles) would be similar to those applied to participation in tax fraud, for example. The regulator or tax authority would have the power to levy heavy fines on auditors who deviated from the measurement standards.

One question is whether this tax-based proposal is likely to hinder individual freedom of speech. The argument is that it does not, because platforms are not obliged to ban speech, merely to reduce its spread. It is the affective content of speech, rather than the opinions expressed, that so often causes the most harm; legitimate expression of opinion would therefore be explicitly excluded from any definition of toxic speech.

October 2023