House of Lords Select Committee on Communications and Digital
inquiry into Freedom of Expression Online
Committee questions we address:
We argue that, today, the protection of freedom of speech has less to do with designing the right kind of online platform to promote good digital citizenship than with a government’s willingness to tax such platforms, for reasons we explain below.
To begin, freedom of speech only becomes a problem when speech reaches someone who doesn’t want to hear it. What matters most at that point are the power dynamics that either block or amplify the speech as it tries to reach its unreceptive audience. Yet online platforms are mostly reluctant to directly promote or eliminate that speech, since doing so results in bad publicity.
Enter the recommendation algorithm, which can create personalized “filter bubbles” that shield our eyes and ears from speech we don’t want to hear and feed us the stuff we do want to hear. But the algorithm is also designed to make us spend more time on the platform, which means that in order to do its job, it needs to detect trends and suggest an endless array of new content for us to consume. Recently, platforms have discovered that people spend more time online when algorithms suggest content that features voices they do want to hear talking about things they don’t want to hear. In other words, baiting users under the guise of freedom of speech can be monetized, because it results in users spending more time online being outraged by what they don’t want to hear.
The recommendation algorithm, then, has incredible power to both constrict and advance freedom of speech, power its creators have bestowed on it to wield as necessary in order to maximize profits (e.g., more time spent on the platform). To “fix” this situation (assuming it’s something corporations would want to fix), it has been proposed that we should tweak the algorithm: make it more prudent about what it promotes or demotes, more judicious about creating and bursting those filter bubbles. In our opinion, however, as long as the quest for profit remains the driving force in this scenario, tweaking the algorithms will not help us. If, as the proposed Online Harms Bill states, the goal is “to make the UK the safest place in the world to go online,” then the government needs to recognize that the biggest threat to that safety is the corporations that design the platforms and algorithms.
More socially responsible corporate taxation can help change that. According to a report by Fortune magazine (December 6, 2019), Amazon, Apple, Facebook, Google, Microsoft, and Netflix used various tax avoidance strategies (sanctioned by governments) to pay $155.3 billion less than they would otherwise have owed in taxes between 2010 and 2019, across all their global markets.
What does this have to do with the protection of online speech? Two things.
First, lower taxes mean higher profits, which translate into more powerful companies facing less competition. With less competition, these platforms can act as stronger monopsonies (single “buyers” of what we produce), and their algorithms can have a larger impact on freedom of speech. A company like Facebook can be used to set the tone for scandals or elections, as we have seen.
More broadly, lower taxes mean fewer opportunities for the wealth that citizens generate to benefit them in any way. Put simply, governments need to stop favoring corporations over people by making sure the former pay the taxes they owe, especially in the case of online platforms, where users and their data constitute the “raw materials” that generate wealth for the corporations.
Which brings us at last to education, and the role it can play in encouraging digital citizenship. Yes, tweaking platform algorithms can make some difference in protecting online freedom of speech, as can new governance models and regulations for platforms. But spending money collected from taxes on education, reversing decades of cuts, can make an even bigger difference. If governments complain that there’s no money for public education, they should tax corporations. After all, education is about forming citizens who know what to do when confronted with speech they don’t want to hear: assess, evaluate, rethink, change one’s mind, or develop stronger convictions. No algorithm can possibly teach that. If the UK really wants to know “how the right to freedom of expression should be protected online,” it should look at increasing funding for education by taxing Big Tech.
Ulises A. Mejias (State University of New York at Oswego) and Nick Couldry (London School of Economics) are authors of The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford University Press, 2019).