Dr Jennifer Cobbe – written evidence (DAD0074)

 

  1. This submission, based on my research on the potential regulation of online content personalisation[1], responds to question 2 in the Committee’s call for evidence.

 

Background

 

  2. The past two decades have seen the emergence and growth of digital platforms[2], which position themselves at the centre of ‘multi-sided markets’[3] as intermediaries between individuals, businesses, organisations, governments, politicians, the public, and so on. In law, they are typically considered to be providers of information society services[4] (‘service providers’[5]). Social platforms derive their revenue primarily from advertising; they are, in effect, advertising companies. User behaviour is tracked and data is gathered and analysed so as to predict future user behaviours and interests, personalise services, facilitate behaviourally-targeted advertising, and grow user engagement, revenue, and market position. The design and deployment of algorithms on social platforms must be understood in the context of the business models of those platforms (variously characterised as being part of the ‘surveillance economy’[6] or the ‘attention economy’[7]).

 

  3. The function of algorithmic content delivery (whether through feeds of content, such as Facebook’s News Feed or Twitter’s timeline, or through recommendations or other forms of algorithmic delivery) is to keep users engaged with the platform. ‘Engagement’ can come in various forms – viewing content, uploading content, scrolling through feeds of content, and so on. Engagement is itself pursued to increase the value of advertising by growing and maintaining the audience for that advertising. Algorithmic personalisation therefore plays an indirect but key role in the development and maintenance of social platforms’ revenue streams.

 

  4. At a technical level, there are currently two main approaches to algorithmic personalisation, both using recommender systems: content-based filtering and collaborative filtering[8]. Both approaches often involve the use of machine learning, which produces statistical models trained on (usually large) datasets and can spot correlations and patterns from which to make predictions and draw inferences[9]. Content-based filtering systems recommend content based on similarity to content previously consumed by the user (for example, ‘picture X has a similar title to previously viewed pictures Y and Z’). Collaborative filtering systems recommend content based on what similar users have consumed (for example, ‘people A, B, and C like this; a similar person D might also like this’). Both involve filtering content so as to show to users only that which is determined by the platform to be relevant, appropriate, interesting, and so on. Some platforms make use of hybrid approaches, combining features of both methods of filtering[10].
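
To illustrate the distinction in concrete terms, the following sketch (in Python, with invented items, users, and deliberately simplistic similarity measures) shows the basic logic of each approach. It is not any platform’s actual system; production recommender systems learn these representations with machine-learned models over very large behavioural datasets.

```python
# A minimal, hypothetical sketch of the two filtering approaches described above.
# All items, users, and feature values are invented for illustration.

from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Content-based filtering: recommend items similar to those the user has viewed.
# Each item is described by a (hypothetical) feature vector, e.g. topic weights.
items = {
    "video_A": [1.0, 0.0, 0.2],
    "video_B": [0.9, 0.1, 0.3],
    "video_C": [0.0, 1.0, 0.8],
}

def content_based(user_history):
    scores = {}
    for item, features in items.items():
        if item in user_history:
            continue
        # Score each unseen item by its similarity to the user's viewing history.
        scores[item] = max(cosine_similarity(features, items[h]) for h in user_history)
    return sorted(scores, key=scores.get, reverse=True)

# Collaborative filtering: recommend what similar users have engaged with.
user_item = {
    "user_1": {"video_A", "video_B"},
    "user_2": {"video_C"},
    "target": {"video_A"},
}

def collaborative(target_user):
    target_items = user_item[target_user]
    scores = {}
    for other, their_items in user_item.items():
        if other == target_user:
            continue
        overlap = len(target_items & their_items)  # crude user-user similarity
        for item in their_items - target_items:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(content_based({"video_A"}))   # ['video_B', 'video_C']
print(collaborative("target"))      # ['video_B', 'video_C']
```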

 

  5. Algorithmic content personalisation using recommender systems plays a central role across the most popular websites and platforms on the internet[11], for the primary benefit of platforms themselves. For example, on Google, recommender systems are used for various purposes, including to personalise search results to show links that bring revenue to Google. On sites like YouTube, Facebook, Reddit, Twitter, or Instagram, recommender systems provide a personalised feed of content for each user so as to keep them engaging with that platform and drive ad revenue. Netflix uses recommender systems to present a personalised library of film and TV to users as well as recommendations for further viewing so as to keep users watching and subscribing. Amazon uses a recommender system to respond to predicted user desires in order to induce them to buy products from Amazon rather than elsewhere.

 

  6. While the use of recommender systems is often touted as a means of personalising services so as to benefit users, it is ultimately done to serve the interests of the platforms themselves – to encourage users to stay engaged with the platform in question (whether to consume content, to provide content, to make a purchase, and so on), to bring revenue, and to build market position. One study of the use of recommender systems shows that they are often intended to ‘hook’ people, leading to the conclusion that they are essentially employed as ‘traps’[12]. Other work produced by YouTube[13], Netflix[14], and by Yahoo and Etsy[15] confirms the view that algorithmic personalisation is fundamentally about engagement in pursuit of profit. As a result of this prioritisation, around 80 per cent of Netflix viewing hours come through its recommender systems and around 20 per cent from the search function[16]. Similarly, around 70 per cent of YouTube video views come from its recommender systems[17].

 

  7. Understood in this way, the use of algorithms by social platforms is not so much about personalisation as about showing people the content that the platform predicts will result in the greatest engagement. That is to say, rather than showing people what they want to see, algorithmic personalisation shows people what the platform wants them to see. The effect of widespread recommending of content by social platforms is that much online space is constructed by algorithmic systems. While, of course, television and the mass media have long contributed to collective understanding of the world, they were never as personalised, never as involved in mediating communications between individuals, and never so deeply embedded in constructing the everyday reality of millions of people.

 

How has the design of algorithms used by social media platforms shaped democratic debate?

 

  8. Social platforms’ content delivery algorithms are designed to prioritise the dissemination of content that increases engagement in pursuit of revenue and, ultimately, market position (indeed, algorithmic personalisation plays a major role in the dominance of platforms like Facebook[18]). The result of prioritising engagement in the design of content personalisation algorithms is that online spaces become distorted. Though these algorithms are content-neutral in theory – in that they typically don’t make any value judgement about the content itself – the prioritisation of engagement means that, in practice, they prioritise content that is shocking, controversial, or extreme. This content is therefore disseminated to a wider audience, while more mundane content is given less priority. A hypothetical analogy: if the offline world were constructed by algorithms prioritising engagement, and those algorithms realised that people are prone to rubber-necking, then everything would be car crashes.
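
By way of illustration only, the sketch below shows how an engagement-prioritised ranking can be ‘content-neutral’ in theory while still privileging whatever generates the most engagement in practice. The signals, weights, and figures are invented; no platform’s actual ranking function is represented.

```python
# A hypothetical sketch of a 'content-neutral', engagement-prioritised ranking.
# The score makes no judgement about what the content says; it only weights
# behavioural signals. Weights and figures are invented for illustration.

ENGAGEMENT_WEIGHTS = {"views": 1.0, "likes": 2.0, "shares": 3.0, "comments": 2.5}

def engagement_score(item):
    """Weighted sum of engagement signals; knows nothing about the content itself."""
    return sum(weight * item.get(signal, 0)
               for signal, weight in ENGAGEMENT_WEIGHTS.items())

candidate_items = [
    {"id": "mundane_local_news",  "views": 900,  "likes": 40,  "shares": 5,   "comments": 10},
    {"id": "shocking_conspiracy", "views": 1200, "likes": 300, "shares": 250, "comments": 400},
]

# Items predicted to generate the most engagement are shown first, regardless of
# whether that engagement is driven by outrage, shock, or curiosity.
ranked = sorted(candidate_items, key=engagement_score, reverse=True)
print([item["id"] for item in ranked])  # ['shocking_conspiracy', 'mundane_local_news']
```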

 

  9. When it comes to the impact of recommending on democratic debate, it is important to understand that, in many cases, content is not itself a problem. If a conspiracy theory video is seen by ten people, for example, then that is unlikely in and of itself to be a public policy issue. However, if the audience for that content is artificially amplified by disseminating it to millions of people, and if that video is systematically placed alongside other, similar content also promoting the same conspiracy theory, then that potentially is a public policy issue. Audience and context are therefore arguably more important issues than individual items of content themselves. Algorithmic content personalisation using recommender systems provides potentially problematic content with both of these things.

 

  10. Academic research[19] and journalistic investigations[20] have repeatedly produced evidence indicating that the prioritisation of engagement metrics (such as views, likes, and shares) over others has resulted in recommending playing a major role in the spread of material promoting violent extremism, disinformation, and conspiracy theories[21]. This can work in several ways. For instance, material of an extreme or controversial nature may be systematically promoted alongside other content of a similar nature, and recommended to users who might not otherwise have viewed it. Alternatively, content that debunks conspiracy theories or challenges extremism can become drowned out by content promoting those things. For example, earlier in 2019, a pro-vaccination charity was forced to leave YouTube because anti-vaccine conspiracy theory videos were repeatedly being promoted alongside its own content[22]. Indeed, it appears that seeding ‘counter-messages’ (content intended to counter disinformation, conspiracy theories, and so on) may merely result in the very content they are intended to ‘counter’ being recommended alongside them, thereby potentially increasing its exposure[23].

 

  11. There is some evidence that algorithmic personalisation on social platforms contributes to an increase in the polarisation and fragmentation of the content seen by users[24] (although this does not appear to be the case with personalisation on news media websites[25]). It is not clear to what extent this is a result of the operation of the platforms’ algorithms themselves or of natural human psychological biases. Evidence suggests that the polarising and fragmenting effect of social platforms may be a combination of the design of personalisation algorithms and these human biases. However, the extent of the influence of each factor is unclear, and there is also evidence of polarisation and fragmentation on platforms that do not use algorithmic personalisation. It is also unclear whether increasingly polarised content results in increased polarisation of users’ political views, or contributes to increased polarisation of public debate more generally.

 

  12. The metrics and rankings underpinning algorithmic personalisation can often be easily manipulated by automated accounts (i.e. bots). By inflating these metrics – views, likes, shares, and so on – networks of bots working together (‘botnets’) can artificially exaggerate content’s apparent ability to keep users engaged, thereby leading the recommender system to disseminate that content more widely. This is usually undertaken for commercial or political purposes[26]. The widespread use of bots in relation to political content, in particular, has been repeatedly observed across electoral cycles in multiple countries[27].
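
The following hypothetical sketch illustrates the mechanism: because a simple engagement-weighted ranking cannot distinguish organic interactions from automated ones, a botnet that inflates likes and shares can push its target ahead of genuinely popular content. All accounts, items, and figures are invented.

```python
# A hypothetical sketch of how inflated metrics can game an engagement-ranked feed.
# The ranking function cannot tell organic engagement from automated engagement;
# the 'botnet' simply adds fake likes and shares. All figures are invented.

WEIGHTS = {"likes": 2.0, "shares": 3.0}

def engagement_score(item):
    return sum(weight * item.get(signal, 0) for signal, weight in WEIGHTS.items())

organic_post  = {"id": "genuine_post",     "likes": 500, "shares": 200}
targeted_post = {"id": "promoted_by_bots", "likes": 150, "shares": 60}

def botnet_inflate(item, bots=400):
    """Each (hypothetical) bot account adds one like and one share to the target."""
    inflated = dict(item)
    inflated["likes"] += bots
    inflated["shares"] += bots
    return inflated

before = sorted([organic_post, targeted_post], key=engagement_score, reverse=True)
after = sorted([organic_post, botnet_inflate(targeted_post)], key=engagement_score, reverse=True)

print([item["id"] for item in before])  # ['genuine_post', 'promoted_by_bots']
print([item["id"] for item in after])   # ['promoted_by_bots', 'genuine_post']
```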

 

To what extent should there be greater accountability for the design of these algorithms?

 

  13. Several legal frameworks are relevant in this area. The E-Commerce Directive[28] provides qualified protection from liability for providers of information society services, including social platforms (note that the availability of liability protections for service providers who use algorithmic personalisation is contested[29]). Due to the significant quantity of personal data processed in algorithmic personalisation, the General Data Protection Regulation (GDPR) will also apply. The recent Platform-to-Business Regulation[30] will be relevant where platforms are involved in promoting goods and services to users. However, none of these frameworks places requirements on platforms for the design of their recommender systems in relation to the content they disseminate that would address the impact of algorithmic personalisation on democratic debate. The control over the flow of information provided by algorithmic personalisation not only plays a significant role in the profit-making practices of platforms, but also gives those platforms significant influence over online discussion[31]. These benefits for platforms should come with responsibilities and requirements.

 

  14. My work has proposed several principles that should underpin regulation for the responsible use of recommender systems by social platforms[32]. To briefly summarise, platforms should, at minimum, have responsibilities to reduce the quantity of algorithmically disseminated content that is considered to be harmful to public debate in some way (for example, far right extremism, conspiracy theories, and so on) but is not itself illegal. Down-ranking content in this way would not require social platforms to remove it. However, it could go a significant way towards reducing the impact of that material by making it less likely that it reaches a larger audience and is placed alongside other, similar content. In fulfilment of this responsibility, platforms should keep records of what material has been promoted and make information about algorithmically-disseminated content available so as to help inform users and facilitate oversight. Additionally, algorithmic personalisation by recommender systems should be opt-in, users should be able to exercise a minimum level of control over personalisation, and opting out again should be easy. Oversight should take a collaborative, dialogue-based form rather than a zero-tolerance approach, recognising that compliance is difficult due to the volume of information processed by service providers.
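
As a purely illustrative sketch of how these principles might translate into a system design, the code below (in Python, with hypothetical data structures, flags, and penalty values) combines down-ranking of lawful-but-harmful content, record-keeping of what was promoted, and opt-in personalisation with a non-personalised default. It is an assumption-laden outline of the principles, not a proposed implementation.

```python
# A purely illustrative sketch of the principles above: down-ranking rather than
# removal, record-keeping for oversight, and opt-in personalisation. The penalty
# factor, field names, and data structures are hypothetical assumptions.

import json
import time

DOWNRANK_FACTOR = 0.2  # hypothetical penalty applied to lawful-but-harmful content

def adjusted_score(item, flagged_harmful):
    """Reduce, rather than remove, the reach of content flagged as harmful but legal."""
    score = item["engagement_score"]
    return score * DOWNRANK_FACTOR if item["id"] in flagged_harmful else score

def build_feed(user, candidates, flagged_harmful, log_path="promotion_log.jsonl"):
    if not user.get("personalisation_opt_in", False):
        # Personalisation is opt-in: default to a non-personalised, chronological feed.
        return sorted(candidates, key=lambda i: i["published_at"], reverse=True)

    ranked = sorted(candidates,
                    key=lambda i: adjusted_score(i, flagged_harmful),
                    reverse=True)

    # Keep records of what was algorithmically promoted, to inform users and
    # to facilitate external oversight.
    with open(log_path, "a") as log:
        for position, item in enumerate(ranked):
            log.write(json.dumps({
                "time": time.time(),
                "user": user["id"],
                "item": item["id"],
                "position": position,
                "downranked": item["id"] in flagged_harmful,
            }) + "\n")
    return ranked

# Hypothetical usage:
feed = build_feed(
    user={"id": "user_42", "personalisation_opt_in": True},
    candidates=[
        {"id": "conspiracy_video", "engagement_score": 3550, "published_at": 2},
        {"id": "local_news_item",  "engagement_score": 1020, "published_at": 1},
    ],
    flagged_harmful={"conspiracy_video"},
)
print([item["id"] for item in feed])  # ['local_news_item', 'conspiracy_video']
```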


 


[1] Jennifer Cobbe and Jatinder Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ [https://ssrn.com/abstract=3371830].

[2] Nick Srnicek (2016) Platform Capitalism, Polity Press.

[3] Patrick Barwise and Leo Watkins (2018) ‘The evolution of digital dominance: how and why we got to GAFA’, In Martin Moore and Damian Tambini (eds.) Digital Dominance: The Power of Google, Amazon, Facebook, and Apple, Oxford University Press [http://lbsresearch.london.edu/914/]..

[4] Directive 98/34/EC of the European Parliament and of the Council of 22 June 1998 laying down a procedure for the provision of information in the field of technical standards and regulations (Official Journal L 204 , 21/07/1998 P. 0037 – 0048) (‘Technical Standards and Regulations Directive’), art.1 (as amended by Directive 98/48/EC of the European Parliament and of the Council of 20 July 1998 amending Directive 98/34/EC laying down a procedure for the provision of information in the field of technical standards and regulations (Official Journal L 217 , 05/08/1998 P. 0018 – 0026)); Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Official Journal L 178 , 17/07/2000 P. 0001 – 0016) (‘E-Commerce Directive’), recital 18.

[5] E-Commerce Directive, article 2(b).

[6] Shoshana Zuboff (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books.

[7] Tim Wu (2016) The Attention Merchants: The Epic Scramble to Get Inside Our Heads, Penguin Random House.

[8] Jesus Bobadilla, Fernando Ortega, Antonio Hernando, and Abraham Gutiérrez (2013) 'Recommender systems survey', Knowledge-Based Systems, 46, pp.109-132 [https://www.sciencedirect.com/science/article/abs/pii/S0950705113001044].

[9] For a legally-accessible discussion of machine learning, see David Lehr and Paul Ohm (2017) ‘Playing with the Data: What Legal Scholars Should Learn About Machine Learning’, 51 U.C. Davis Law Review.

[10] For example, Netflix (Carlos A Gomez-Uribe and Neil Hunt (2015) ‘The Netflix Recommender System: Algorithms, Business Value, and Innovation’, ACM Transactions on Management Information Systems, 6(4) [https://dl.acm.org/citation.cfm?id=2843948]).

[11] For an overview, see Jennifer Cobbe and Jatinder Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ [https://ssrn.com/abstract=3371830], pp.9-11.

[12] Nick Seaver (2018) ‘Captivating algorithms: recommender systems as traps’, Journal of Material Culture.

[13] YouTube Creator Blog (2012) YouTube Now: Why We Focus on Watch Time [https://youtube-creators.googleblog.com/2012/08/youtube-now-why-we-focus-on-watch-time.html].

[14] Gomez-Uribe and Hunt (2015) p.7.

[15] Qingyun Wu, Hongning Wang, Liangjie Hong, and Yue Shi (2017) ‘Returning is Believing: Optimizing Long-term User Engagement in Recommender Systems’, CIKM '17 Proceedings of the 2017 ACM on Conference on Information and Knowledge Management [https://dl.acm.org/citation.cfm?id=3133025].

[16] Carlos A Gomez-Uribe and Neil Hunt (2015) ‘The Netflix Recommender System: Algorithms, Business Value, and Innovation’, ACM Transactions on Management Information Systems, 6(4) [https://dl.acm.org/citation.cfm?id=2843948], p.5.

[17] Ashley Rodriguez (2018) ‘YouTube's recommendations drive 70% of what we watch’, Quartz [https://qz.com/1178125/youtubes-recommendations-drive-70-of-what-we-watch].

[18] Ulrich Dolata (2017) ‘Apple, Amazon, Google, Facebook, Microsoft: Market concentration - competition - innovation strategies’, Stuttgarter Beiträge zur Organisations- und Innovationsforschung, SOI Discussion Paper, No. 2017-01 [https://ideas.repec.org/p/zbw/stusoi/201701.html].

[19] For example, Zoë Beery (2019) 'How YouTube reactionaries are breaking the news media', Columbia Journalism Review [https://www.cjr.org/analysis/youtube-breaking-news.php]; Jessie Daniels (2018) 'The Algorithmic Rise of the "Alt-Right"', Contexts, 17(1) [https://journals.sagepub.com/doi/10.1177/1536504218766547]; Samantha Bradshaw and Phillip N Howard (2018) ‘Why Does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life’, Oxford Internet Institute / Knight Foundation [https://comprop.oii.ox.ac.uk/research/working-papers/why-does-junk-news-spread-so-quickly-across-social-media/]; Jonas Kaiser (2018) 'How YouTube helps to unite the Right', Alexander von Humboldt Institute for Internet and Society - Digital Society Blog [https://www.hiig.de/en/how-youtube-helps-to-unite-the-right]; Adrienne Massanari (2017) ‘#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures’, new media & society, 19(3), pp.329-346 [https://journals.sagepub.com/doi/abs/10.1177/1461444815608807]; Derek O'Callaghan, Derek Greene, Maura Conway, Joe Carthy, Pádraig Cunningham (2014) ‘Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems’, Social Science Computer Review, 33(4), pp.459-478 [https://journals.sagepub.com/doi/abs/10.1177/0894439314555329?journalCode=ssce].

[20] For example, Renee DiResta (2019) 'How Amazon's Algorithms Curated a Dystopian Bookstore', Wired [https://www.wired.com/story/amazon-and-the-spread-of-health-misinformation]; Paul Lewis (2018) ''Fiction is outperforming reality': how YouTube's algorithm distorts truth', The Guardian [https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth]; Caroline O'Donovan, Charlie Warzel, Logan McDonald, Brian Clifton, and Max Woolf (2019) 'We Followed YouTube's Recommendation Algorithm Down The Rabbit Hole', Buzzfeed News [https://www.buzzfeednews.com/article/carolineodonovan/down-youtubes-recommendation-rabbithole]; John C Paolillo (2018) ‘The Flat Earth Phenomenon on YouTube’, First Monday, 23(12) [https://firstmonday.org/ojs/index.php/fm/article/view/8251/7693]; Matt Reynolds (2019) 'Think Facebook has an anti-vaxxer problem? You should see Amazon', Wired [https://www.wired.co.uk/article/facebook-anti-vaccine-disinformation]; Kelly Weill (2018) 'How YouTube Built a Radicalization Machine for the Far-Right', The Daily Beast [https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate]; Julia Carrie Wong (2019) 'How Facebook and YouTube help spread anti-vaxxer propaganda', The Guardian [https://www.theguardian.com/media/2019/feb/01/facebook-youtube-anti-vaccination-misinformation-social-media].

[21] For a review of the literature, see Jennifer Cobbe and Jatinder Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ [https://ssrn.com/abstract=3371830], pp.20-28.

[22] Brandy Zadrozny (2019) 'Drowned out by the algorithm: Vaccination advocates struggle to be heard online', NBC News [https://www.nbcnews.com/tech/tech-news/drowned-out-algorithm-pro-vaccination-advocates-struggle-be-heard-online-n976321].

[23] Josephine B Schmitt, Diana Rieger, Olivia Rutkowski, and Julian Ernst (2018), ‘Counter-messages as Prevention or Promotion of Extremism?! The Potential Role of YouTube’, Journal of Communication, 68 [https://academic.oup.com/joc/article/68/4/780/5042003].

[24] Jianshu Weng, Ee Peng Lim, Jing Jiang, and Qi He (2010) 'Twitterrank: Finding topic-sensitive influential Twitterers', Proceedings of the Third ACM International Conference on Web Search & Data Mining: February 3-6, 2010, New York, pp. 261-270 [https://dl.acm.org/citation.cfm?id=1718520]; M D Conover, J Ratkiewicz, M Francisco, B Goncalves, A Flammini, and F Menczer (2011) 'Political Polarization on Twitter', Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media [https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewFile/2847/3275]; Antoine Boutet, Hyoungshick Kim, and Eiko Yoneki (2012) 'What’s in Twitter: I know what parties are popular and who you are supporting now!’, Proceedings of the 2012 International Conference on Advances in Social Networks Analysis and Mining, pp.132–129 [https://ieeexplore.ieee.org/document/6425772]; Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler (2017) 'Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election', Berkman Klein Center for Internet & Society Research Publication No. 2017-6, p.71 [https://cyber.harvard.edu/publications/2017/08/mediacloud]; Elanor Colleoni, Alessandro Rozza, and Adam Arvidsson (2014) 'Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data', Journal of Communication, 64, pp.317-332 [https://onlinelibrary.wiley.com/doi/10.1111/jcom.12084]; Eytan Bakshy, Solomon Messing, and Lada A Adamic (2015) ‘Exposure to ideologically diverse news and opinion on Facebook’, Science, 348(6239) [https://science.sciencemag.org/content/348/6239/1130]; Seth Flaxman, Sharad Goel, and Justin M Rao (2016) ‘Filter Bubbles, Echo Chambers, and Online News Consumption’, Public Opinion Quarterly, 80(S1) [https://academic.oup.com/poq/article/80/S1/298/2223402].

[25] Richard Fletcher and Rasmus Kleis Nielsen (2017) ‘Are News Audiences Increasingly Fragmented? A Cross-National Comparative Analysis of Cross-Platform News Audience Fragmentation and Duplication’, Journal of Communication, 67(4) [https://onlinelibrary.wiley.com/doi/abs/10.1111/jcom.12315]; see also Frederik Zuiderveen Borgesius, Damian Trilling, Judith Moeller, Balázs Bodó, Clas H. de Vreese, and Natalie Helberger (2015) ‘Should We Worry About Filter Bubbles?’, Internet Policy Review, 5(1a), pp.3-5 [https://policyreview.info/articles/analysis/should-we-worry-about-filter-bubbles]; Judith Möller, Damian Trilling, Natali Helberger, and Bram van Es (2018) ‘Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity’, Information, Communication & Society, 21(7), pp.959-977 [https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1444076]; Mario Haim, Andreas Graefe, and Hans-Bernd Brosius (2018) ‘Burst of the Filter Bubble?’, Digital Journalism, 6(3), pp.330-343 [https://www.tandfonline.com/doi/abs/10.1080/21670811.2017.1338145].

[26] Mark Leiser (2016) ‘AstroTurfing, 'CyberTurfing' and other online persuasion campaigns’, European Journal of Law and Technology, 7(1) [http://ejlt.org/article/view/501].

[27] Mark Leiser (2016) ‘AstroTurfing, 'CyberTurfing' and other online persuasion campaigns’, European Journal of Law and Technology, 7(1) [http://ejlt.org/article/view/501]; for a review of the literature, see Rose Marie Santini, Larissa Agostini, Carlos Eduardo Barros, Danilo Carvalho, Rafael Centeno de Rezende, Debora G Salles, Kenzo Seto, Camyla Terra, and Giulia Tuccy (2018) ‘Software Power as Soft Power: A literature review on computational propaganda and political process’, PArtecipazione e COnflitto, 11(2) [http://siba-ese.unisalento.it/index.php/paco/article/view/19546]; see also Samantha Bradshaw and Phillip N Howard (2017) ‘Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation’, Computational Propaganda Research Project Working paper no 2017.12, Oxford Internet Institute [https://comprop.oii.ox.ac.uk/research/troops-trolls-and-trouble-makers-a-global-inventory-of-organized-social-media-manipulation/]; Bence Kollyani, Philip N Howard, and Samuel C Woolley (2016) ‘Bots and Automation over Twitter during the U.S. Election’, Data Memo 2016.4. Oxford, UK: Project on Computational Propaganda [https://comprop.oii.ox.ac.uk/research/working-papers/bots-and-automation-over-twitter-during-the-u-s-election/]; Emilio Ferrara (2017) ‘Disinformation and Social Bot Operations in the Run Up to the 2017 French Presidential Election’, First Monday [https://firstmonday.org/ojs/index.php/fm/article/view/8005/6516]; Alessandro Bessi and Emilio Ferrara (2016) ‘Social bots distort the 2016 U.S. Presidential election online discussion’, First Monday, 21(11) [https://firstmonday.org/article/view/7090/5653]; Muhammad Nihal Hussein, Serpil Tokdemir, Nitin Agarwal, and Samer Al-Khateeb (2018) ‘Analyzing Disinformation and Crowd Manipulation Tactics on YouTube’, 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) [https://ieeexplore.ieee.org/document/8508766].

[28] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Official Journal L 178, 17/07/2000 P. 0001 – 0016) (‘E-Commerce Directive’).

[29] See, e.g., Jennifer Cobbe and Jatinder Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ [https://ssrn.com/abstract=3371830], pp.28-31.

[30] Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services (Official Journal L 186, 11/7/2019, P. 57–79).

[31] Zeynep Tufekci (2016), ‘As the Pirates Become CEOs: The Closing of the Open Internet’, Dædalus, the Journal of the American Academy of Arts & Sciences, 145(1), p.74 [https://www.mitpressjournals.org/doi/abs/10.1162/DAED_a_00366?journalCode=daed]; Natascha Just and Michael Latzer (2017) ‘Governance by algorithms: reality construction by algorithmic selection on the Internet’, Media, Culture & Society, 39(2), pp.238-258 [https://journals.sagepub.com/doi/abs/10.1177/0163443716643157?journalCode=mcsa]; James G Webster (2010) ‘User Information Regimes: How Social Media Shape Patterns of Consumption’, Northwestern University Law Review, 104(2); Zeynep Tufekci (2015) ‘Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency’, Colorado Technology Law Journal, 13, pp.207-208. [https://ctlj.colorado.edu/wp-content/uploads/2015/08/Tufekci-final.pdf]; Taina Bucher (2012) 'Want to be on top? Algorithmic power and the threat of invisibility on Facebook', New Media & Society, 14(7), pp.1164-1180 [https://journals.sagepub.com/doi/abs/10.1177/1461444812440159]; Taina Bucher (2017) 'The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms', Information, Communication & Society, 20(1), pp.30-44 [https://www.tandfonline.com/doi/abs/10.1080/1369118X.2016.1154086].

[32] For elaboration on these principles, see Jennifer Cobbe and Jatinder Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ [https://ssrn.com/abstract=3371830], pp.34-43.