Professor David Beer, Professor of Sociology, University of York – written evidence (FEO0120)

 

House of Lords Communications and Digital Select Committee inquiry into Freedom of Expression Online

 

 

How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

How can platform design improve digital citizenship?

 

Note: my responses to these two questions are closely related, so I have addressed them together in what follows.

 

The design of platforms and the cultures within those platforms (including their norms, expectations and ideals) have a close relationship (discussed in Beer, 2013). Improvements in digital citizenship are unlikely to be achieved solely through alterations to platform design; they will also require an understanding of how that design facilitates or shapes platform cultures. Any attempt to reduce online harms and improve digital citizenship will probably need to tackle both the design of these platforms and the cultures within them, and it requires an understanding of those cultures in order to see how alterations in design might redirect them. This would mean programmes that help to build a stronger sense of shared and collective endeavour, alongside finding the means to alter design in ways that shape participation in more positive directions. A first step is to reflect on how the cultures within these platforms are connected with their design and infrastructures. I think there are four things to consider here.

 

First, these platforms often create a competitive environment in which there is a push to perform well within the logic of social media (Van Dijck & Poell, 2013; Beer, 2016): to get attention, to get heard, to be amplified through shares and likes, to do well in the visible metrics, to have a lifestyle image that fits, and to draw replies, reactions and so on. The platforms are designed to promote such competition between users. In this ‘like economy’ (Gerlitz & Helmond, 2013) the platforms encourage people to judge and compare themselves and others. This then plays out in behaviours within these spaces, with attention at a premium. It also creates a warped sense of the self and of others. The use of filters, for instance, enables users to create altered versions of reality that others then consume (Tiggemann & Anderberg, 2019). Improving digital citizenship may require altering the platforms so that competition is no longer at the centre of social interaction. This could be done by curbing the ranking of content and metric-based judgments (Grosser, 2014; Beer, 2016), but it might also require education programmes that inform users that they are seeing a narrow or particular version of the lives and views of other users.

 

Second, for many, the lack of a voice in social media can be stifling. New media have long had a problem with voice (see Couldry, 2008). A strong sense of not being heard can result from a combination of algorithmic filtering (Bucher, 2012) and the lack of a network through which a user’s voice can circulate. These seemingly social spaces can create a strong sense of isolation and powerlessness (discussed in broad terms in Dean, 2009). We hear a lot about social media giving people a voice, but there are hierarchies within social media, and some of the more negative aspects of online interaction are likely to result from people feeling voiceless and so engaging in behaviours that will get them noticed or heard. One way of preventing this may be platform design that feels fairer in how it creates voice. This may require the network structure to change, and a stronger sense of how people can contribute to the discussion of topics and issues might be beneficial. A topic-based focus could help people to feel part of a dialogue rather than pushing them towards damaging actions in order to generate a sense of voice or a feeling of being heard. Other mechanisms may also be required to help people manage their sense of voicelessness within social media. Digital citizenship may require a clearer and more critical understanding of what it is to have a voice within platforms, and of who gets heard and how. Support for those who feel voiceless should be part of the way platforms operate, and platforms should also help people to understand what it is to be heard within social media. Above all, this might prevent wider social inequalities from being repeated or even exaggerated on platforms (Noble, 2018), although addressing this also requires more direct forms of action, particularly around the use of data and algorithms.

 

Third, these platforms are created to secure maximum participation by users. That is to say, the aim is to increase the user base whilst also keeping users engaged with the platform for as much time as possible (Chen et al., 2018). A key change required to improve digital citizenship will be to find ways of getting platforms to shift away from this preoccupation. Hooking people into the platform and aiming for maximum stickiness is itself a problem with platform design. Platforms will need to change their objectives, or be pushed to change them. The focus on making platforms sticky has an impact: it makes them intensive spaces into which ever more aspects of life are incorporated, and they then become hard to leave. Improving digital citizenship may require platforms to introduce measures that encourage users to have a balanced engagement with the platform. A simple change here, for instance, would be for platforms to curb their desire to hook the user into the platform and instead to put in place mechanisms that encourage users to take a break.

 

Fourth, and finally, these platforms are often designed to encourage users to share intimate aspects of their everyday lives (see, for example, Miguel, 2016). This is partly because such data is very valuable. It is not always of benefit to the user, and it locks them into constant judgments about which aspects of their lives to share. The platforms should instead be designed to help people make informed choices about what they post about themselves, rather than pushing them to reveal as much as possible. Guidance could be built into the platform to support those informed choices. These platforms could also be designed to notice certain types of detail that people are posting and to double-check whether they wish to post them (offering guidance on what the consequences might be).

 

I have not explored the use of data in this response, as I understand this is being covered elsewhere. It is also a crucial aspect, as it directly shapes how people interact with platforms.

 

References

 

Beer, D. (2013) Popular Culture and New Media: The Politics of Circulation. Basingstoke: Palgrave Macmillan.

 

Beer, D. (2016) Metric Power. Basingstoke: Palgrave Macmillan.

 

Bucher, T. (2012) ‘Want to be on the top? Algorithmic power and the threat of invisibility on Facebook’, New Media & Society 14(7): 1164-1180.

 

Chen, Y., Mao, Z. & Qiu, J.L. (2018) Super-Sticky WeChat and Chinese Society. Bingley: Emerald.

 

Couldry, N. (2008) ‘Media and the problem of voice’, in Carpentier, N. & de Cleen, B. (eds.) Participation and Media Production: Critical Reflections on Content Creation. Newcastle: Cambridge Scholars Publishing, pp. 15-26.

 

Dean, J. (2009) Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics. Durham, NC: Duke University Press.

 

Gerlitz, C. & Helmond, A. (2013) ‘The like economy: Social buttons and the data-intensive web’, New Media & Society 15(8): 1348-1365.

 

Grosser, B. (2014) ‘What do metrics want? How quantification prescribes social interaction on Facebook’, Computational Culture. http://computationalculture.net/what-do-metrics-want/

 

Miguel, C. (2016) ‘Visual intimacy on social media: From selfies to the co-construction of intimacies through shared pictures’, Social Media + Society 2(2): 1-10.

 

Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

 

Tiggemann, M. & Anderberg, I. (2019) ‘Social media is not real: The effect of “Instagram vs reality” images on women’s social comparison and body image’, New Media & Society 22(12): 2183-2199.

 

Van Dijck, J. & Poell, T. (2013) ‘Understanding social media logic’, Media and Communication 1(1): 2-14.

 

 

May 2021
