Dr Claire Hardaker – written evidence (DAD0035)

 

1. How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect?

(1)       It is difficult to envision how one might rigorously and objectively assess whether digital technology has brought greater benefits or drawbacks to UK democracy, but the digitisation of democracy has certainly had profound impacts across multiple dimensions, including the formal political process, informal diplomatic soft power, media actions and reactions, and perhaps most fundamentally of all, the social fabric of inclusion and debate at the level of the ordinary individual. For some, these changes will have brought benefits, given them a voice, and shone light on necessary topics; for others, the same changes will have had just the opposite effect.

(2)       One concrete example: social media platforms have given ordinary individuals the capacity to conduct their own political campaigns (cf. Led by Donkeys, the People’s Vote campaign, and so forth) with extraordinary levels of success. Through a successful Twitter and online campaign, for instance, Led by Donkeys was able to bring attention to a topic it considered crucial, namely the honesty of politicians, but on which the media was otherwise unwilling to expend much focus, given the general weary acceptance of, and even boredom with, stories about politicians misleading the public. By reproducing politicians’ past statements as giant billboard-sized tweets that conspicuously contradicted their present positions, and then posting images of these billboards online, Led by Donkeys was able to amass a substantial following, leverage crowd-sourced funding, and, in the space of six months, rapidly accelerate public debate on issues surrounding honesty, integrity, and the general coherence of the planning and preparation around Brexit.

(3)       However, such grassroots campaigns are also apt to be spoofed via “astroturfing” – that is, a purportedly grassroots campaign that is in fact funded, organised, or otherwise managed by an interested domestic organisation such as a political party. And of course, such movements can be engineered from abroad, as evidenced in the Report on the Investigation into Russian Interference in the 2016 Presidential Election by Robert S. Mueller III, thus:

The IRA organized and promoted political rallies inside the United States while posing as U.S. grassroots activists. First, the IRA used one of its preexisting social media personas (Facebook groups and Twitter accounts, for example) to announce and promote the event. The IRA then sent a large number of direct messages to followers of its social media account asking them to attend the event. From those who responded with interest in attending, the IRA then sought a U.S. person to serve as the event's coordinator. […] The Office identified dozens of U.S. rallies organized by the IRA. The earliest evidence of a rally was a "confederate rally" in November 2015. (Mueller 2019: v1, p29)

(4)       As is often the case, the technology itself is morally agnostic, and will be as beneficial or as mendacious as the individuals who use it.

2. How have the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?

(5)       Social media algorithms are, rightly, a serious matter. They learn from our preferences, create echo chambers that provide us with more content that we like, filter out content we dislike, and in so doing, they arguably “radicalise” us across a range of topics and attitudes, whether the content in question is benign or appalling, from amusing animal memes to videos of gratuitous violence. The net result is that our online environment reinforces an increasingly narrow view of the world, which in turn makes us less culturally, socially, and ideologically literate.
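To make this feedback loop concrete, the following minimal Python sketch simulates a purely hypothetical engagement-driven recommender; it is not any platform’s actual code, and the topics, weights, and update rule are invented for illustration. The recommender serves content in proportion to what it estimates the user likes, and reinforces whatever the user engages with:

    import math
    import random

    TOPICS = ["pets", "politics", "sport", "science", "memes"]

    def entropy(weights):
        """Shannon entropy of the topic distribution: a crude measure of feed diversity."""
        total = sum(weights.values())
        probs = [w / total for w in weights.values() if w > 0]
        return -sum(p * math.log2(p) for p in probs)

    # The platform's estimate of the user's tastes; starts uniform.
    estimate = {t: 1.0 for t in TOPICS}
    # The user's true, fixed tastes: a mild preference for one topic.
    true_taste = {"pets": 0.5, "politics": 0.2, "sport": 0.15,
                  "science": 0.1, "memes": 0.05}

    random.seed(1)
    for step in range(1, 501):
        # Serve a topic in proportion to the current estimate (pure exploitation).
        topic = random.choices(TOPICS, weights=[estimate[t] for t in TOPICS])[0]
        # The user engages with probability given by their true taste.
        if random.random() < true_taste[topic]:
            estimate[topic] *= 1.1  # reinforce: show more of what was liked
        if step % 100 == 0:
            print(f"step {step}: feed diversity (entropy) = {entropy(estimate):.2f}")

Even with only a mild initial preference, the diversity of the simulated feed collapses over time: the loop of serving, engaging, and reinforcing is the echo chamber mechanism in miniature.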

(6)       Algorithm design should be entirely transparent for many reasons. Algorithms are written by humans (who are biased), they are trained on data produced by humans (which is full of biases), they provide results that humans interpret (with bias), and their “services” (that are full of bias) are then provided to humans to consume. Those humans will then go on to reproduce the bias to which they are exposed. Thus algorithms that attempt to curate content for preference are fundamentally reinforcing and amplifying pre-existing biases.

(7)       Moreover, as mentioned above about technology in general, algorithms are morally agnostic, and the consequences of their use will be imbued with the intentions of their user. Just as one could use such algorithms to analyse social inequality, target medical resources, and support vulnerable populations, organisations such as Cambridge Analytica/SCL and AIQ can use the same algorithms to inundate groups with low educational attainment, people with serious medical concerns, and other vulnerable populations with fear-based propaganda and disinformation. In short, transparency of the algorithm itself is of relatively trivial benefit. It would be far more illuminating for users to be able to switch off or turn down algorithms on social media platforms, or to be able to switch on a counter-view in which content they would not typically see is shown to them, and, in general, for any subsequent off-platform uses of algorithm-derived results to be made publicly available for proper scrutiny.
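As a purely illustrative sketch of what “turning down” an algorithm might mean in practice, a user-controlled dial could blend the personalised ranking with a neutral (e.g. reverse-chronological) ordering. Everything below (the function names, the scoring model, and the data) is hypothetical:

    def rank_feed(items, personal_score, personalisation):
        """Rank items by a user-controlled blend of personalised and neutral order.

        personalisation = 1.0 reproduces a fully algorithmic feed;
        personalisation = 0.0 is a plain reverse-chronological feed.
        personal_score is a stand-in for a per-item relevance model.
        """
        if not 0.0 <= personalisation <= 1.0:
            raise ValueError("personalisation must be between 0 and 1")
        # Normalise recency to [0, 1] so it is commensurate with the model score.
        newest = max(item["timestamp"] for item in items)
        oldest = min(item["timestamp"] for item in items)
        span = (newest - oldest) or 1
        def blended(item):
            recency = (item["timestamp"] - oldest) / span
            return (personalisation * personal_score(item)
                    + (1 - personalisation) * recency)
        return sorted(items, key=blended, reverse=True)

    # With the dial at zero, only recency matters, whatever the model "prefers".
    items = [{"id": 1, "timestamp": 100}, {"id": 2, "timestamp": 200}]
    print([i["id"] for i in rank_feed(items, lambda i: 0.9 if i["id"] == 1 else 0.1, 0.0)])
    # -> [2, 1]

A “counter-view” switch could be expressed in the same frame, for instance by ranking on (1 - personal_score), so that the feed surfaces precisely the content the model would normally filter out.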

3. What role should every stage of education play in helping to create a healthy, active, digitally literate democracy?

(8)       There are two crucial notes here before I turn to a possible answer. Firstly, education is unlikely to have a substantive impact on malicious misinformation (that is, disinformation) unless that education also happens to address the systemic issues or problems that encouraged the individual to turn to disinformation in the first place.

(9)       Secondly, education is only likely to help with the naïve spread of misinformation in limited cases. To use a well-worn phrase, it is difficult to reason a person out of a position they were never reasoned into in the first place. Misinformation is primarily a product of, and a parasite upon, cognitive, cultural, and emotional biases. It provides views, attitudes, and ideologies that appeal not to reason or logic, but to fears and aspirations. Fighting misinformation with education is somewhat akin to fighting a fire with a thoughtful description of water. Education will help where the individual is already open-minded, but this subsection of the population is likely disappointingly small. Just as social media algorithms serve up the content the user most prefers, and deviation from this will likely breed discontent, so too does misinformation serve up a version of the world most palatable to that individual’s perspectives and beliefs. Not only do we resist being argued out of our preconceived beliefs and values, attempts to force us out of them can even make us cling to them more vehemently than we did in the first place, even in the face of concrete evidence. In other words, for some, education – particularly if it is enforced or semi-enforced – will even have a negative impact on information literacy. For many who have already been exposed to disinformation for years, information literacy via education is, in crude terms, a process of deradicalisation. For such people, sources telling them that they are wrong will likely be viewed as in league with The Enemy (whatever that enemy may be – Big Pharma, the Deep State, the BBC, etc.), and overcoming that cognitive catch-22 is an especially difficult, delicate process. I refer the interested reader here to Brandolini’s Law, which summarises this whole paragraph in a much more amusing single sentence. In short, there is no single answer to this complex problem, but a comprehensive approach across multiple dimensions is likely to yield the most effective results, as follows.

(10)    Children and young adults: In a perfect world, we would always choose prevention over cure, and whilst I welcome the fact that online safety was introduced across the 2014 National Curriculum, it is now past time to introduce information literacy – not merely for academic purposes, but for the purposes of being a well-adjusted social citizen. For best effect, such lessons need to be appropriately supported, sufficiently detailed and thorough, and adopted across all schools (private, public, and free). These lessons should also not be restricted purely to Computing, but should be adopted across a number of classes, including PSHEE and Citizenship. However, for this to be successful, teachers themselves need to be well-trained in the safety measures, in the tell-tale signs of misinformation and radicalisation, and in appropriate strategies of intervention. Teachers also need the support of parents and caregivers, and this leads into a second area of improvement.

(11)    Adults, including parents, caregivers, and teachers: School-based education could be substantially underpinned by compulsory information literacy training in all PGCEs. NSPCC research shows that far too many teachers currently do not feel confident when it comes to advising pupils on safe social network practices, so it stands to reason that they will feel equally uncertain around information literacy. Meanwhile, parents would benefit from proactive, face-to-face training in online misinformation, in the signs of radicalisation, and, more broadly, in ways to keep children safe online (technologically and behaviourally).

(12)    Site tools, responsibilities, and processes: We generally expect social networks to offer the tools and support to allow us to exclude damaging and harmful content, but disinformation falls into a grey area between potentially harmful content and legitimate freedom of expression. Sites also differ widely in the availability, sophistication, and transparency of their policies on disinformation, algorithms, and political processes. At the most basic level, social networks should:

(a)       allow users to control their own environment (e.g. switching algorithms on or off or turning them down)

(b)       proactively use their own management tools (e.g. IP blocks)

(c)       draw from a centralised database of trust that provides a score for the veracity of various online sources, such that, for instance, trusted sources could be marked in shades of green (darker for better), yellows and oranges could be used for biased sites, and shades of red could indicate known sources of misinformation (a minimal sketch of such a mapping follows this list)
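To illustrate point (c): assuming a hypothetical centralised trust database that returns a veracity score between 0 and 1, the mapping from score to colour band might look like the following Python sketch. The thresholds, labels, and source names are invented for demonstration only:

    def trust_colour(score):
        """Map a veracity score in [0, 1] to a display colour band.

        The score itself would come from the (hypothetical) centralised
        database of trust; the thresholds here are illustrative only.
        """
        if not 0.0 <= score <= 1.0:
            raise ValueError("veracity score must be between 0 and 1")
        if score >= 0.9:
            return "dark green"   # highly trusted source
        if score >= 0.7:
            return "light green"  # generally trusted source
        if score >= 0.5:
            return "yellow"       # noticeably biased source
        if score >= 0.3:
            return "orange"       # heavily biased source
        return "red"              # known source of misinformation

    # Example: annotate links in a feed before they are displayed.
    for source, score in [("example-news.co.uk", 0.93), ("partisan-blog.net", 0.41)]:
        print(f"{source}: {trust_colour(score)}")

The hard problem, of course, is not the colour mapping but who maintains the database and how its scores are audited; any such scheme would itself need the transparency argued for in para. 6.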

(13)    Legislation and guidance: Where misinformation and online activities, such as campaign financing, breach electoral, civil, or criminal law, there should be necessary and sufficient legislation to deal with the matter. The guidelines on these should be informed by:

(a)       the speed with which content can be reproduced;

(b)       the breadth and vulnerability of audience that can be reached;

(c)       the inability of targets to distinguish malicious misinformation from verified news;

(d)       the corrosive effect that such activities have on the faith in the democratic process.

4. Would greater transparency in the online spending and campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like?

(14)    Yes. This is not my area of expertise, however, so I will leave it to others to tackle.

5. What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising?

(15)    Political advertising can be extremely nefarious, especially where it is based on psychologically-profiled, hyper-targeted groups whose digital footprint has revealed a particular vulnerability to certain trigger ideas or events. In many respects, this is little different from misinformation, in that the sender intends to manipulate the behaviours of the recipient, and does so via emotional rather than rational means. Political advertising, therefore, should be far more comprehensively regulated, particularly with respect to fear- and intimidation-based methods of “selling” or promoting certain parties and individuals.

6. To what extent does increasing use of encrypted messaging and private groups present a challenge to the democratic process?

(16)    As mentioned multiple times now, technology is morally agnostic and is no better or worse than the people who use it. Such services allow safe discussion to take place amongst otherwise marginalised groups, and they also allow criminals to plan awful acts of violence. The issue is not the medium. It is the person using it, and whatever has brought them to that place in their life where they feel that they need to use an encrypted service for a deeply abhorrent purpose.

7. What are the positive or negative effects of anonymity on online democratic discourse?

(17)    It is crucial to note that anonymity can result in both benign and toxic disinhibition. For the purposes of this report, and because a later question addresses it, I will focus on the latter, but the benefits of the former absolutely must not be overlooked.

(18)    The internet offers a perceived anonymity that has no real parallel offline, and this appearance of invisibility encourages a phenomenon known as the Gyges Effect: the user feels that they can do unpleasant things with a greatly reduced risk of suffering any consequences. In turn, this sense of invulnerability encourages disinhibition: the user thinks that they are safe from repercussions, so behaves in a mildly objectionable way, and each time there is no negative consequence, their behaviour may gradually escalate. By itself, however, anonymity does not explain why the internet seems to bring out the worst in some, since many of us are online and anonymous all the time, yet would never think to behave in this way. A second issue to consider, then, is detachment.

(19)    Linked to disinhibition, detachment is the way that the internet allows users to shut down their empathy, and in particular, their compassion. We can, in fact, choose not to empathise at will, especially when dealing with someone we dislike. The internet, however, drastically increases this ability, and allows users to emotionally distance themselves, not just from people they dislike but also from people they do not even know, in several ways:

(20)    Dehumanising: Because we lose many indications of emotional response (eye contact, tone of voice, facial expressions), it is easier to "forget" that we are communicating with another human. Instead, we have only the words and/or images sent back and forth.

(21)    Devaluing: The above means that we can downplay any emotional reaction. If a recipient claims to be offended or hurt, because that reaction is unseen, it is easier to believe that they are lying or exaggerating. It is also easier to quickly forget the incident, whilst it may linger in the recipient’s mind for days.

(22)    Victim-blaming: It can be easier for an abusive online user to shift the responsibility for what they are doing away from themselves, and to blame their victim for what is happening because, for example, the victim is a famous politician.

(23)    Self-vindicating: It can also be easier to diminish the severity of these behaviours. Because we each inhabit our own heads, it seems obvious to us just how seriously we intended a certain statement, so when abusive online users are confronted, we tend to hear excuses like, "Stuff said on the internet isn't real. I was just joking. I obviously didn't really mean that I was going to kill her!" Our innate bias means that we are more likely to adopt such a self-supporting viewpoint than to judge our own behaviour as reprehensible.

8. To what extent does social media negatively shape public debate, either through encouraging polarisation or through abuse deterring individuals from engaging in public life?

(24)    See the response to the above question. Note, however, that social media is not the same as its abusive users. Social media itself, as noted in the earliest questions, can be fundamentally democratic and a powerful voice for the voiceless. It can also be a weapon of mass disruption for the nefarious. This answer therefore focuses primarily on abusive users, and not so centrally on the platform or device that they use. Those platforms do have their role to play, yes, but they are not the sole arbiters of public debate.

(25)    Online abuse absolutely has an impact in deterring individuals, especially those from non-white, non-male, or non-heterosexual backgrounds, from entering public life. The consequences of even a minor online debate can reach every corner of one’s life, from family and friends to schools and employers, as malicious users attempt to find or fabricate damaging content and proliferate it across the internet at large.

(26)    Social media can negatively shape public debate in several further ways: through the saturation of claims into the common consciousness by sheer repetition (the proverbial tiger in the market square); by allowing a single controversy to suck the air out of the room and crowd out other debate; through the disenfranchising effects of crisis exhaustion and angry boredom, when democracy requires a shared reality; through the pejoration of a characteristic (e.g. "socialism") which is then assigned ever more broadly to ever less relevant groups; through attacks against people and figures rather than policies, à la the courtroom; and through polarisation, which leads to a hardening of positions and a removal of opportunities for compromise.

9. To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy?

(27)    As per para. 1, the first part of this question is difficult to answer objectively, and subjective experience is prone to being skewed by the news cycle and by personal biases. Taking these caveats into account, however, the evidence suggests that there are indeed strong efforts globally to roll out substantial information operations programmes aimed at undermining democratic processes both within and across borders. Combatting this is not simple. I refer the reader back to paras. 8 and 9. Education and literacy may help certain sections of society, and countries where digital literacy is generally lower, but disinformation and its unfortunate offspring, misinformation, are driven and perpetuated by other motives and goals, e.g. a desire to maintain or increase power, emotional investment in certain ideologies, and so forth. Reactively fighting such forces will always be difficult. By contrast, proactively strengthening faith in democratic processes can begin at home, and immediately, in both the larger events and the smaller day-to-day matters. This would be achieved through politicians exhibiting principled behaviour, stately rhetoric, calm and reasoned responses, and so forth. Especially where money enters politics, an absolute commitment to transparency and an unyielding intolerance of corruption would be extremely welcome. Research suggests that, with the advent of the internet, the overwhelming immediacy of, and access to, information has led to a general decline in trust, whether in bankers because of subprime mortgages, in politicians because of expenses scandals, or in the media because of phone hacking. One can roll out short-term strategies to deal with this, but ultimately, the only long-term protection against distrust is trustworthy and honourable behaviour: a strong aversion to unnecessary secrecy, a marked disinclination for covering up failures, a willingness to acknowledge wrongdoings, and so forth. Though difficult, as we would teach our children, these are the ways forward that inspire trust, and at the same time, such actions paint a realistic picture of what any one individual might aspire to be.

10. What might be the best ways of reducing the effects of misinformation on social media platforms?

(28)    See the answer to Q3 (paras. 8 and 9) above.

11. How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish?

(29)    There is no easy answer to this question. Automated means will produce false positives (banning those who are innocent) and false negatives (allowing those who are problematic). Manual means are simply unfeasible at scale, and are no less prone to false positives and negatives anyway. One step forward might be to bring social media platforms into the remit of an organisation like Ofcom, or to create legislative frameworks that oblige platforms who wish to operate in this country to submit to a certain degree of transparency. Additionally, prompting such platforms to make available to researchers the data they have collected from problematic accounts would greatly facilitate external, independent research into the means to identify and tackle such problems. Similarly, encouraging an atmosphere of trust and fostering partnerships between academics, civil servants, the Government, and social media platforms would lead toward longer-term solutions and better understandings of the problems faced on all sides. There appears to be a tendency, at the moment, to score political points by painting all social media platforms as negligent or even criminally complicit. Given the work done by some platforms, this seems both disingenuous and counterproductive.
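The false positive/false negative trade-off can be made concrete with a toy moderation classifier, sketched below in Python; the scores and labels are invented purely for illustration, and no real platform’s system is being described. Lowering the removal threshold catches more abuse but suspends more innocent users, and no threshold eliminates both error types at once:

    # Each post carries a model-assigned "abuse probability" and a
    # ground-truth label (True = genuinely abusive). All values invented.
    posts = [
        (0.95, True), (0.90, True), (0.80, False),  # sarcasm misread as abuse
        (0.75, True), (0.60, False), (0.55, True),  # abuse phrased politely
        (0.40, False), (0.30, False), (0.20, True), # coded, dog-whistle abuse
        (0.10, False),
    ]

    def moderate(threshold):
        """Count both error types if every post scoring >= threshold is removed."""
        false_pos = sum(1 for score, abusive in posts
                        if score >= threshold and not abusive)
        false_neg = sum(1 for score, abusive in posts
                        if score < threshold and abusive)
        return false_pos, false_neg

    for threshold in (0.9, 0.7, 0.5, 0.3):
        fp, fn = moderate(threshold)
        print(f"threshold {threshold}: {fp} innocent posts removed, "
              f"{fn} abusive posts kept up")

Moving the threshold merely trades one harm for the other, and human review inherits the same dilemma, only more slowly and expensively. This is one reason why transparency about where platforms set such thresholds, and why, matters more than any particular setting.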

12. How could the Government better support the positive work of civil society organisations using technology to facilitate engagement with democratic processes?

(30)    Not my area of expertise.

13. How can elected representatives use technology to engage with the public in local and national decision making? What can Parliament and Government do to better use technology to support democratic engagement and ensure the efficacy of the democratic process?

(31)    Not my area of expertise.

14. What positive examples are there of technology being used to enhance democracy?

(32)    The Government’s online petitions site is an especially good example.

Conclusion

(33)    The full potential of the internet is yet to be realised. If we are to fully enjoy the benefits, and reduce misinformation and abusive behaviour, my belief is that we must approach these issues comprehensively and holistically. I hope the above may offer some ways in which to achieve this.

 
