1. Full Fact has almost a decade of experience fact checking claims made by politicians, public figures and in the press, and we have focused our efforts on those that pose the most potential harm to the public. 
  2. One of the main risks we see is to democracy: as misinformation spreads rapidly online, and as people find it harder to tell what’s true from what’s not, it is more important than ever that there are sources of reliable, accurate information on which they can base their important decisions. 
  3. As a fact checking organisation, we hope to provide some of this for the public, but there is more that the government and internet companies could do to help ensure good information rises to the top. Our submission aims to highlight immediate and long-term changes that would improve democratic engagement.
  4. In particular, we emphasise the urgent need for simple changes that can deliver great benefits.
  1. We also stress the importance of equipping people with the skills and tools they need to challenge what they see online themselves, with education for all ages and social groups being key to this.
  2. We thank the committee for the chance to respond to this broad ranging consultation. We would be happy to provide oral evidence, if that would be helpful.



1. How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect? 

  1. Digital technology has undoubtedly had both positive and negative effects on the way democracy works in the UK, but there are few suitable ways for organisations to measure the effect of anything on democracy in the UK; those that exist are either unavailable to everyone or held within technology companies.
  2. For instance, despite widespread efforts to increase voter turnout, the marked electoral register is only open to the British Election Study. This means civil society groups and researchers cannot test or assess methods to improve turnout. In other countries wider access is normal and beneficial: Facebook runs an “I voted” prompt at every election, which gives it insight into whether certain ads pushed someone to vote, and allows it to compare that data year on year. We would like to see wider access granted to specific groups or academics.
  3. More broadly, these issues are still evolving and it will be many years before the overall effect of any technological changes on society can be properly assessed. Even then, it may not be possible. In the interim, we would urge parliament to build up its knowledge in the area, in order to better understand the ways in which technology could influence democracy, to ensure it can legislate effectively and in a timely way to mitigate any associated risks. It is not necessary to have a perfect answer before you make improvements to the current systems.
  4. It is safe to say that democracy has not kept up with digital technology. The most basic democratic digital needs are not met online: for instance, push notifications to remind people where their polling stations are, or a list of the candidates standing for election. Civil society groups have plugged this gap, and they are doing a heroic job on little funding, but ultimately these are essential parts of a modern electoral system that government should guarantee.


2. How have the design of algorithms used by social media internet companies shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?

  1. It is difficult to fully understand the impact of algorithms on democratic debate, in the short or long term, without first gathering more information from the internet companies. We have called for greater transparency from internet companies in the algorithms they develop and use. Allowing external scrutiny will ensure that any tools developed are ethical, fair and responsible. We understand there is a balance to be struck between protecting firms’ trade secrets and accountability, but believe there should be a shift towards the latter wherever possible. 
  2. In particular, we have called for internet companies to provide more data to academics and fact checking organisations to allow for independent evaluation and greater transparency. Voluntary efforts do not go far enough and provide only piecemeal access to some academics – and, according to recent reports[i], one high-profile plan to share Facebook data with researchers has been criticised by the programme’s funders. 
  3. Some of our own experience of working alongside an internet company, through Facebook’s Third Party Fact-Checking scheme, has demonstrated the difficulty in getting detailed data from the company. As we said in our first transparency report on the scheme[ii], we want Facebook to share more data with fact checkers, so we can better evaluate both the content we are checking and our impact.
  4. The data that is used to train a model is an integral part of any machine learning algorithm. Algorithms fed by bad data will struggle to positively influence democratic debate; we believe the data that is fed in should be accurate, lacking in undue bias, and in many cases externally accountable. When it is not, it can have severe negative consequences, either on society as a whole, or on particular groups of people. In our experience, building world-leading artificial intelligence tools in this sensitive field while avoiding unintended harms requires constant care and attention. We are not confident that every company operating in this field takes that amount of care or even fully understands the risks.



3. What role should every stage of education play in helping to create a healthy, active, digitally literate democracy? 

  1. It is hard to overstate the importance of education in tackling the challenges facing democracy, especially in a digital age. Our view is that education aimed at supporting democracy should cover everything from the fundamentals of democracy – how and where to vote – to preparing people for the way democratic debate takes place online. 
  2. As an educational charity, we are supportive of media and digital literacy programmes, especially those that seek to target a broad range of ages and social groups. We have our own toolkit[iii] to help people spot false news and challenge what they see online, which has been used in 28 countries. 
  3. Full Fact’s chief executive is also a member of Ofcom’s Making Sense of Media programme, which has the stated aim of improving the online skills, knowledge and understanding of both adults and children, through collaborations with stakeholder groups and research into the area.
  4. We would emphasise the latter point to the committee: it is crucial that the UK develops a rigorous research base to complement any education efforts and ensure that programmes provide relevant information that is of practical use. We strongly believe that the public should be given the tools and skills they need to make their own decisions and navigate the digital environment.
  5. We also note that there are many education and training schemes in digital literacy on offer across the UK, from government, for-profit and non-profit organisations, but they are currently fragmented. Better mapping of these efforts, sharing of best practice, and coordination and collaboration would be a positive and arguably cost-effective first step to improving the country’s digital literacy. There could be a role here for the government or the aforementioned Ofcom programme. 
  6. Indeed, the government’s recent guidance for teaching online safety in schools[iv] contains a list of other resources, which is a good first step. However, it should be remembered that, while education for children is important, training must go beyond schools and colleges and look to reach adult populations for it to have an immediate impact on wider society.
  7. There is an obvious candidate to do more in providing independent information to voters: the Electoral Commission should be as trusted an entity on voting and voter information as the NHS is on health. It needs to be funded to meet this need.


Online campaigning 

4. Would greater transparency in the online spending and campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like? 

5. What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising? 

  1. We will address questions 4 and 5 together. 
  2. Greater transparency in online spending and campaigning is fundamental to the proper functioning of the UK’s democracy, and reforms are long overdue. In saying this we are echoing repeated recommendations from the Electoral Commission[v], Select Committees[vi], and others.
  3. It is impossible to know for sure what effects online targeted advertising will have in the future – arguably, the full effects on the current political process are also unknown. It is equally impossible to know whether the internet companies that are the focus of debate today will be the same ones in the future – governments must bear this in mind and avoid creating regulations tailored just to Facebook, Twitter and Google. 
  4. What we can say with certainty is that digital campaigns are different to offline ones. They generate millions of ads that are capable of being targeted at small, specific groups of people – microtargeting – meaning that, unlike in the offline world, no two people experience an election in the same way. Campaigners now run multiple versions of the same ad, rapidly testing them to identify which works best for which group, and spreading those that generate the most online engagement. During the 2016 US elections, the campaign for President Donald Trump is reported to have run 5.9 million different versions of ads[vii].
  5. The government has been told repeatedly, for many years, that the UK’s electoral laws are no longer fit for purpose and need to be updated for a digital age. The Electoral Commission has been calling for the imprint rule that covers print materials to be extended to online adverts since 2003, and we are in full support of these requests.
  6. The government itself has recognised the risks created by allowing elections to continue to operate without greater checks on online advertising, saying that extending the imprint rules to digital communications was “essential for promoting fact-based political debate and tackling disinformation online”[viii]. 
  7. Unfortunately, these requests have not been acted on with the necessary urgency, and the UK is now facing another election in which online political advertising is inadequately regulated, meaning the vote will not be protected from abuse.
  8. In addition, the government has yet to publish the technical details of these proposals, meaning there is still the risk that even when they are implemented, they won’t go far enough. Full Fact believes the best way to ensure that the new rules are fit for purpose – and allow both the campaigns and the people running them to be held to account – is to provide more granular details about adverts, both to voters and in a machine-readable database.
  9. This should be in the form of a publicly available database - updated in real-time in a machine-readable format - that contains details on each ad’s content, which groups or people it is targeted at, who it actually reached, how many versions of it are available, and how much it cost. 
  10. This is crucial because, without a complete database listing every advert in real-time, it will be almost impossible for observers to keep up, making it hard to scrutinise claims and preventing the public from seeing the full picture. And if that database isn’t provided in a machine readable format, journalists and organisations like Full Fact won’t be able to properly analyse the information it contains.
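Purely as an illustration of what “machine readable” could mean in practice, a single record in such a database might look something like the following. Every field name here is hypothetical, chosen to mirror the details listed above (content, targeting, actual reach, number of versions, and cost); it is a sketch, not a proposed standard.

```python
import json

# Hypothetical record for one advert in a public, machine-readable
# political ad database. All field names are illustrative only.
ad_record = {
    "ad_id": "2019-UK-000123",
    "content": "Text and media shown to users",
    "advertiser": "Example Campaign Ltd",
    "targeting_criteria": {"age_range": "25-34", "region": "West Midlands"},
    "audience_reached": 48210,   # who the ad actually reached
    "versions": 37,              # how many variants of this ad exist
    "spend_gbp": 1250.00,        # cost to date
    "first_shown": "2019-11-01T09:00:00Z",
}

# Machine readability means journalists and researchers can export and
# analyse every record programmatically, e.g. as JSON:
print(json.dumps(ad_record, indent=2))
```

The point of a structure like this is that analysis scales: a researcher could filter millions of such records by targeting criteria or spend, rather than inspecting adverts one at a time.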
  11. We have also asked for any new rules to be in place at all times. This is because campaigning is no longer confined to the pre-election period, as current events are demonstrating. A recent analysis[ix] showed that the UK’s political parties have spent roughly £1 million on partisan Facebook ads alone since mid-June, despite no election having been called.   
  12. In the same way that data on political adverts is only useful if it is delivered in real-time, information on overall campaign spending can only be truly useful – and offer real protections for democracy – if it is made public in real-time. As a society, we should not accept the status quo, which has seen the Electoral Commission grapple with spending in the EU referendum for three years. This demonstrates exactly why the Electoral Commission needs better funding and a strong tech team to develop the tools necessary to monitor spending as it happens so that any fraud or misuse is caught before it has effects on the overall outcomes of elections or referendums.
  13. We acknowledge that the internet companies have made efforts to increase transparency, but these do not go far enough. They fail to provide detailed or specific information, for instance on who has been targeted or reached; moreover, they can only ever provide a disjointed picture, as each company has taken slightly different steps. It is also crucial that the government does not rely on US internet companies, which can change their tools or rules when they like, to ensure election transparency.
  14. It is also worth noting that the UK is falling behind other countries in tackling this issue; a recent letter Full Fact sent to all parliamentarians highlights Canada’s efforts, and we would recommend the committee look in detail at its work on Protecting Democracy[x]. 


Privacy and anonymity 

6. To what extent does increasing use of encrypted messaging and private groups present a challenge to the democratic process? 

7. What are the positive or negative effects of anonymity on online democratic discourse? 

  1. The use of private communications to spread misinformation is already a major challenge that some countries and their fact checking organisations have to respond to. For example, some of our international colleagues are already looking into the best way to counter misinformation at source through increased engagement on messaging services like WhatsApp[xi] [xii] [xiii]. 
  2. It is likely that these companies could make it easier for institutions to engage with users who are spreading misinformation, but we would caution against knee-jerk reactions from governments that risk damaging free speech. Our approach would be to give users the ability to spot false news and offer them more information so they can make better informed decisions, rather than shutting down debate. 
  3. We would urge the committee to consider that anonymity and the ability to communicate via encrypted messages is relied upon by many, for instance whistle-blowers or those in less democratic regimes. Lawmakers should be aware that what the UK does in this space will be watched closely elsewhere in the world. 
  4. We would also note that there is an acceptance that peer-to-peer conversations and debate in small groups happen in the offline world, and that these are not policed as a matter of course. Just because something can be monitored or acted against doesn’t mean that government should act. 


Democratic debate 

9. To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy? 

  1. There is clear evidence that malicious actors are using all forms of communication available to them to undermine trust in the democratic process. Assessing the extent of this, or attributing specific actions to specific origins, is much more complicated. 
  2. We recommend that the internet companies make reporting processes much more straightforward, and publish clear processes that help users understand what will happen if they report content. This should also cover more transparent engagement with independent experts and with the government and public bodies such as the emergency services.
  3. We also believe that companies could do more to identify malicious activity. One specific extension of this idea would be to explore labelling bots as such, so people can quickly and easily see that and decide whether and how to engage. If it were a prerequisite of registering a bot account that it be clearly marked as a bot, failing to do so would be a clear breach of terms and conditions, giving the internet companies a clear basis to take action.
  4. There should also be government-led efforts to identify single sources of good, trustworthy information on democracy, which cover fundamental questions about voting and debunk common misinformation about elections. In this field there is too much emphasis on tackling bad information, and not enough emphasis on shining lights through the fog of uncertainty.
  5. Having fact checked three general elections and a referendum, we have noticed that we face the same kinds of misinformation each time. One example is posts saying you shouldn’t vote with a pencil because your vote will not be considered valid and can be rubbed out – this spreads distrust in both the system as a whole and those counting the ballots. Another example, more worrying for people’s ability to have their say, is the “advice” that people who want to vote for one party should do so on one day, while those who want to vote for another should do so the day after. 
  6. The NHS website is a good example of a trusted source that appears high up in search results, allowing people to access accurate information when they need it most. We would welcome efforts to do the same for information relevant to democracy. 
  7. Such approaches should go hand-in-hand with education schemes that help people learn how to think critically about, and challenge, the information they see. It is important to draw a distinction between making sure people have access to accurate information that allows them to trust the system where it is worthy of trust, and encouraging blind faith when a certain degree of scepticism can be healthy.


10. What might be the best ways of reducing the effects of misinformation on social media internet companies? 

  1. We would first of all commend the committee for framing the question to focus on reducing the effects of misinformation, rather than eradicating it. It is all too common for there to be a perception that we can, or should, be aiming at the latter, which risks encouraging over-reactions from governments and over-zealous policymaking. We should be aiming at a proportionate response that reduces harm, builds resilience to misinformation, and takes a platform-agnostic approach.
  2. Moreover, this policy needs to be developed through an open, transparent and democratic process, and – as we told the government in our response to the Online Harms White Paper – we do not feel that the currently outlined approach does this. In particular, we believe that the role of the proposed regulator is far too broad, that the White Paper does not pay due heed to the importance of the parliamentary process, and that it fails to fully address the risks associated with interference in free speech.
  3. One of the problems governments face in tackling this problem is the low standard of evidence in areas of this field, along with an evidence base skewed towards what is easiest to research. Our own research shows gaps in understanding between those looking at disinformation from a national security perspective and those starting from other perspectives, such as media, education and the internet, as well as specific topics like public health. This leads to relatively isolated communities of expertise that could, and should, be more effective at working together and learning from one another. 
  4. We would urge more interdisciplinary work among these fields to identify gaps in evidence and share research methods and best practice, and the development of a comprehensive, useful, and accessible evidence base – perhaps led jointly by UKRI, learned societies and other research institutions – to develop targeted cross-cutting research programmes.
  5. The government has proposed measures to mandate the use of fact checking services to address misinformation, especially in election periods. We are keen to do more of this work where it has a clear public benefit, but we would emphasise that the existence of high-quality independent fact checking organisations is not inevitable. Funding work like Full Fact’s is extremely difficult. 
  6. Not only would it be a mistake to mandate any company to work with organisations that do not exist, it also raises questions of how to ensure that new entrants do not have lower standards or a pure profit motive. This could undercut the work of Full Fact as a charity and the work of media outlets doing high-quality fact checking. Ultimately, this risks reducing the quality of fact checking done online, thus damaging the public’s trust in the process of fact checking.
  7. At the same time, as mentioned previously in this response, internet companies could help fact checking organisations by providing them with more information about the posts that gain the most traction on social media, and the effects that fact checks have on the spread of that information. Without this data, it is impossible to know which approaches to tackling misinformation work best.
  8. The Online Harms White Paper called for greater transparency from social media internet companies to address misinformation in a number of areas, and we are cautiously optimistic about a proposed focus on better and clearer reporting processes and increased expectations around monitoring and evaluation. 
  9. Full Fact’s work would also benefit from better access to trusted data from institutions, ideally in machine readable formats, so we can do quicker and more automated fact checking over time. 



11. How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish? 

  1. A crucial question here is who moderates the content. As a fact checking organisation, we are part of this effort and our views on the abilities of such organisations to effectively tackle misinformation are outlined above. 
  2. However, there is a limited number of high-quality, independent organisations in this space (we would recommend the committee look at the Code of Practice of the International Fact Checking Network, which sets out a minimal baseline standard of transparency, and its signatories). Done properly, fact checking is also a time-consuming process, and so technology companies are looking for ways to increase the scale of fact checking.
  3. In April, Facebook proposed plans for a “community review” process[xiv], which aims to allow users to point to journalistic sources to corroborate or contradict claims made in potentially false content. The company is in the process of consulting with fact checkers, among others, about the risks and benefits of this system. 
  4. We would be cautious about this approach until the specifics have been ironed out – it is true that there is the potential to involve more people in this work, but the success of this programme rests on who is allowed to carry out this review, and what effect this would have on the content they flag. Facebook would have to ensure it can prevent the system from being abused to limit free speech or attack certain groups or individuals. We would be happy to discuss this further or provide further evidence as we get more details.


Technology and democratic engagement 

12. How could the Government better support the positive work of civil society organisations using technology to facilitate engagement with democratic processes? 

  1. Full Fact is leading the field with our work on automated fact checking, through which we are developing tools to detect and check claims in real time, and use the results generated to spot and track trends over time. We recently won the Google AI for Social Good Impact Challenge along with our partners AfricaCheck, Chequeado, and the Open Data Institute.
  2. As such our answers to these questions will mainly focus on the possibility of using technology to benefit fact checking, but we would recommend as a minimum that basic democratic data on candidates, MPs, affiliations and polling station locations are created and supplied for use by third parties. At the moment, these are gathered by third parties; when Google and Facebook want to send reminders to people to vote they use unofficial data that has been crowdsourced by volunteers instead of official lists.
  3. We would urge the committee to look to other organisations for a fuller response on wider issues of the impact of technology and democratic engagement. In particular, we would commend the work done by Democracy Club, mySociety, doteveryone, and Operation Black Vote. 
  4. Regarding technology innovation in fact checking, we believe there needs to be a public benefit focus, scrutiny and accountability. Care needs to be taken about who is disadvantaged by the unintended consequences of the tools. There are not enough actors in this space who have this kind of approach.
  5. There are two areas Full Fact believes the government could invest in to encourage a step change in the quality of work in this field:
  1. Review and support the development of better Natural Language Processing data and libraries for more languages, perhaps through the international development budget. Existing tools provide good support for some Western European languages, less support for Asian languages, and generally favour the languages of richer countries. This means that we are some way from being able to provide global technological solutions to what is a global problem. These limitations have hindered Full Fact’s and our international peers’ work in this area and are a key barrier to the wider rollout of tools we already have.
  2. Support fact checkers and domain experts to provide independent assessments and benchmarks of proposed technologies by probing their strengths and weaknesses against carefully-designed test inputs. This function would be analogous to the role that Euro NCAP, the keeper of the crash test dummies, plays for the car industry.
  1. We would also note that developing ground-breaking technology in a non-profit environment is very difficult. Traditional sources of tech funding are profit oriented and focused on probabilities across their portfolio. Non-profit funding is smaller, focused on individual projects and less able to judge tech projects. Reaching scale is therefore hard, even when, as in Full Fact’s case, you are recognised as a leader in your field.


13. How can elected representatives use technology to engage with the public in local and national decision making? What can Parliament and Government do to better use technology to support democratic engagement and ensure the efficacy of the democratic process? 

  1. If we are to ask elected representatives to use technology to better engage with the public, they should be given more support in using digital technology effectively. Doteveryone has done great work in this area, but these efforts should be built on with in-house support. There have already been good examples of this kind of work from the House of Commons. Similarly, we imagine that many staff across government would welcome better training and support. 


14. What positive examples are there of technology being used to enhance democracy?

  1. We believe access to good information is a crucial part of democracy. We don’t expect people to make decisions based on facts alone, but we want people to be confident that they are not making a decision on the basis of a statement that may later turn out to be false, misleading or exaggerated. The internet allows Full Fact to serve millions of people every year.
  2. With a focus on fact checking, our approach is to identify solvable problems and develop technology to solve those specific issues, with the aim of using technology to scale, target, and dramatically increase the effectiveness of our work. We are already seeing the benefits of this, as our tools help us to live-check events like Prime Minister’s Questions and detect repeat claims that we can seek to correct.
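As an illustration only – this is not a description of Full Fact's actual tools, and it uses made-up claims and a naive similarity measure – detecting a repeat of a previously checked claim can be sketched as a simple text-similarity match:

```python
from difflib import SequenceMatcher
from typing import Optional

# Hypothetical store of previously fact checked claims.
checked_claims = [
    "you cannot vote with a pencil",
    "turnout was the highest on record",
]

def find_repeat(new_claim: str, threshold: float = 0.8) -> Optional[str]:
    """Return the previously checked claim most similar to new_claim,
    if any exceeds the similarity threshold; otherwise None."""
    best, best_score = None, 0.0
    for claim in checked_claims:
        score = SequenceMatcher(None, new_claim.lower(), claim).ratio()
        if score > best_score:
            best, best_score = claim, score
    return best if best_score >= threshold else None

# A near-verbatim repeat of a checked claim is matched...
print(find_repeat("You cannot vote with a pencil!"))
# ...while an unrelated statement is not.
print(find_repeat("completely unrelated statement"))
```

Real claim-matching systems use far more robust natural language techniques than character similarity, but the sketch shows the principle: once a claim has been checked, repeats can be flagged automatically rather than re-checked by hand.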
  3. However, at the moment, we would be deeply sceptical about any technology claiming to be able to distinguish between trustworthy and untrustworthy information at scale in such a way that it could be used to promote or demote arbitrary content being shared online. Even if a technology appeared to deliver useful results, we would expect to find damaging unintended consequences when we scrutinised how it worked.
  4. Beyond this, we would again point to the excellent work done by the civil society organisations referred to earlier, many of which gather and record information that is used by many other organisations in this space.


  [i]
  [ii]
  [iii]
  [iv]
  [v]
  [vi]
  [vii]
  [viii]
  [ix]
  [x]
  [xi]
  [xii]
  [xiii]
  [xiv]