- The Open University (OU) welcomes the Committee’s inquiry into digital technologies. This written submission presents the views of OU researcher Professor Harith Alani on the questions related to misinformation (Q2, Q10, and Q11 in the Call for Evidence).
- Based on the evidence from the OU’s research on misinformation, the following policy recommendations are made:
- Support interdisciplinary research on misinformation, to produce easy-to-use, accurate, unbiased, transparent, co-created, and explainable AI algorithms and tools that reduce the spread and impact of misinformation;
- Support research on automated methods to determine the social values associated with, or targeted by, misinformation;
- Invest in the development of tools and campaigns to raise people’s awareness of their consumption of, and exposure to, misinformation;
- Review the publishing of manipulated information by traditional media and public figures and its correlation with misinformation propagation on social media;
- Support the research and development of mechanisms to leverage the outcomes of legitimate fact checkers to curb the spread of misinformation on social media platforms.
Background on The Open University
- The author of this submission, Harith Alani, is Professor of Web Science at the Knowledge Media Institute, The Open University, where he leads the multidisciplinary Social Data Science group. His research is focused on studying various social phenomena on the Web to better understand their patterns and dynamics and to research, develop, and assess appropriate socio-technical solutions. He is currently the OU Principal Investigator on Co-Inform, a €4M international research project funded by the European Commission to study and co-create tools for tackling online misinformation.
Question 2: How has the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?
- These algorithms are designed specifically to help users find content similar to what they usually view or search for, and thus to strengthen and extend their engagement on the social media platform. A side effect is that the algorithms promote content that tends to reinforce users’ prior opinions, confirm their biases, and reduce their exposure to counter-opinions. The algorithms therefore have the unintended effect of polarising users’ views.
- Designing algorithms that recommend related but countering opinions could be one way of tackling this issue. However, such algorithms are harder to develop where opinions are not binary, since the parameters by which content can be deemed countering are diverse (opposing opinions, alternative views, opinions from other groups, experts, cultures, etc.) and are often difficult to determine automatically.
- One undesirable consequence of countering-content recommendation algorithms could be the recommendation of content that opposes facts, common sense, and our social and democratic values. There is evidence that social values affect the shareability of misinformation. Hence the design of such AI algorithms needs to account for these parameters, to preserve and reinforce such values, and ultimately to move beyond the mere goal of engaging users at all costs; a strategy often followed by social media platforms. However, much further research is needed to create algorithms that can detect such values in online content, take social values into account when recommending content, and increase the explainability of their output to users.
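- To illustrate the kind of algorithmic change discussed above, the following is a deliberately minimal, hypothetical sketch of a recommender that re-ranks candidate items by blending topical relevance with exposure to countering stances, instead of relevance alone. All item names, scores, and weights here are invented for illustration; no actual platform system is represented.

```python
# Toy sketch only: a re-ranker that balances topical relevance with
# exposure to countering stances. All values are illustrative assumptions;
# real recommendation systems are far more complex and not public.

def rerank(candidates, user_stance, diversity_weight=0.3):
    """Rank items by relevance blended with stance distance from the user.

    candidates: list of dicts with 'id', 'relevance' (0..1), and
                'stance' (-1..1, sign encodes position on an issue).
    user_stance: the user's estimated stance on the same -1..1 scale.
    diversity_weight: 0 reproduces a pure similarity ranker; higher
                      values increasingly reward counter-opinions.
    """
    def score(item):
        relevance = item["relevance"]
        # Normalised stance gap (0..1): rewards content that counters,
        # rather than reinforces, the user's existing view.
        stance_gap = abs(item["stance"] - user_stance) / 2
        return (1 - diversity_weight) * relevance + diversity_weight * stance_gap

    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"id": "echo", "relevance": 0.9, "stance": 0.8},     # agrees with user
    {"id": "counter", "relevance": 0.8, "stance": -0.7},  # opposing view
]

# With some weight on diversity, the opposing item is surfaced first;
# with diversity_weight=0, the ranker reverts to pure similarity.
ranked = rerank(candidates, user_stance=0.8)
```

Even this toy example shows why the research questions above matter: the single `stance` number stands in for exactly the values-detection problem that remains unsolved, and the choice of `diversity_weight` is a design decision for which platforms could be held accountable.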
Question 10: What might be the best ways of reducing the effects of misinformation on social media platforms?
- Social media has no doubt created a haven for the rapid spread of misinformation. Multiple mechanisms are required to reduce this highly damaging phenomenon, all of which require much further research and development. This includes AI tools to:
- Detect and remove or demote misinforming content;
- Match and promote content from fact-checkers when and where relevant;
- Raise individual users’ awareness of their own interactions with, and exposure to, misinformation in their social networks;
- Provide users with embedded tools to verify information quickly, easily, and reliably, and to detect, correct, and perhaps block social media group pages that spread harmful misinformation;
- Discourage users from uploading and/or sharing misinforming content.
- Although many social media platforms are researching and experimenting with some of the ideas above, their methods tend to be non-transparent, and their attention is often more focused on specific issues and topics (e.g., US politics, Russian propaganda, and certain types of extremism).
- Much research has been carried out in recent years to develop AI algorithms and tools for detecting misinformation and understanding its dynamics. Nevertheless, such technologies are often designed and developed with very little involvement from end-users and other stakeholders.
- Using registered fact-checkers’ expertise, opinions, and verification outcomes is certainly beneficial for acquiring appropriate and trusted assessments of information. These assessments are often published by fact-checkers as articles on their websites, as well as in machine-readable form using the ClaimReview schema, which enables search engines to recognise them easily and use them in search results. However, a number of related challenges hinder the use of such output for reducing misinformation on social media.
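- For illustration, a minimal ClaimReview record of the kind fact-checkers publish alongside their articles might look as follows. It is constructed here as a Python dictionary serialised to JSON-LD; the field names come from the schema.org ClaimReview vocabulary, while the claim text, URLs, organisation name, and rating values are invented placeholders.

```python
import json

# Minimal illustrative ClaimReview record (schema.org vocabulary).
# The claim text, URLs, organisation, and rating are invented placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/123",  # the fact-check article
    "claimReviewed": "Example claim circulating on social media",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {
            "@type": "CreativeWork",
            "url": "https://social.example/post/456",  # where the claim appeared
        },
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",        # verdict on the checker's own numeric scale
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "False",  # human-readable verdict label
    },
}

# Serialised as JSON-LD, this markup lets search engines (and potentially
# social platforms) match a published verdict to the claim wherever it
# reappears.
print(json.dumps(claim_review, indent=2))
```

The example also hints at the matching challenge: the verdict is tied to one recorded appearance of the claim, whereas on social media the same claim resurfaces in countless paraphrased forms that the markup alone cannot link.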
Question 11: How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish?
- There is no doubt that social media technology companies can do more to tackle the problem, and there is good evidence that some of them (especially the biggest companies, who are under the most pressure) are taking many positive steps in that direction. However, it is also important to acknowledge that the solution to this problem is extremely complex, and many of its relevant socio-technical issues remain poorly understood. Further open and well-organised support for, and collaboration with, researchers in academia and beyond is necessary to establish such solutions.
- Currently, the pressure to moderate is applied unevenly: to some technology companies but not to others. Furthermore, while the pressure on social media companies is increasing, no similar pressure appears to be applied to traditional media. It is well known that some traditional media regularly misrepresent information, wrap facts in many layers of subjective or unverifiable information, or word information carefully to reinforce certain biased views. It is therefore unclear how impactful the regulation of social media would be while other media can regularly publish misrepresented information. There is currently very little research on understanding and measuring this impact, or on tracking where misinformation originates and how it propagates across various media platforms.
- There is evidence that some influential politicians in the UK frequently spread information on social media that was published by unreliable and biased sources. If politicians, journalists, and media organisations are not subject to similar moderation processes and pressures, and are not held accountable for sharing misinformation, then it is unclear how effective our efforts to tackle misinformation at the societal level will be.
About The Open University
- The OU’s mission is to be Open to people, places, methods and ideas. For most of our undergraduate qualifications there are no academic entry requirements. We believe students should have the opportunity to succeed irrespective of their previous experiences of education.
- The OU operates across all four nations of the UK and has 175,000 students. We teach four in ten part-time UK undergraduates (41%).
- The OU is a world leader in distance learning. Our undergraduates do not attend a campus; they live in their own homes throughout the UK. One in five of our first-year undergraduates study at full-time intensity, a proportion that has almost doubled since 2012/13.
- In this year’s National Student Survey, overall satisfaction with the OU remains at 87%, keeping the OU in the top 20 of UK universities. The OU continues to rank first for assessment and feedback.
- There is no typical OU student. People of all ages and backgrounds study with us and for many reasons – to update their skills, get a qualification, boost their career, change direction, prove themselves or keep mentally active.
- 76% of our directly-registered students work full or part-time;
- 23% of our undergraduates live in the 25% most deprived areas;
- 24,709 students with disabilities studied with us in 2017/18;
- 33% of our students begin their studies with 1 A Level or less.
 Professor Harith Alani, http://stem.open.ac.uk/people/ha2294
 H2020 Co-Inform project https://coinform.eu/
 Farrell, Tracie; Piccolo, Lara; Coppolino Perfumi, Serena; Mensio, Martino; and Alani, Harith (2019). Understanding the Role of Human Values in the Spread of Misinformation. Conference for Truth And Trust Online (TTO), London, UK. http://oro.open.ac.uk/67033/
 Mensio, Martino and Alani, Harith (2019). MisinfoMe: Who’s Interacting with Misinformation? In: 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand. http://oro.open.ac.uk/66341/
 Fernandez, Miriam and Alani, Harith (2018). Online Misinformation: Challenges and Future Directions. In: WWW ’18 Companion: The 2018 Web Conference Companion, Lyon, France. http://oro.open.ac.uk/53734/
 Claim review mark-up schema to structure fact checking data https://schema.org/ClaimReview
 Mensio, Martino and Alani, Harith (2019). News Source Credibility in the Eyes of Different Assessors. Conference for Truth and Trust Online (TTO), London, UK. http://oro.open.ac.uk/62771/