Open University Submission – written evidence (DAD0061)

 

Executive Summary

 

  1. The Open University (OU) welcomes the Committee’s inquiry into digital technologies. This written submission presents the informed view of OU researcher Professor Harith Alani, focused on the questions relating to misinformation (Q2, Q10, and Q11 in the Call for Evidence).

 

  2. Based on the evidence from the OU’s research on misinformation, policy recommendations are made in response to each of these questions below.

 

Background on The Open University

  1. The author of this submission, Harith Alani,[1] is Professor of Web Science at the Knowledge Media Institute, The Open University, where he leads the multidisciplinary Social Data Science group. His research focuses on studying various social phenomena on the Web to better understand their patterns and dynamics, and on researching, developing, and assessing appropriate socio-technical solutions. He is currently the OU Principal Investigator on Co-Inform,[2] a €4M international research project funded by the European Commission to study and co-create tools for tackling online misinformation.

 

Question 2. How have the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?

 

  1. These algorithms are designed specifically to help users find content similar to what they usually view or search for, and thus to strengthen and extend their engagement on the social media platform. A side effect is that the algorithms promote content that tends to reinforce users’ prior opinions, confirm their biases, and reduce their exposure to counter-opinions. The algorithms therefore have an unintended polarising effect on users’ views.
  2. Designing algorithms that recommend related but countering opinions could be one way of tackling this issue (a simplified sketch is given after this list). However, such algorithms are more difficult to develop where opinions are not binary, since the parameters by which content can be deemed countering are diverse (opposing opinions, alternative views, opinions from other groups, experts, cultures, etc.) and are often difficult to determine automatically.
  3. One undesirable consequence of countering-content recommendation algorithms could be the recommendation of content that opposes facts, common sense, and our social and democratic values. There is evidence that social values impact the shareability of misinformation.[3] Hence the design of such AI algorithms needs to account for such parameters, to preserve and reinforce those values, and to move beyond the mere goal of engaging users at all costs, a strategy often followed by social media platforms. However, much further research is needed to create algorithms that can detect such values in online content, take them into account when recommending content, and explain their outputs to users.
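To make the trade-off concrete, the sketch below contrasts a purely similarity-driven ranking with a diversity-aware re-ranking in the style of maximal marginal relevance (MMR). The stance scores, weighting, and functions are invented for this illustration and do not describe any platform’s actual system.

    # Illustrative sketch only (invented data and weights); not any
    # platform's actual algorithm.

    def stance_similarity(a: float, b: float) -> float:
        """Similarity of two stance scores in [-1, 1], mapped to [0, 1]."""
        return 1.0 - abs(a - b) / 2.0

    def rank_by_engagement(items, user_stance):
        """Pure engagement ranking: items most like the user's prior views first."""
        return sorted(items, key=lambda i: stance_similarity(i, user_stance),
                      reverse=True)

    def rank_with_diversity(items, user_stance, tradeoff=0.4):
        """MMR-style greedy re-ranking: balance relevance to the user against
        redundancy with items already selected, so countering views surface."""
        remaining, selected = list(items), []
        while remaining:
            def mmr(i):
                relevance = stance_similarity(i, user_stance)
                redundancy = max((stance_similarity(i, s) for s in selected),
                                 default=0.0)
                return tradeoff * relevance - (1 - tradeoff) * redundancy
            best = max(remaining, key=mmr)
            selected.append(best)
            remaining.remove(best)
        return selected

    user = 0.9                            # user's prior stance on some topic
    candidates = [-0.8, -0.2, 0.3, 0.85]  # candidate items' stances
    print(rank_by_engagement(candidates, user))   # confirming content first
    print(rank_with_diversity(candidates, user))  # a countering view surfaces early

On this invented data, the engagement-only ranking lists confirming items first, whereas the MMR-style re-ranking surfaces a strongly countering item second. As argued above, a real system would also need to account for factual accuracy and social values before promoting countering content.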

Question 10. What might be the best ways of reducing the effects of misinformation on social media platforms?

  1. Social media has undoubtedly created a haven for the rapid spread of misinformation. Multiple mechanisms are required to reduce this highly damaging phenomenon, including a range of AI tools, all of which require much further research and development.

 

  2. Although many social media platforms are researching and experimenting with such mechanisms, their methods tend to be non-transparent, and their attention is often focused on specific issues and topics (e.g., US politics, Russian propaganda, and certain types of extremism).

 

  3. Much research has been carried out in recent years to develop AI algorithms and tools for detecting misinformation and understanding its dynamics. Nevertheless, such technologies are often designed and developed with very little involvement of end-users and other stakeholders.[5]

 

  4. Using registered fact checkers’ expertise, opinions, and verification outcomes is certainly beneficial for acquiring appropriate and trusted assessments of information. Fact checkers often publish these assessments as articles on their websites, as well as in machine-processable form using the ClaimReview schema,[6] which enables search engines to recognise the assessments easily and use them in search results. However, a number of related challenges hinder the use of such output for reducing misinformation on social media.
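For illustration only, the fragment below constructs a minimal ClaimReview record of the kind a fact checker might embed in an article as JSON-LD. All URLs, names, the claim, and the rating are invented; only the structure follows https://schema.org/ClaimReview.

    # Illustrative sketch only: a minimal ClaimReview record
    # (https://schema.org/ClaimReview). All URLs, names, and the claim
    # itself are invented for this example.
    import json

    claim_review = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": "https://example-factchecker.org/reviews/claim-123",
        "datePublished": "2019-11-01",
        "author": {"@type": "Organization", "name": "Example Fact Checker"},
        "claimReviewed": "An invented claim circulating on social media",
        "itemReviewed": {
            "@type": "Claim",
            "appearance": [{"@type": "CreativeWork",
                            "url": "https://example.com/original-post"}],
        },
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": 1, "bestRating": 5, "worstRating": 1,
            "alternateName": "False",
        },
    }

    # Embedding this JSON-LD in the fact-check article lets search engines
    # recognise the verdict and surface it alongside the claim.
    print(json.dumps(claim_review, indent=2))

Publishing assessments in this structured form is what allows search engines, and in principle social media platforms, to match verdicts automatically against circulating content.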

 

 

Question 11. How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish?

 

  1. There is no doubt that social media technology companies can do more to tackle the problem, and there is good evidence that some of them (especially the biggest companies, who are under the most pressure) are taking many positive steps in that direction. However, it is also important to acknowledge that the solution to this problem is extremely complex, and many of the relevant socio-technical issues remain poorly understood. Open and well-organised support for, and collaboration with, researchers in academia and beyond is necessary to establish such solutions.
  2. Currently, the pressure to moderate is applied to some technology companies and not to others. Furthermore, while the pressure on social media companies is increasing, no similar pressure appears to be applied to traditional media. It is well known that some traditional media regularly misrepresent information, wrap facts in many layers of subjective or unverifiable information, or author information in a carefully chosen way to reinforce certain biased views. It is therefore unclear how impactful the regulation of social media would be while other media can regularly publish misrepresented information. There is currently very little research on understanding and measuring this impact, or on tracking where misinformation originates and how it propagates across media platforms.
  3. There is evidence that some influential politicians in the UK frequently share information on social media that was published by unreliable and biased sources. If politicians, journalists, and media organisations are not subject to similar moderation processes and pressures, and are not held accountable for sharing misinformation, then it is unclear how effective our efforts to tackle misinformation at the societal level will be.

 

 

 

About The Open University

 

  1. The OU’s mission is to be Open to people, places, methods and ideas. For most of our undergraduate qualifications there are no academic entry requirements. We believe students should have the opportunity to succeed irrespective of their previous experiences of education.

 

  2. The OU operates across all four nations of the UK and has 175,000 students. We teach four in ten part-time UK undergraduates (41%).

 

  3. The OU is a world leader in distance learning. Our undergraduates do not attend a campus; they live in their own homes throughout the UK. One in five of our first-year undergraduates study at full-time intensity, a proportion that has almost doubled since 2012/13.

 

  4. In this year’s National Student Survey, overall satisfaction with the OU remains at 87%, keeping the OU in the top 20 of UK universities. The OU continues to rank first for assessment and feedback.

 

  5. There is no typical OU student. People of all ages and backgrounds study with us and for many reasons: to update their skills, get a qualification, boost their career, change direction, prove themselves or keep mentally active.

 

 


 


[1] Professor Harith Alani, http://stem.open.ac.uk/people/ha2294

[2] H2020 Co-Inform project https://coinform.eu/

[3] Farrell, Tracie; Piccolo, Lara; Coppolino Perfumi, Serena; Mensio, Martino and Alani, Harith (2019). Understanding the Role of Human Values in the Spread of Misinformation. In: Conference for Truth and Trust Online (TTO), London, UK. http://oro.open.ac.uk/67033/

[4] Mensio, Martino and Alani, Harith (2019). MisinfoMe: Who’s Interacting with Misinformation? In: 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand. http://oro.open.ac.uk/66341/

[5] Fernandez, Miriam and Alani, Harith (2018). Online Misinformation: Challenges and Future Directions. In: WWW ’18 Companion: The 2018 Web Conference Companion, Lyon, France. http://oro.open.ac.uk/53734/

[6] The ClaimReview mark-up schema for structuring fact-checking data: https://schema.org/ClaimReview

[7] Mensio, Martino and Alani, Harith (2019). News Source Credibility in the Eyes of Different Assessors. Conference for Truth and Trust Online (TTO), London, UK. http://oro.open.ac.uk/62771/