Hadley Newman – written evidence (FOJ0061)

 

House of Lords Select Committee on Communications and Digital: Future of journalism inquiry

 

Submission in response to Question 4:

How have digital technologies changed the production of journalism?

 

Advances in Automated News Production

 

Hadley Newman is a consultant and senior director with Publicis Groupe, a global leader in marketing and communications. He advises government and business leaders on the strategic role that digital communication technologies can play in strengthening audience engagement, influencing behaviours, and shaping perceptions. His international experience in targeted information activity and the strategic use of technology includes segmentation, social listening, network and sentiment analyses, and programmatic targeting for reputation management and citizen engagement. As a sociologist, Hadley is interested in how social media algorithms can be used to manipulate public opinion; his doctoral research focuses on political communication.

 

 

Abstract

 

Journalism – both as a profession and an industry – has been transformed by digital technology. In its halcyon days, the fourth estate was a bastion of democracy, one with an unrivalled capacity for advocating transparency and accuracy and for disseminating information on issues of national and international significance. Journalists were trusted for their impartiality, their apolitical coverage of stories, and their truthfulness. Now, reputable news outlets rely as much on bots and algorithmically generated stories as on traditional journalistic skills. While this has its advantages, bots used in the 2016 United States presidential election, the 2017 German federal election, and the 2016 United Kingdom European Union membership referendum influenced, if not manipulated, public opinion and outcomes.

 

Automating news production offers new possibilities for creating content at scale, personalising information at a low cost of adaptation, and covering events more quickly than traditional methods. Bots, or, more precisely, the agents behind them, also have a reach that far exceeds traditional print media.

 

Following a review and analysis of the sociological research on this topic, this report makes two key assertions: (1) there is significant evidence that bots pose a threat to public communication (journalism); however, (2) algorithms and automation can also be used to improve the quality of journalism, news production, and public understanding of current events.

 

 

Keywords: disinformation, manipulation, infodemic, computational propaganda


Introduction and Background

 

With the advent of Web 2.0 in 2004, the ubiquity of social media, and the accessibility of smartphones, tablets, and other portable digital devices, news consumption increased exponentially (Maniou & Veglis, 2016). To keep pace with the rate of consumption, new forms of journalism arose to streamline, even automate, the writing process. This has had a profound impact on journalistic narratives, content, and professional communication practices. The evidence presented in this report points to the rise and use of bots as the most significant change in the production of journalism in recent times.

 

Communication and content-creation bots are best understood as computer programs purpose-built to perform automated tasks. Typically, they are used to collate, deconstruct, reconstruct, and share news through a range of social media outlets and other channels. Often, this is news that would otherwise have gone unpublished, for better or worse.
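
By way of illustration, the short Python sketch below shows the basic collate-compose-share loop that such a bot performs. It is a minimal illustration only: the feed URL is a hypothetical placeholder, and the publish() function stands in for what, in a real bot, would be a call to a social media platform's API.

    # Minimal sketch of a news-collation bot: fetch a feed, reshape each item
    # into a short social media post, and hand it to a (stubbed) publish step.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/news/rss"  # hypothetical source feed

    def collect_items(feed_url):
        """Collate: download an RSS feed and extract title/link pairs."""
        with urllib.request.urlopen(feed_url) as response:
            tree = ET.parse(response)
        return [{"title": item.findtext("title", ""),
                 "link": item.findtext("link", "")}
                for item in tree.iter("item")]

    def compose_post(item, max_len=280):
        """Reconstruct: rewrite an item as a short post with a link."""
        return f"{item['title']} {item['link']}"[:max_len]

    def publish(post):
        """Share: a real bot would call a platform API here; we just print."""
        print(post)

    if __name__ == "__main__":
        for item in collect_items(FEED_URL):
            publish(compose_post(item))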

 

In terms of structure, this report is divided into four sections. It begins by describing the evolution and eventual prevalence of automation in generating news. It then explores the ethical issues of how the general public (the ‘majority’) consumes journalistic content created by bots (the ‘minority’). Reflecting on the evidence, two distinct but interrelated conclusions are offered: (a) there is significant evidence that bots pose a threat to public communication, and (b) algorithms and automation can also be used to improve the quality of journalism, news production, and public understanding of current events. Finally, the report concludes that there is an urgent need for questions to be asked about the rise and use of bots in public communications, through the establishment of an all-party parliamentary group on communications in the United Kingdom. This would provide an impartial forum for parliamentarians, educators, and employers to develop a greater understanding of how the digital transformation has changed journalistic practices and the industry, and of how the public acts on the news they receive.

 

Journalism and Automated News

 

  1. The British news and media website TheGuardian.com (formerly known as Guardian.co.uk) has been using bots to generate news content for at least a decade. In 2010, the platform focused on automated sports news: journalists obtained statistics from games, as well as historical information on individual players and teams, and by combining this data with prewritten phrases, staff members could automatically compose stories (Bunz, 2010). The website’s leadership team did not stop there, however, and continued to extend its use of automation in the years that followed (Gani & Haddou, 2014; Good & Wilk, 2016).

 

Industry-wide, bots began surpassing journalists’ output by 2014. The Washington Post reported that its own ‘robot reporter’ had published 850 articles within a 12-month period (Miller, 2015). To put this into perspective, it is questionable whether even a seasoned reporter could reach that level of output. It stands to reason that as bots continue to improve their performance, editors will become increasingly reliant on them.
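
The template-driven technique described above, in which structured match data are slotted into prewritten phrases, can be made concrete with a short, illustrative Python sketch. The match data and the phrases below are invented for illustration; production systems draw on live data feeds and far larger template libraries.

    # Illustrative template-driven story generation: structured statistics
    # are slotted into prewritten phrases chosen according to the result.
    MATCH = {
        "home": "Rovers", "away": "United",
        "home_goals": 3, "away_goals": 1,
        "scorer": "Smith", "minute": 77,
    }

    OPENINGS = {
        "win": "{home} swept past {away} with a commanding "
               "{home_goals}-{away_goals} victory.",
        "draw": "{home} and {away} shared the points in a "
                "{home_goals}-{away_goals} draw.",
        "loss": "{home} fell to {away}, beaten "
                "{away_goals}-{home_goals} at home.",
    }

    DETAIL = "{scorer} struck the decisive blow in the {minute}th minute."

    def generate_story(match):
        """Pick a prewritten opening based on the result, then fill in the data."""
        if match["home_goals"] > match["away_goals"]:
            outcome = "win"
        elif match["home_goals"] == match["away_goals"]:
            outcome = "draw"
        else:
            outcome = "loss"
        return " ".join([OPENINGS[outcome].format(**match),
                         DETAIL.format(**match)])

    print(generate_story(MATCH))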

 

  2. Based on the experiences of the staff at TheGuardian.com and the transparency of The Washington Post, it is clear that journalism, and the communication industry generally, is undergoing a paradigm shift. This is perhaps most evident in the training and capacity-building efforts of university departments, which are asking how best to prepare future generations of journalists for on-the-job demands and the nature of their work (Pavlik, 2013). Experienced journalists tend to view recent graduates as possessing advanced skills in technology but lacking the traditional skills the industry was founded upon (Pierce & Miller, 2007). Above all, new journalists require robust interviewing and critical thinking skills, not to mention the ability to discern when information is newsworthy (Ferrucci, 2018).

 

  3. Yet these skills are disappearing from the curriculum in favour of “algorithmic, social-scientific, and mathematical processes and systems for the production of news” (Young & Hermida, 2015, p. 381). These technological innovations, in particular algorithmic advances and automation, allow news stories to be created and disseminated faster than is humanly possible (Cohen, 2015). So much so, in fact, that these rapidly produced stories are often dubbed ‘churnalism’ (Van Hout & Van Leuven, 2017), highlighting the careful balance that must be struck between new technologies and traditional skills.

 

Bots, Automated News, and Political Communication

 

  4. In 2014, Venezuela’s government officials used bots to enhance their social media impact. Although they began by inflating their follower numbers, their social media representatives quickly switched tactics to retweeting their own commentary and announcements (Biddle et al., 2015). It is estimated that 2,500 of the accounts retweeting President Maduro’s comments were bots (Perez, 2014). While this was the first recorded instance of automatically disseminated news, the technique was quickly copied and reproduced. As early as 2015, extensive use of autonomous, bot-based political lobbying tactics was recorded in Russia, Mexico, China, Australia, the United Kingdom, the United States, Azerbaijan, Iran, Bahrain, South Korea, Turkey, Saudi Arabia, and Morocco – constituting 60% of all online activity (Bastos & Mercea, 2019).

 

  5. Over the past five years, researchers have investigated the role of bots in automated news and political communication (Bastos & Mercea, 2019). This topic has also received extensive press coverage (Silva, 2016), much of it focused on the implications for political communication, democracy, and elections (Shorey & Howard, 2016; Silva, 2016; Woolley & Howard, 2016). The scale of bot deployment is a concern in political news (Bessi & Ferrara, 2016), with previous research reporting that, perhaps not surprisingly, bots are most common in polarised political discussions (Ferrara et al., 2016).

 

  6. In the 2016 United States presidential election, Donald Trump won the ‘Twitter debate’ thanks to his campaign team’s strategic use of bots (Bessi & Ferrara, 2016; Silva, 2016). Bots, in this context, are social media accounts across various networks that send automatically generated content and interact with other accounts and individual users. Their creators strive to maintain the appearance of human-managed accounts by establishing a profile and replicating a human tone of voice and typical communication patterns (Beskow & Carley, 2018). It has been widely reported that 19 million bot accounts were used to tweet news for Donald Trump and Hillary Clinton in the week leading up to the election. The Trump campaign dominated artificially generated news, with his bots outnumbering Clinton’s by 5:1 (Michael, 2017; Shu et al., 2017). Pro-Trump bots worked in an insidious (or efficient, depending on one’s perspective) way to influence public opinion, surreptitiously hijacking pro-Clinton hashtags like #ImWithHer and propagating fake news stories (Byrnes, 2016).
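
Beskow and Carley (2018) describe detecting such accounts by leveraging network and activity metrics. The toy Python heuristic below illustrates the general idea of scoring an account on behavioural signals; the specific thresholds and weights are invented for illustration and are not drawn from any published detector.

    # Toy bot-scoring heuristic in the spirit of metric-based detection.
    # The kinds of signals are real; the thresholds and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Account:
        posts_per_day: float   # automated accounts often post at high volume
        account_age_days: int  # many bot accounts are newly created
        followers: int
        following: int
        default_profile: bool  # an unchanged stock profile is a weak signal

    def bot_score(acct):
        """Return a 0..1 score; higher means more bot-like (illustrative only)."""
        score = 0.0
        if acct.posts_per_day > 50:         # hypothetical threshold
            score += 0.35
        if acct.account_age_days < 30:      # hypothetical threshold
            score += 0.25
        if acct.following > 10 * max(acct.followers, 1):
            score += 0.25                   # follows far more than it is followed
        if acct.default_profile:
            score += 0.15
        return min(score, 1.0)

    suspect = Account(posts_per_day=120, account_age_days=10,
                      followers=15, following=2000, default_profile=True)
    print(f"bot score: {bot_score(suspect):.2f}")  # prints 1.00 for this account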

 

  7. More recently, a report issued by the Reuters Institute for the Study of Journalism at Oxford University (Nielsen et al., 2020) explored how people accessed and rated news and information about COVID-19 in six countries, including the UK, in the early stages of the pandemic (31 March to 7 April 2020). The purpose of the report was to offer an analysis that would “be useful to journalists, news media, platform companies, public authorities, and citizens as they think about the flow of news and information around coronavirus”. Overall, Nielsen and colleagues documented how much misinformation people claim to encounter from different news sources and platforms. They showed that news use had increased, that people were turning to social media and video sites to get news about COVID-19, and that approximately a third of respondents, on average, said “they have seen a lot or a great deal of … misinformation in the last week” (Nielsen et al., 2020).

 

  8. On a related note, issues surrounding disinformation about COVID-19 are highlighted by Collins (2020), who argued that it should be “an offence for people to knowingly, maliciously, and, at scale, spread disinformation about COVID-19 with the intent of harming public health”. Further, he suggested that big tech companies will only “take more effective action against the source of it, and the means by which it is being distributed” if disinformation is regarded as a legal offence. Indeed, Collins observes that “the ‘infodemic’ around the coronavirus also demonstrates, like the anti-vaccine movement before it, the risk of physical harm that can come from online disinformation”. Easily produced and accessible pseudo-journalism is having a tangible, and in some cases detrimental, effect on human life and well-being. At the same time, it is worth noting that in this evolving pandemic, much of what we learn about COVID-19 is difficult to separate clearly and neatly into information and disinformation, true and false, reliable and unreliable (Brennen et al., 2020).

 

  9. Even as far back as 2017, a study into that year’s federal election in Germany found that political conversation on Twitter did not correspond with the polls. There were two issues of note. First, compared with its popularity in the polls, the right-wing opposition party, the AfD, was disproportionately dominant on Twitter, with most of the bots working in its favour. Second, and more broadly, German social media users shared links to political news and information over junk news by a ratio of 4:1 (Neudert et al., 2017).

 

Ethical Issues: The Manipulation of the Majority by a Minority

 

  10. There is a clear ethical distinction between misinformation and disinformation. Even though both are sources of misleading and inaccurate information, they are conceptually distinguished on the basis of authorial intent. Misinformation is the inadvertent sharing of false information. By contrast, disinformation is the deliberate creation and sharing of information that is known to be false, with the aim of causing confusion or leading the recipient to believe a lie (DiResta, 2018). Computational propaganda is the manipulation of public life through the use of data analytics, algorithms, and automation (Woolley & Howard, 2016, p. 4886). Used in modern disinformation campaigns (Woolley & Howard, 2018), computational propaganda presents multiple issues for social media platforms, including challenges to sustainable journalism models (Bradshaw & Neudert, 2018, p. 5), to which the platforms have responded with various initiatives intended to strengthen quality journalism. These initiatives included financial support for training journalists and, in the case of Google, the employment of journalists in several countries to monitor misinformation in the lead-up to elections (Taylor & Hoffmann, 2019).

 

  11. There are many ethical issues associated with the use of bots (as a minority force) in news production, not least that bots now have the power, capability, and means to manipulate the news viewed by the public (the majority, and the recipients) at very little cost, with minimal effort, and with incredible reach. News-creating bots have played a major role in the weaponisation of news on social media platforms, as they are an easy-to-use, cost-effective, and agile (Confessore et al., 2018) way to tap into the social media attention economy (Tufekci, 2013). Diverse organisations, including NATO (Bertolin et al., 2017), the Data & Society Research Institute (Marwick & Lewis, 2017), Facebook (Weedon et al., 2017), and the Canadian Security Intelligence Service (2018), have all issued reports exploring news creation and communication warfare on social media.

 

  12. Automated news can be used as a propaganda mechanism to draw attention to, or away from, political news. In their 2016 report, professors at Harvard, Stanford, and the University of California, San Diego estimated that the Chinese government fabricates 488 million social media comments annually, which have been used to control the narrative around political issues (Oster, 2016). In addition to creating comments, officials actively deleted messages that did not conform to their agenda. Within a three-month period, Sina Weibo (the Chinese equivalent of Twitter) saw 13% of its user content openly deleted by government officials. In general, the deleted information included the terms ‘Tibet’, ‘Falun Gong’, and ‘democracy’, or other messages that officials deemed politically charged (King et al., 2017).

 

  13. During the 2011 Arab Spring, journalists were empowered to give eyewitness accounts of events in real time through Twitter, using the hashtags #Syria, #Daraa, and #Mar15 to amplify their messages (York, 2011). However, these journalists were threatened online; as one blogger noted, “These accounts were believed to be manned by Syrian intelligence agents with a poor command of both written Arabic and English, and an endless arsenal of bile and insults” (Michael, 2017). When the journalists did not succumb to these threats, bots created by the Bahrain-based company EGHNA operated pro-regime accounts that were able to outrank the journalists’ work and replace it with acceptable photography of Syria (York, 2011). While the journalists did their best to adapt to the new technologies, the regimes were better funded and resourced. In fact, EGHNA even noted on its website (under the category ‘Success Stories’) how much it had achieved in terms of public opinion by promoting Syria’s beauty.

 

  14. Following the Arab Spring, Iran attempted to gain dominance over Saudi Arabia through the use of social media to broaden its influence across the Arab states (Zweiri, 2016). Since 2018, the country’s political leaders have created and operated news websites to promote their political agendas, with a particular focus on criticising Saudi Arabia and supporting Syria’s President, Bashar al-Assad. That being said, the Iranian origins of the sites have remained well concealed and would not be apparent to Arab users (Elswah et al., 2019).

 

  15. Furthermore, the Islamic State terror group (ISIS) ‘ghost tweeted’ its propaganda from automated accounts to make it appear as though it had a large, sympathetic following (Woolley, 2014). This took the form of an international news campaign that aimed to attract personnel and funding from a global audience. The fake news that it generated and promulgated included stories of mass killings of Iraqi soldiers; on one occasion, it reported 1,700 deaths when there had been only 11 (Nordland & Rubin, 2014).

 

 

Conclusion

 

  16. This brief, evidence-based report makes two distinct but interrelated points: (1) there is significant evidence that bots pose a threat to public communication (journalism); however, (2) algorithms and automation can also be used to improve the quality of journalism, news production, and public understanding of current events. The literature highlights the fundamental change in the nature of journalistic practices and the industry brought about by these technologies and the ubiquity of social media. In discussing artificially created news, Napoli (2015) raised an important point about the extent to which social media platforms, and the algorithms behind them, reflect the public’s interests and values; ISIS, for example, employs the same media strategies as some recognised governments, mainstream political parties, and social activists.

 

  17. During the 2016 United States presidential election and the 2017 German federal election, bots were considered to have influenced (or even manipulated) public opinion through disinformation and misinformation. That being said, compared with traditional journalism, automating the production of news and information offers new possibilities for creating content at high speed and scale, with more personalisation and a relatively low cost of production. Undeniably, the agents behind bots have an extensive reach that far exceeds traditional print media. Furthermore, as was seen during the Arab Spring and more recently in Iran and China, automated journalism can change (or at least disguise) the narrative, flood political public discourse with counter-arguments and claims, or even suppress a particular topic by inundating social media users with irrelevant content. Finally, the threat posed by disinformation makes journalists’ jobs more complex and presents a heightened risk of sharing misinformation.

 

  18. For these reasons, among others, an all-party parliamentary group (APPG) on communications is required to raise awareness and understanding of the threat that bots and nefarious users pose, through mis- and disinformation and computational propaganda, to the integrity of democracy and to professional public communications. This is particularly important during times of national crisis, such as the COVID-19 pandemic. The formation of an APPG on communications would provide a parliamentary platform to identify and debate emerging issues, disseminate knowledge, and facilitate engagement between industry, academia, and parliamentarians regarding contemporary issues and concerns associated with communication. Open to all members of both Houses, the APPG on communications would provide a valuable opportunity for parliamentarians to engage with individuals and organisations outside Parliament on issues related to communications and technologies. Further, in support of the communications industry, the committee should aim to:

 

 

 

 

 

 

 

April 2020

REFERENCES

 

Bastos, M.T. & Mercea, D. (2019). The Brexit botnet and user-generated hyperpartisan news. Social Science Computer Review, 37(1): 38-54.

Bertolin, G., Agarwal, N., Bandeli, K., Biteniece, N. & Sedova, K. (2017). Digital Hydra: Security implications of false information online (ed. G. Bertolin). Riga, Latvia: NATO Strategic Communications Centre of Excellence. https://www.stratcomcoe.org/digital-hydra-security-implications-false-information-online

Beskow, D.M. & Carley, K.M. (2018). Bot conversations are different: Leveraging network metrics for bot detection in Twitter. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 825-832). IEEE.

Bessi, A. & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21(11).

Biddle, E.R., Diaz, M., Li, W., Lim, H. & West, S.M. (2015). Netizen Report: Leaked Documents Reveal Egregious Abuse of Power by Venezuela in Twitter Arrests. Global Voices Advocacy. https://advox.globalvoices.org/2015/07/15/netizen-report-leaked-documents-reveal-egregious-abuse-of-power-by-venezuela-in-twitter-arrests/

Bradshaw, S. & Neudert, L.-M. (2018). Government responses to social media manipulation. Computational Propaganda Project Working Paper.

Bunz, M. (2010). In the US, algorithms are already reporting the news. The Guardian, 30 March. https://www.theguardian.com/media/pda/2010/mar/30/digital-media-algorithms-reporting-journalism

Byrnes, N. (2016). How the bot-y politic influenced this election. MIT Technology Review. https://www.technologyreview.com/s/602817/how-the-bot-y-politic-influenced-this-election/

Canadian Security Intelligence Service (2018). Who said what? The security challenges of modern disinformation. World Watch: Expert Notes series. Ottawa, Canada: Canadian Security Intelligence Service. https://www.canada.ca/content/dam/csis-scrs/documents/publications/disinformation_post-report_eng.pdf

Cohen, N. (2015). From Pink Slips to Pink Slime. The Communication Review 18(2): 98–122.

Collins, D. (2020) We must do more to police the spread of fake news and disinformation about Covid-19. The House. https://www.politicshome.com/thehouse/article/we-must-do-more-to-police-the-spread-of-fake-news-and-disinformation-about-covid19

Confessore, N., Dance, G., Harris, R. & Hansen, M. (2018). The follower factory. The New York Times, 27 January. https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html

DiResta, R. (2018). Computational propaganda: If you make it trend, you make it true. The Yale Review, 106(4): 12-29.

Dörr, K. (2015). Mapping the Field of Algorithmic Journalism. Digital Journalism, 1–24. https://doi.org/10.1080/21670811.2015.1096748

Elswah, M., Howard, P.N. & Narayanan, V. (2019). Iranian Digital Interference in the Arab World. Data Memo. Project on Computational Propaganda, Oxford, United Kingdom.

Taylor, E. & Hoffmann, S. (2019). Industry responses to computational propaganda and social media manipulation. Working Paper 2019.4. Oxford, UK: Project on Computational Propaganda. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/11/Industry-Responses-Walsh-Hoffmann.pdf

Ferrara, E., Varol, O., Davis, C., Menczer, F. & Flammini, A. (2016). The Rise of Social Bots. Communications of the ACM, 59(7): 96-104. doi: 10.1145/2818717

Ferrucci, P. (2018). “We’ve Lost the Basics”: Perceptions of Journalism Education From Veterans in the Field. Journalism & Mass Communication Educator, 73(4): 410-420.

Gani, A. & Haddou, L. (2014). Could robots be the journalists of the future? The Guardian, 16 March. https://www.theguardian.com/media/shortcuts/2014/mar/16/could-robots-be-journalist-of-future

Good, N. & Wilk, C. (2016). Introducing the Guardian Chatbot. Inside the Guardian blog, 7 November. https://www.theguardian.com/help/insideguardian/2016/nov/07/introducing-the-guardian

King, G., Pan, J. & Roberts, M.E. (2017). How the Chinese Government fabricates social media posts for strategic distraction, not engaged argument. American Political Science Review, 111(3): 484-501.

Maniou, T. & Veglis, A. (2016). Selfie Journalism: Current Practices in Digital Media. Studies in Media and Communication, 4(1): 111-118. https://doi.org/10.11114/smc.v4i1.1637

Marwick, A. & Lewis, R. (2017). Media manipulation and disinformation online. New York, NY: Data & Society Research Institute. https://datasociety.net/output/media-manipulation-and-disinfo-online

Michael, K. (2017). Bots trending now: Disinformation and calculated manipulation of the masses. https://technologyandsociety.org/bots-trending-now-disinformation-calculated-manipulation-masses/

Napoli, P.M. (2015). Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9): 751-760.

Narayanan, V., Howard, P.N., Kollanyi, B. & Elswah, M. (2017). Russian involvement and junk news during Brexit. Computational Propaganda Project, Data Memo, (2017.10).

Neudert, L., Kollanyi, B. & Howard, P.N. (2017). Junk news and bots during the German parliamentary election: What are German voters sharing over twitter? (pp. 1–6). The Computational Propaganda Project. University of Oxford

Newman, N., Fletcher, R., Kalogeropoulos, A. & Nielsen, R.K. (2019). Reuters Institute Digital News Report 2019. Oxford: Reuters Institute for the Study of Journalism.

Nielsen, R.K., & Graves, L. (2017). "News you don't believe": Audience perspectives on fake news. Oxford: Reuters Institute for the Study of Journalism.

Nielsen, R.K., Fletcher, R., Newman, N., Brennen, J.S. & Howard, P.N. (2020) Navigating the ‘Infodemic’: How People in Six Countries Access and Rate News and Information about Coronavirus. Misinformation, Science, And Media. Report published by the Reuters Institute for the Study of Journalism.

Nordland, R. & Rubin, A.J. (2014). Massacre claim shakes Iraq. The New York Times. https://www.nytimes.com/2014/06/16/world/middleeast/iraq.html

Oster, S. (2016). China fakes 488 million social media posts a year: Study. Bloomberg News, 19 May. https://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism

Pavlik, J.V. (2013). Innovation and the future of journalism. Digital Journalism, 1(2): 181-193.

Perez, Y. (2014). ¡Al Descubierto! Los Robots Retuiteadores de Nicolás Maduro (UPS) [Exposed! Nicolás Maduro’s retweeting robots]. La Patilla, 14 July 2014.

Pierce, T., & Miller, T. (2007). Basic journalism skills remain important in hiring. Newspaper Research Journal, 28(4), 51-61.

Sanchez-Gonzales, H. & Sanchez-Gonzalez, M. (2017). Bots as a news service and its emotional connections to audiences: The case of Politibot. Doxa Comunicación: Revista interdisciplinar de estudios de Comunicación y Ciencias Sociales, 25: 63-84.

Shorey, S. & Howard, P.N. (2016). Automation, Big Data and Politics: A Research Review. International Journal of Communication, 10, 5032–5055.

Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22-36.

Silva, S. (2016). Trump's Twitter debate lead was 'swelled by bots'. BBC News. http://www.bbc.co.uk/news/technology-37684418

Slaček Brlek, S., Smrke, J. & Vobič, I. (2017). Engineering Technologies for Journalism In the Digital Age: A case study. Digital Journalism, 5(8): 1025-1043.

Tufekci, Z. (2013). ‘Not this one’: Social movements, the attention economy, and microcelebrity networked activism. American Behavioral Scientist, 57(7): 848-870.

Van Hout, T. & Van Leuven, S. (2017). Investigating ‘Churnalism’ in Real-time News. In The Routledge Companion to Digital Journalism Studies, edited by B. Franklin and S.A. Eldridge II, 117–125. London: Routledge.

Weedon J, Nuland W, & Stamos A. (2017). Information operations and Facebook. Menlo Park, CA: Facebook. https://fbnewsroomus.files.wordpress.com/2017/04/facebook-and-information-operations-v1.pdf

Woolley, S.C. & Howard, P.N. (2016). Automation, algorithms, and politics: Political communication, computational propaganda, and autonomous agents – Introduction. International Journal of Communication, 10: 4882-4890.

Woolley, S.C. & Howard, P.N. (2018) Computational propaganda: political parties, politicians, and political manipulation on social media. Oxford: Oxford University Press.

Woolley, S.C. (2014) Spammers, Scammers, And Trolls: Political Bot Manipulation. https://www.oii.ox.ac.uk/blog/spammers-scammers-and-trolls-political-bot-manipulation/

Woolley, S.C. (2016). Automating power: Social bot interference in global politics. First Monday, 21(4). https://doi.org/10.5210/fm.v21i4.6161

York, J.C. (2011). Syria's Twitter spambots. The Guardian, 21 April. https://www.theguardian.com/commentisfree/2011/apr/21/syria-twitter-spambots-pro-revolution

Young, M.L. & Hermida, A. (2015). From Mr. and Mrs. Outlier to Central Tendencies. Digital Journalism, 3(3): 381-397.

Zweiri, M. (2016). Iran and political dynamism in the Arab World: The case of Yemen. Digest of Middle East Studies, 25(1): 4-18.
