Digital, Culture, Media and Sport Committee
Oral evidence: Influencer culture, HC 258
Thursday 25 November 2021
Ordered by the House of Commons to be published on 25 November 2021.
Members present: Julian Knight (Chair); Kevin Brennan; Clive Efford; Julie Elliott.
Questions 299 - 362
Witnesses
I: Dr Stephanie Alice Baker, Senior Lecturer in Sociology, City, University of London; Dr Giovanni De Gregorio, postdoctoral researcher at the Centre for Socio-Legal Studies, University of Oxford; and Abbie Richards, Science communicator.
II: Dr Robyn Caplan, Senior Researcher at Data and Society Research Institute; Becca Lewis, Graduate fellow and PhD candidate at Stanford University; and Sara McCorquodale, Chief Executive and founder of CORQ.
Witnesses: Dr Stephanie Alice Baker, Dr Giovanni De Gregorio and Abbie Richards.
Q299 Chair: This is the Digital, Culture, Media and Sport Select Committee and our latest hearing into influencer culture. We have two panels before us today. Our first panel consists of Dr Stephanie Alice Baker, senior lecturer in sociology at City, University of London, Dr Giovanni De Gregorio, postdoctoral researcher at the Centre for Socio-Legal Studies at the University of Oxford and, via Zoom, Abbie Richards, science communicator, influencer and researcher. Abbie, Giovanni and Stephanie, thank you very much for joining us today.
I am going to start with Dr De Gregorio. What legal protections do influencers have with freedom of expression on the internet? Are these different to those of everyday internet users?
Dr De Gregorio: This is a good question to start with. We should distinguish, because influencers have different kinds of legal protection. One of the questions is about the protection of free speech that influencers can enjoy when they share their content online. It is important to distinguish the way in which they share commercial expression, so the boundaries between commercial speech and political speech. Commercial speech is regulated and addressed in a certain way in terms of how freedom of expression is conceived. The way in which we regulate advertising, for example, is one thing and the way we try to regulate political speech is another thing.
The problem is that the boundaries between commercial speech and political speech in the influencer market are increasingly blurring. Influencers are going political, for good or bad; it is not that they should not go political, of course. The risk with that is that they are protected as political speech when they address topics in the public interest. So when we think about how to regulate commercial speech, the question is about what we usually call the magnetic effect—the attraction of commercial speech inside the realm of political speech. Regulating political speech is different from regulating commercial speech. If influencers share content that can be considered political speech but at the same time comes from advertising, because they are advertising products, how can you tackle the problem of commercial speech that is also political speech?
The big struggle is that when influencers address questions in the public interest or talk about Covid-19 vaccines and vaccination, for example, they are exercising their political right, their political speech. Regulating this political speech is very hard when at the same time, in a YouTube video or on whatever platform, they are advertising a product. It is not easy to distinguish when a video is just political or just commercial. This raises a lot of questions about how to draw the line, what the threshold of regulation is and what the degree of free speech protection is for an influencer’s individual expression, whether it is commercial or more political.
Q300 Chair: So you can say what you want unless you are trying to sell something?
Dr De Gregorio: It depends, because this leads to a different kind of protection. The most important thing is how a platform reacts to these different expressions, and we have seen that platforms have adopted policies, especially during Covid-19, to tackle disinformation coming from influencers. We have seen the problem of disinformation for hire, where influencers have been hired to spread disinformation. It is quite well known and has been shared on the news and so on. A big problem is also how to tackle influencers that go just political but are paid or receive money from organisations that are totally unaccountable or impossible to scrutinise in the public debate. This is a big problem. It is important to understand the platform policies, not just the public policies, when addressing influencer marketing.
Q301 Chair: That brings us on to a second point about whether or not community platform guidelines are sufficiently clear for influencers to follow. Dr Baker, I saw you nodding when Dr De Gregorio was talking. Do you have anything to add on this?
Dr Baker: I think your point about community guidelines is an important point. Often we talk about the dichotomy between freedom of expression and censorship, which is a useful rallying cry for influencers not only to attract followers but to mobilise people to spread their message further. We see this very effectively in the anti-vaccine community. I will say two things about this.
As Renée DiResta said, we have freedom of speech but we don’t have freedom of reach. The tech platforms can do a lot to limit the amplification of misinformation and disinformation campaigns, and I would highlight not just strategies of limiting shares, which have been very effective, or introducing labels; the labels also need to be very clear. I don’t know if you have experienced them, but sometimes when you click on a label you are given a very long article, and I think it is very unlikely that users will read the whole thing. I think that some of the users are earnestly asking questions about the efficacy of vaccines and boosters. It would be very useful if we could ensure that they are not just labels but very simple, clear labels that visually express clear, reliable data.
Another aspect that has been very useful in limiting the spread of misinformation and disinformation is disabling trends. What I mean by that is preventing hashtags, which are not only a way of discovering a message but a way of bringing together various voices under common narratives. Platforms can do that. The problem we have seen is that many platforms have said, “We are disabling hashtags”—for example with “Plandemic”, the conspiracy theory film—but all you need to do is search on Twitter right now for #plandemic and you will find a series of claims all assembled under this hashtag that suggest that Covid is a scam. Yes, they have removed and disabled “Plandemic” the movie, but you see that there are always more covert ways that users can undermine these strategies.
As a second point, we need to move beyond the dichotomy of freedom of speech and censorship by realising that a lot of the techniques that especially anti-vaccine influencers and cancer frauds are using online are very predatory.
Chair: Did you say cancer frauds?
Dr Baker: Cancer frauds. If you go on to Facebook now—because of the design features of the platform, you have aspects like Facebook pages and Facebook groups—all you need to do is search groups around cancer. You will find groups entitled “The truth about cancer” and in there you find desperate people searching for a cure, suggesting that they are in the final stages of cancer or their loved ones are in the final stages of cancer. This month I have taken a screenshot where a member of the disinformation dozen is on one of those pages suggesting that chemotherapy does not work and instead to buy a docuseries. How is it that six years after Belle Gibson, an Australian wellness influencer, was exposed as a cancer fraud we still have this type of activity on social media? We speak so much about anti-vaccine influencers but this issue is much broader in the domain of health in which I work.
Q302 Chair: If they are making money from this—and I presume they are making money—that would essentially be a criminal offence if it was offline. I will give you an example. I remember a case many years ago of someone who was in the final stages of pancreatic cancer and was sold some quack medicine by an individual. His mother took that individual to the police and they were given a custodial sentence for selling this box of tricks to the poor person in the very last throes of pancreatic cancer. In some ways this is a less invasive means of defrauding people but at the same time it is much more amplified and potentially more damaging.
Dr Baker: More amplified and more targeted. We always talk about amplification with influencers and that is because many influencers have large online followings. They are able to take a fringe claim and spread it through their network of followers. Often the media or, in the domain of health, scientists and medical professionals acting in good faith try to debunk the claim and so it is given further oxygen and amplification. That is quite well documented, but part of what makes influencers so effective is the way in which they can target specific communities. Often we think of influencers as having very large followings, but many of the most successful influencers have very small followings and what makes them so effective is that they use a variety of self-presentation techniques such as appearing authentic, accessible and relatable. This means that they tend to be highly trusted and admired by their followers.
We see this manifest in the health space with, for example, anti-vaccine influencers targeting vulnerable groups. We see this with racial minorities who may already have doubts about authority and public health authorities, but we also see it with anti-vaccine influencers targeting new mums who want the best for their children, or they will target holistic health groups. I think this targeted aspect of influencer marketing needs to be understood as having a similar impact to amplification. It is not just about the size of audience reach, it is about the actual trust and persuasive mechanisms that an influencer has, even if it is just fostering doubt.
Q303 Chair: Thank you. Abbie, I saw you nodding your head. You are not only a researcher but an influencer as well. I imagine that you have quite an interesting perspective on this. What do you have to add to the debate?
Abbie Richards: I echo everything that was just said, especially the monetisation, the financial incentives and the power that is gained when having a platform. Whether it is one of the smaller, more niche communities that are super dedicated to you or a big following of people, it comes with a lot of benefits. The trust that is built and the way that your followers view you after you have established this personality online enables you to—first, you get monetisation on certain platforms. On YouTube and TikTok, for instance, you get monetisation through ad revenue, so you get rewarded for views. If you create clickbaity misinformation that goes viral you will be generally financially rewarded, but on top of that once you have built a large following you can get sponsorship deals. You are rewarded for it in that sense as well, so you can start selling other people’s products or your own products. A lot of financial incentives come along.
Q304 Chair: Is there an incentive, Abbie, to become more and more outrageous and drive your followers deeper into—
Abbie Richards: I have seen, especially on TikTok, which is my area of research, a creator who posts a conspiracy theory video, and those do very well on that platform. They are very engaging, they increase watch time and they perform very well. Their followers then ask them for more, and that creator is now in a position where, if they want to continue building their platform—continue building the audience, with all the power, money and, quite frankly, validation that would come with it—they would have to continue pushing out more and more outrageous conspiracy theories.
Chair: Thank you.
Q305 Kevin Brennan: Welcome everybody. Should public figures be moderated differently on social media platforms?
Dr De Gregorio: This is another important question and there is a debate about that. When we look at platform policy it is not easy to see whether a platform moderates users and political figures or public figures equally. There is always a balance between freedom of expression and other rights at stake. Probably political and public figures deserve to have more space online to share their points, but at the same time the question is whether influencers are sometimes public figures so their content should not be moderated. If we think about public figures, like political candidates for example, the question is how you draw the line when sometimes even influencers nowadays are trying to propose themselves as political candidates. They are trying to do that anyway.
Q306 Kevin Brennan: Is it possible to draw a line?
Dr De Gregorio: It is not easy.
Kevin Brennan: Is it possible?
Dr De Gregorio: It is difficult to say. I do not think it is really possible, because it is an assessment that should be done case by case. It is necessary to assess the type of statement and the individual influencer, so it is very difficult for the law to draw a general line. The primary point is defining what is commercial and what is political. For public figures, it is important to have more space to talk, but having more space to talk probably leads to having more responsibility. Their content on social media could be moderated in a different way.
Q307 Kevin Brennan: Donald Trump had his Twitter account shut down when he was a politician. Is he now a politician or an influencer?
Dr De Gregorio: What is the difference? These are good examples to show that there is not so much difference between a politician and an influencer. Global influencers have millions of followers, and those followers could be like voters; when followers express their likes, it is pretty much the same. It is a matter of legitimacy to speak and it is connected to the idea that they feel entrusted to share more content or to make promises to their community. It is similar, in a way, to what happens in a political debate. The problem is the incentives. An incentive for a political candidate is to be elected; for an influencer, probably, it is not to be elected at this time, but to make money.
The problem is to understand where the money comes from. In the field of politics, in an electoral campaign, we have regulation for disclosure of public funds, private funds or whatever, but what about in this field? What if tomorrow an influencer gains so much consensus online and then proposes themselves at the political level? Of course they would become a political candidate, but the problem is that there has been so much unregulated space.
Q308 Kevin Brennan: Does that mean that, in practice, for practical regulation you could draw the line only when someone has declared themselves for political office and from that point they would fall into a different category?
Dr De Gregorio: Yes. This is a very formal requirement because at the very end the burden is on the influencer to disclose, and at the same time you need someone to check or have oversight on whether the disclosure has occurred or not. It is true for influencers that millions of users could say, “You have not disclosed the content” so it is quite easy to understand when this is not happening, but the problem is that micro-influencers are increasingly a larger part of influencer marketing online. Micro-influencers are sometimes just on some platforms but they are able to shape and move communities across platforms at the same time.
Q309 Kevin Brennan: Dr Baker, do you have any thoughts on this?
Dr Baker: I do. To answer your original question, yes, I think that public figures should be monitored, primarily because of their potential audience reach and capacity to impact. There was discussion about Donald Trump. Often with influencers we can tell you so much as researchers about how misinformation and disinformation spread, but we cannot always tell you the impact of that, what happens offline. However, especially with Donald Trump’s messaging during the pandemic, there have been several noteworthy studies that have looked at the correlation between his advice about, say, hydroxychloroquine and prescriptions during the same period. You can see that while we can’t necessarily make a causal link, there is a correlation between what some of those I would term political influencers are saying online and encouraging their followers to act on, and what is occurring in the real world.
Q310 Kevin Brennan: Abbie Richards, Twitter already applies different protections to content that fits its definition of public interest. Do you think the platforms themselves like Twitter should be making the decision on what content fits the public interest?
Abbie Richards: Not necessarily. It is a big question, but I don’t know if they have the best intentions all the time. If they are fundamentally driven by profit, will they always make the best decisions? One of the bigger issues I have found is that they might create these policies but the problem is often with enforcement. They might say, “Misinformation about vaccines is against our community guidelines” but they are unable to enforce that. Those are meaningless words if they are not going to act on that.
Q311 Kevin Brennan: On that point, I searched the hashtag that Dr Baker mentioned earlier and she is quite right, there is a whole raft of stuff under that particular hashtag. I am not going to advertise it again. They may say that it is their policy but it is there, isn’t it? What can be done about it?
Abbie Richards: It is there, and there are countless variations of it. This occurs with any offensive hashtag or misinformation-adjacent hashtag. There will be the original hashtag and even if that is removed, all the adjacent ones will stay up. A lot of the time they use what is called leetspeak, which is using numbers or different ways of spelling things to evade bans. Those hashtags also get millions of views. Different variations of spelling, tricky ways of spelling it out differently so that it can evade bans, is another key problem that we see.
Q312 Kevin Brennan: Dr Baker, do you think that the platform community guidelines are applied consistently and fairly? Is there an issue with how they might affect the diversity of the influencer community, particularly representation from marginalised communities?
Dr Baker: To answer your first question, I don’t think they are enforced fairly or consistently. If you read all of the community guidelines across the major tech platforms, on the surface they seem as though they are dealing with the issue but they are not. I will give you an example of that. First, there is a lot of ambiguity in what the community guidelines are referring to. As Abbie just mentioned, there are many ways in which savvy influencers are able to avoid AI detection. This can occur with the hashtag I just gave you an example of, with QAnon using numbers instead of letters. Other ways that we see effective disinformation and misinformation campaigns operating are through more covert strategies. Examples of that include asking questions—“I am just asking a question”. The aim is not necessarily to persuade; it is to cause doubt. Another effective strategy is, “I am sharing a personal anecdote or an anecdote about a friend”. We have just seen the disastrous effects of Nicki Minaj sharing a tweet about her cousin’s friend and the effects of the vaccine, and there is not only her sharing that with her network of followers but the media amplification that follows that.
In addition to these more covert strategies, we see attempts to hijack the meaning of a cause through co-opting existing hashtags that have genuinely good causes. We saw this a lot with Save the Children and QAnon, and I have also seen this a lot with the anti-vaccine movement and especially targeting marginalised groups. This leads to your second question. I get very concerned because you see these influencers who wield a huge amount of power, not only in their audience reach, targeting the more marginalised groups and, again, it is very predatory. It is no accident when you look at the ways in which they are targeting people who are more vulnerable and already more distrusting of authority.
Q313 Kevin Brennan: Thank you. Dr De Gregorio and Abbie, do you want to add anything to that?
Dr De Gregorio: Yes. I would like to add that it is important to focus on the role of the platforms, the role of social media especially. The space where influencers share their content or advertise their content is governed by rules defined by social media. It is very connected to the first question about legal protection for influencers speaking online. The answer to this question comes from the way in which social media define the standard of protection of free speech online. This is very connected to the question of consistency. Research shows how community guidelines and terms of service change continuously across the years and also are not applied consistently. Why? There is a reason why, because the problem is about content moderation. Platforms are sometimes not aware of how influencer content or general content is moderated because it is done by AI. Platforms can’t always check or have oversight of the moderation by AI because of the problems of AI, like biases or black boxes. This raises the question of how a platform could apply its terms of service consistently if the moderation of content is by an AI system.
Abbie Richards: I absolutely agree with everything that was just said about AI moderation, but another problem is that there is no transparency over what the moderation looks like. When you report something, what exactly is the system? Who is looking at it, what is their expertise and how do we know that they will make the right call? An app like TikTok is a complete black box when we ask what that system looks like, and whether there are real humans making these decisions. It often feels like there are not, and there are a lot of instances of the AI being racist and often ableist. Often you get videos that contain hate speech, hate symbols. When creators of colour, marginalised creators, call them out and point out how they are dangerous, they are the ones who are then banned and their accounts are punished. It does not really make sense and transparency would clear that up.
Q314 Kevin Brennan: Artificial intelligence is sometimes prejudicial dumbness. Is that what you are saying, essentially?
Abbie Richards: Yes, it does reflect our existing biases. It is not immune to those.
Chair: That is fascinating.
Q315 Julie Elliott: Good morning, everyone. You have alluded to this a little bit, but I will come to Dr De Gregorio first to ask how you would define misinformation and disinformation, particularly relevant to influencer culture. Does the trusting relationship that influencers have with their followers make it very difficult to address disinformation and misinformation?
Dr De Gregorio: The answer to the first question is that we have seen how different countries or even platforms have struggled with finding a definition of disinformation or misinformation. We have different research and studies, and a lot of scholars are working on trying to define disinformation. For example, the European Union has defined disinformation with a multidimensional approach and explains how defining disinformation means also understanding who the actors are and what the incentives are for sharing disinformation. This does not involve just the influencers but involves platforms and even states. It is not by chance that states or even governments in some countries, for example in Africa, are starting to use different strategies for using social media to spread disinformation.
Q316 Julie Elliott: Do the social media platforms incentivise spreading this kind of disinformation?
Dr De Gregorio: I do not want to generalise here. The business model of especially the large social media platforms is based on advertising revenues. This pushes forward a culture of virality. It is not important which platform we focus on. Of course there are different rules, different nuances, but at the very end platforms profit from advertising and advertising creates a paradox in the logic of content moderation. Platforms are interested in providing spaces where the user can stay online more—the more users stay online the more they can attract data and advertising revenue—but at the same time they also want some content to stay there, for example disinformation, because this content can increase virality and the money that platforms can attract from advertising.
The question is how we define disinformation. Let’s look at the business model of these actors, of online platforms. The answer, especially for a constitutional democracy, is not just about defining disinformation. Probably no one wants to define disinformation, because we think it is an important thing for democracy in thinking about the marketplace of ideas. At the same time we need to understand how to increase pluralism online, because it is very connected to transparency. There is not just one definition of disinformation, especially for democracies—although not just for them, of course.
When we move to the relationship between influencers and their communities, this depends sometimes on the platform, because the influencer will use a different strategy depending on each platform. There are platforms like TikTok or Instagram, where a story can go online straightaway, and YouTube, where there can be more editing and so forth. You can plan how to share content if you are not going live, for example.
At the same time, the most important thing is that the connection between influencers and their communities in spreading this information is very much influenced by the interdependency that exists between influencers and online platforms. We cannot say that influencers are part of the gig economy, but sometimes their monthly income comes just from social media. This is a very important point because the relationship with their community is mediated by what is, in this case, effectively the law of the platform. All the incentives that influencers have to keep their communities online come from the rules of the game that platforms establish, for example which hashtags you can use or how much virality you can get by sharing certain content rather than other content.
Q317 Julie Elliott: Abbie, you are nodding. Would you like to comment?
Abbie Richards: I agree with all that. I don’t know if there is too much that I need to add. A definition of misinformation and disinformation is absolutely tricky to find, but I also want to highlight the paradox that Dr De Gregorio was pointing out, of it being financially incentivised to continue getting those views. Yes, that is it.
Q318 Julie Elliott: Thank you. Dr Baker, you have talked quite a bit about health in this area. Did Covid-19 expose new benefits or risks in influencer culture?
Dr Baker: It did both. It is important to highlight that while this session is particularly looking at misinformation and disinformation and includes more nefarious interests, there is a lot of potential for very useful marketing with influencers on social media. We saw this during the pandemic. An example is Joe Wicks, a fitness influencer, who for many parents who had children off school was a great source of comfort and assistance.
Julie Elliott: He was a lifesaver, I think, if you ask some of my children with children.
Dr Baker: I don’t want to walk away from this session with a kind of moral panic about influencers en masse, but to return to your question, I think that we have learned important lessons from the pandemic. Misinformation and disinformation are contested terms, but we speak about them in a general way that is important to highlight. When we talk about misinformation, we tend to be referring to false and misleading content that the person sharing it genuinely believes to be true. Disinformation, conversely, is intended to cause harm. Intention is key. The reason that is very important is that while we don’t always know the intention of an influencer, we can recognise patterns by studying the ways in which they communicate with their audiences over time.
I think anyone working in this space isn’t trying to suggest that platforms should be responsible for removing every piece of misinformation—not at all—but platforms need to be informed about, very aware of and enforcing the removal of these frequent super-spreaders, or of disinformation producers who, even if they don’t have a relatively large following, are repeatedly trying to spread disinformation. Again, that comes down to very clear common community guidelines.
Q319 Julie Elliott: How does misinformation intersect with existing influencer communities, for example wellness communities, on platforms?
Dr Baker: Misinformation and disinformation tend to spread quite differently. Misinformation, which is when an individual genuinely believes it to be true, can be an influencer discovering a post on Facebook or Twitter, for example, resharing it and then it is amplified, not because they are intending to cause harm but because they genuinely just want to ask a question. We saw a lot of this with celebrities with 5G early on, where I don’t think the intention was to deceive or harm but, by asking the questions they did, that was the end result.
Disinformation is quite different and we see quite different tactics. There is amplification, but again going back to what I previously said, there is this micro-targeting of groups. I mentioned asking questions, sharing personal anecdotes, hijacking hashtags. In health and wellness, which you were asking about, we see what I would refer to as the Pied Piper effect, where you have some wellness influencers who, for the most part, share very generic innocuous health advice—eat more fruits and vegetables, sharing ways for so-called clean eating, sharing exercise tips—and every so often they will integrate some kind of conspiratorial post. In my field, they will encourage users to go on to another platform that is less regulated and when you go there, it is almost as though you are following a different user.
This is why one of the biggest recommendations I put forward today is for a much more rigorous cross-platform approach. Misinformation and disinformation don’t operate on single platforms. We speak so often about Facebook, and I know the platform is far from perfect. This is a much bigger problem than just Facebook. The spread of misinformation and disinformation is on the major tech platforms. Some disinformation is on Facebook but some who have been banned from Facebook are still on Instagram, despite it being owned by the same company.
David Icke, who has been banned and suspended from all the major platforms, is still spreading his conspiracy theory films, which you can buy on the streaming service, Gaia. If you go to Amazon, some of the best-selling books in the health section are anti-vaccine books. This is a much bigger problem where we need to think about the larger information ecosystem that these influencers are working in, rather than just the common criticism towards some of the major players on social media.
Q320 Julie Elliott: That is a very interesting point. Some of the things you have mentioned I had not heard of, but clearly are out there. Practically, if we look at how we move forward, how do we identify the misinformation and disinformation that is on legitimate platforms and legitimate sites? How could we suggest regulation works in that area to spot it?
Dr Baker: I do have suggestions. I think if platforms are very serious about tackling misinformation and disinformation, just like they came together at the start of the pandemic where several of the major tech companies pledged to combat misinformation and disinformation and to elevate authoritative sources, they could come together and create very clear community guidelines to combat the spread of misinformation and disinformation. The idea is that by having consistency across the major platforms they could be regulated by an independent oversight board.
Where platforms fall short—and I gave you the examples at the start of this inquiry—the oversight board could alert them to this and they could have a specific timeframe to address these issues. I suggest that instead of platforms marking their own homework—I don’t know if you have seen the transparency reports; they are not very transparent—we could have the oversight board independently publishing these results without breaching privacy issues of the specific users in question.
I also suggest specifically on misinformation and disinformation that we need to think much more broadly than news, because most disinformation is not news. Most disinformation is images, videos taken out of context, memes. When you repeatedly see the same memes accusing health authorities of acting against the public interest, that has the effect of encouraging some people to believe that there is a semblance of truth to it. We need to think beyond news claims to the more covert strategies that influencers are explicitly using to avoid AI detection.
Finally, one suggestion I have includes verification. When we talk about verification, platforms suggest that they are not endorsing individuals; they are just using it to highlight the authenticity of the individual user. I think that is a mistake, because if you went out there now and asked the public what they think that symbol stands for, there are many who would read that symbol as a sign of credibility and legitimacy. I remind you that David Icke was verified on Twitter until the end of 2020. We still have members of the disinformation dozen who are verified on Instagram.
I suggest that in the area of health that I specifically work in, why not create a separate verification symbol, where we could verify medical professionals? That could be used as a sign of credibility with very rigorous standards cross-platform. That is just one example. If platforms are very serious about tackling misinformation and disinformation, there are very clear ways that they could implement policies now that would tackle it. Again, this would be an iterative process, because technology changes.
Abbie Richards: I want to speak to what Dr Baker said about communities and the communities that are very primed for misinformation to spread. Her example was health and wellness, but there are a bunch. We see it a lot in the far right, in the spirituality community, in people who enjoy horror and true crime. Those communities are very primed to consume misinformation and people build entire platforms by exploiting that.
On the other hand, there are people in some of those communities who want to stop that, and it is about utilising the people who are members of that community. The spirituality community is a very good example of this. There are people who are using crystals or are very into meditation or whatever it is, and that can go in two directions. You have the spirituality community split over misinformation and certain influencers do a very good job of combating it within their communities. I think that there is a lot of room for working with them, in platforming them and in using their influence within the community to help guide that community.
Q321 Chair: But that takes an editorial decision, doesn’t it?
Abbie Richards: Yes. Within those communities, just because some of them are more likely to be exposed to it and are more likely to buy into it, but there are also more niche conversations happening within those communities where you could be combating it.
Chair: Thank you, Abbie. Kevin, you wanted to come in on a point about misinformation, I think.
Q322 Kevin Brennan: Yes, thank you. Something Dr Baker said made me think about the distinction between disinformation and misinformation. As an example, in 2020 the current Secretary of State for DCMS retweeted a clip—you are familiar with this story—of the Leader of the Opposition. The clip suggested that he had been reluctant to prosecute grooming gangs when he had been the Director of Public Prosecutions, but the video had been doctored to make it appear that that was the case. The Secretary of State—she was not Secretary of State then—retweeted it and I think the comment that went with it was “interesting” or “revealing”. Is that an example of someone spreading disinformation or misinformation?
Dr Baker: There are two aspects there. Where a person has doctored the material, that is clearly disinformation—they know that they are manipulating media for a certain end and they don’t believe it to be true—but for the person spreading it, it could easily have been misinformation. There is very clearly a solution to that. We saw Facebook, for example, during the pandemic alerting people who had shared—whether it was knowingly or unknowingly—misinformation and disinformation by tracking them. In that case, it would be a very clear strategy to let people who have spread it, perhaps with the best intentions, know that it was false.
Q323 Kevin Brennan: I don’t think it was with the best intentions, but should publicly elected officials be held to a higher standard on what they retweet when a video appears in a timeline and so on than others?
Dr Baker: Yes, absolutely, because with those positions of power, whether it is the size of your audience or the status that you have, there is responsibility. There are a couple of ways that this can be approached. One is through better education because, as you said, things may be spread with possibly harmful intentions, but there are very many times when people unknowingly spread misinformation. If there was more education for public figures and influencers about the risks of spreading misinformation and disinformation, it could be included in the community guidelines. Many of the mid to top-tier influencers have managers, and they could be educating them. It is not true for the smaller influencers, but certainly for those who have a huge audience.
Kevin Brennan: For the record, I should say that the tweet was later deleted.
Q324 Clive Efford: Welcome. Thank you for coming to give evidence today. I want to ask about artificial influencers, how brands benefit from the use of artificial influencers and how virtual influencers affect diversity, representation, pay, equality and influencer communities. I don’t know which one of you would like to start, but I would be interested to hear your views.
Dr Baker: I am happy to start. Virtual influencers are increasingly having a significant impact on influencer marketing. We now see virtual influencers, such as Lil Miquela, with millions of followers. You asked what is appealing about them and I think one of the most—
Clive Efford: What is appealing to the people who are using them.
Dr Baker: Yes. There are a couple of things. With a virtual influencer, you could use a pre-existing influencer, such as Lil Miquela, with millions of followers, but you could also create a virtual influencer from scratch whose entire narrative is used to market your brand. Then you are able to integrate the message and the product through the storyline that is attached to that influencer in ways that are not possible with other products or services. Another big benefit for marketers using virtual influencers is the degree of control they get. They can monitor what messages will be shared. There is always a risk, whether you are using a celebrity endorsement or an influencer, that there may be a scandal that will affect them or they may go off script. With a virtual influencer, you have much more control. But any influencer, whether it is a virtual influencer, a non-human influencer or a human influencer, should be subject to the same standards.
Q325 Clive Efford: But is there less accountability for the message if you use a virtual influencer?
Dr Baker: Absolutely not because, if anything, the message is more controlled.
Q326 Clive Efford: If I wanted to misinform people or direct people in a certain way, maybe even to purchase something or to influence their thinking, am I less accountable for what I put out if I put it out through a virtual influencer?
Dr Baker: I suggest not, because I think that when it comes to defining an influencer the ASA makes it very clear that anybody who uses social media to advertise a product is an influencer. I don’t think that is limited to humans. I think that can include virtual influencers or somebody who uses a dog or a cat, as we have seen previously.
Q327 Clive Efford: Should it be clear? Knox Frost was used as an artificial influencer for quite a long time before it was revealed that it was an artificial influencer rather than a real person. I think there have been other examples of that. Should there be something like a watermark? If you put an article in a newspaper and you pay for it to be there, it gets labelled as an advert. Should we have something similar for artificial influencers on the internet?
Dr Baker: Absolutely. It should not be up to the discretion of the company to decide what that is. It needs to be a consistent standard because you are asking consumers to be able to identify what that is. If it is the term “virtual influencer” that needs to be consistent.
Q328 Clive Efford: I will come to the other two if you want to comment on this, but I will just go through the conversation with you first. Are there any areas where artificial influencers should not be used, for instance in the area of politics where you might want to direct people in an extremist way by creating a persona that puts out the views of an artificial Donald Trump or someone like that? Are there any areas where you would say that artificial influencers shouldn’t be used?
Dr Baker: I don’t think it is about the individual who is disseminating the message. I think it is about the message that is being disseminated. It is the content that should be regulated. If you look at the personas that people have online, it is just that; it is a persona, whether it is an artificial influencer or a human.
Dr De Gregorio: May I add something? I think this allows us to underline a very important point. Sometimes influencers are tools for companies or even for governments, who pay them. I have already mentioned the case of disinformation for hire. Research in Kenya has shown that you need probably just $15 to pay someone to share Covid-19 disinformation. Of course there is a political economy behind this organisation that wants to use influencers for spreading disinformation.
Even when we have virtual influencers, this underlines that the most important point when we want to focus on the influencer market is to understand who is behind the screen, not just the influencer. Sometimes the influencer, as a cultural entrepreneur, could have incentives to share with their community a review about a new game. Okay, that is fine, but the problem is especially when influencers go political, and we go back again to the first questions about political speech. There, we should probably focus on understanding who has backed the influencer, because it is there where they make the difference—where they spread disinformation—not, of course, when we are reviewing gaming. That raises other issues, for minors or whatever, but especially for disinformation and democracy it is important to understand who is financing the campaign. The question is whether there could be some virtual influencers that are not allowed, even deepfakes, for example.
This is a question again about whether, even using human influencers, we will be allowed to spread certain speech. It is not just, at least in my opinion, from a technical perspective whether we use a message or whether we use a person, because I think in the end the problem is not just regulating disinformation. No one wants to say what is true and what is not. The most important thing is to regulate the actors and the incentives around disinformation. This is where the game is in influencer marketing.
It is not just about regulating technology or regulating speech in this case, because regulating speech could be dangerous. Also, a lot of misinformation and disinformation is usually opinions. You cannot fact-check that because opinions are opinions per se; they are not facts or events. In this case, focusing on influencer marketing requires focusing on who the actors are, which is usually hidden, because they are sometimes government or sometimes companies—or who knows?
Abbie Richards: As an influencer in this space, I have been approached by marketing campaigns aimed at pushing a specific political agenda, offering me money to participate in some sort of campaign, whether or not I agree with it. There are good questions to be asked there about whether or not we want influencers taking money to participate in that, and how different that is from participating in other market economies.
In addition to that, when we talk about virtual influencers and the influencers who are created to spread misinformation, it overlooks the majority of influencers who are real humans who organically are often spreading misinformation because they truly believe it. I think that there is a lack of resources for influencers. Essentially, we are in this weird world where suddenly a lot of people have followers who trust them and see them as a reliable source of information, and that could have happened very rapidly for them.
In the span of a year or a year and a half, I went from being nobody, with no followers, to somebody with 500,000 followers who listen to me. That is a lot of responsibility and there are no resources for influencers to learn about how to prevent the spread of misinformation on their platforms and how to cope emotionally with having that many people looking at you all the time, especially when the rise to fame is so sudden. Having resources for the humans behind this, where they get their information and where they are behind the scenes talking about issues would be very helpful.
Q329 Clive Efford: Would you say that it is time for a set of guidelines for influencers, but also a code so that people can identify what the source of the information that they are being given is, so that it is verifiable?
Abbie Richards: Tools for that for sure, like tools for sharing high quality information on your platform and being a responsible influencer. I think that a lot of people want to do that and don’t know how.
Q330 Clive Efford: How would that work?
Abbie Richards: It might depend on the platform, and certainly communities have arisen organically already. I know that on TikTok there are several communities of creators who are constantly in contact with each other sharing information, trying to decipher what is true, “Is there misinformation going around on the platform? How can we address it?” It is having support and funding or advice from professionals in these spaces. If you are an influencer, the option of having somebody who understands vaccines speak with you to make sure that you are not sharing poor quality information on your platform would be very helpful.
Dr Baker: I have something to add to that. I think there are a couple of ways that influencers could be supported in this area. One is establishing a set of common guidelines, not specific to each platform but cross-platform, that are used to assist influencers and to prevent them from unknowingly spreading misinformation and disinformation. On top of that, as I mentioned, the mid to top-tier influencers all have management. The management could also be reinforcing these guidelines, so it would not just come from the social media companies; it would also come from management, who would be instructed on the common guidelines. I can’t emphasise enough how much I think they need to be common guidelines. If we start having every tech platform introducing its own guidelines, it gets very confusing and I think it is less likely to be effective.
Q331 Clive Efford: Are they guidelines or are they enforceable rules?
Dr Baker: I think both. The only difference I would suggest is that the guidelines are more informative. The rules tend to be very much instructive and I think the guidelines could indicate a little bit more about how influencers could be unknowingly duped into spreading misinformation.
Dr De Gregorio: I think it is very relevant, because providing tools and guidelines to influencers, like a code of practice, has also been mentioned. Guidance is not binding. Guidance could be like self-regulation rather than a system coming from the top. I cannot speak to the ethics of influencer marketing, but it could be important to have some rules about when you have to disclose and when you do not, without being prescriptive. Some countries, even around Europe, have tried to have codes for influencer marketing. Even in the US, for example, the FTC has worked for years on providing these guidelines, so it is interesting to look at other experiences from a comparative perspective to see what we could have in common. I agree with the idea of having a common charter or code of practice for influencers that is not binding.
If it is binding, we should think about the enforcement of the rules, because that is where the problem is. Even if we ask influencers to disclose, the problem is who will enforce these rules—Ofcom or another agency, or even a court. That would require a case-by-case assessment because there are a lot of influencers online, and even micro-influencers. It is impossible to monitor all the influencers at the same time. We need the bottom-up pressure where people probably will flag some influencers. For example, Ofcom could add procedures, because otherwise it is impossible, even for an independent ministerial authority, to check what is happening online and see whether certain influencers disclose or not. The problem is also how to enforce these rules.
First of all, we can do more in a co-regulation sense, looking at rules that could be agreed between influencers and independent authorities, and of course having some rules that could be binding but that at the same time come with an enforcement mechanism that can actually be applied, because it is impossible to monitor everything. One of the big problems with the online platforms is the limits on public agencies doing surveillance of content online, monitoring content online. This raises a lot of questions about freedom of speech. It relates to a question about how much the Government could impose not just surveillance but monitoring or checking what is happening online. This could affect the good side of free speech, not just the bad side, such as disinformation.
Q332 Clive Efford: Thank you. Can I come back to you, Abbie, about TikTok? You have found that the algorithms on TikTok direct people to more extreme content. Is that peculiar to TikTok? Are the algorithms designed that way to make that happen? What is going on on TikTok?
Abbie Richards: It is not necessarily completely unique to TikTok. TikTok is a new phenomenon but all algorithms have been found, for the most part, to—
Clive Efford: Correct me if I am wrong, but your research found that you would be directed to more hateful content more quickly on TikTok than you would be anywhere else. Is that correct?
Abbie Richards: Yes. On TikTok you are able to consume more content than on any other platform and more rapidly. We are looking at such short videos and they are immediately fed right into your screen. You just scroll. There is no action of clicking. There is no action of reading a title and clicking it. For the most part, you are on your “For You” page and scrolling. In our research we started by creating a brand new account, following 14 accounts that were known to post transphobic videos, and then started scrolling our “For You” page, exclusively engaging with transphobic content. We monitored the first 400 or so videos, and that resulted, first of all, in more transphobic content being fed to us but also homophobic content, misogynistic content, racist content, then conspiracy theories, anti-Semitism, calls to violence, pro-fascist content, and that having you—
Q333 Clive Efford: Was that content within TikTok or was that, as we have heard earlier on, moving you on to other platforms?
Abbie Richards: That was exclusively on TikTok. It is possible to have opened up an account at breakfast and be fed neo-Nazi content by lunch.
Q334 Clive Efford: Do extremist influencers behave in a different way on TikTok?
Abbie Richards: Yes. They are much more niche and much smaller. They do not get the amplification and the reach, but they can still syphon people off into other platforms. A lot of times they are pushing you into a Discord server or into a Telegram channel—other platforms that are less monitored than TikTok.
Q335 Clive Efford: In some extreme cases they will be directed to other platforms where there is less regulation?
Abbie Richards: Yes. A lot of the time you will get more dog-whistles, a little bit more hidden, less overtly hateful content, and then those people are pushing you into other alt platforms where they can have private conversations. In general, there is a huge problem on TikTok with its inability to monitor for dog-whistles and hate symbols in videos, user names and profile pictures. We see it all the time.
Q336 Clive Efford: Given that TikTok is owned by a Beijing-based tech company, how would the guidelines and the code that we have just talked about influence TikTok?
Abbie Richards: I can’t speak very much to TikTok’s international relations. My understanding is that its servers for the US are based in the US, and it says that all the oversight and decisions for the moderation happen there for that country. I really do not know about its—
Q337 Clive Efford: Really? Is that the same for the content in the UK? Do we have servers here that serve TikTok?
Abbie Richards: Yes, there is a TikTok UK.
Q338 Clive Efford: Even so, the extremist content that we have just been having an exchange about is still available on TikTok UK and in the USA?
Abbie Richards: Yes. You can access TikTok UK. You are more likely to be fed content that is from your own country. You are more likely to be fed content that other people like you also enjoyed, and there does seem to be some sort of location feature where it will feed you content from your country. That is not to say I cannot, while scrolling, be fed a video from somebody who posted on TikTok in the UK, Russia, Australia or wherever. You can absolutely reach those. They are not individual little groups.
Q339 Clive Efford: The point I am trying to get to is this: if we were to have a code and guidelines, is it within our jurisdiction to deal with that sort of extremist content?
Abbie Richards: Yes, I would think that you can. At least for TikTok UK; I believe so.
Chair: We are going to move on to our second panel, because unfortunately we are running a little bit short on time. I want to say to Dr Baker and Dr De Gregorio, in the room, thank you very much for joining us today. Your testimony has been very interesting. Abbie Richards, as well as saying thank you, we wish you a happy Thanksgiving in Boston. I noticed that you had some coffees going. I cannot blame you—you must have been up very early. Thank you very much. It is greatly appreciated. We are going to take a short break as we sort out our second panel.
Examination of witnesses
Witnesses: Dr Robyn Caplan, Becca Lewis and Sara McCorquodale.
Q340 Chair: This is the Digital, Culture, Media and Sport Select Committee, and our inquiry into social media influencers and influencer culture. We have our second panel today. We are joined remotely by Dr Robyn Caplan, senior researcher at Data and Society Research Institute; Becca Lewis, graduate fellow and PhD candidate at Stanford University; and Sara McCorquodale, chief executive and founder of CORQ. Sara, Becca and Robyn, thank you very much for joining us today. It is much appreciated, and gold stars particularly to those who are joining us from across the Atlantic at this very early hour. Thank you.
I am going to put this first to Dr Caplan. We have heard that influencers are often defined by their commercial relationships, and they produce content about their everyday lives. Would you challenge or expand on that description based on the research that you have carried out?
Dr Caplan: My research is primarily on YouTube, and what I studied was YouTubers’ financial relationships to the platform company and how they benefit from advertising as being part of the YouTube partner programme. I have also studied, as part of that, the impacts on YouTube creators when YouTube unilaterally changes the circumstances of the contract that it has with them, inserting new things like newer advertiser-friendly guidelines or newer ways of applying advertiser-friendly guidelines that can then end up impacting creator revenue. Part of that study looked at how YouTube creators try to diversify their revenue streams as a way to combat their reliance on YouTube as an employer.
Q341 Chair: Thank you. Becca, do you have anything to add to that?
Becca Lewis: Yes. I agree with everything that Robyn said, and I also have focused primarily on YouTube. I will add that some YouTubers and other influencers who focus specifically on political content use a framework that is essentially what you have said, their everyday lives. They are speaking personally about themselves but they are doing it in a way that highlights their own political ideals. They are speaking about how they came to hold their political beliefs, what it means in their own day to day, why they are happy with their political beliefs and why their political beliefs have changed. You end up with a hybrid format that sits somewhere between news commentary and blogging.
Q342 Chair: Sara, as well as commenting on that, how would you say we should define an influencer as an occupation? If it was a job description, what do you think it would be?
Sara McCorquodale: I think that it is someone who is creating editorialised content about their life and they are doing it every day. That is the thing. A lot of people in the early days did not understand the appeal of influencer content because it seemed very mundane, someone showing you the contents of their wardrobe or, “Come to the shop with me” or, “Listen to me comment on something that has happened in the news”. The power of influencer content is that it has a cumulative effect. For the audience, that accumulation often occurs over a period of as long as 10 years. This person is basically documenting their entire life, sometimes across several platforms, and they are doing it in a very editorial way. They are sometimes perhaps pulling out the more fun bits, the more glamorous bits. It is not exactly reality television. They are trying to entertain. I think that is probably the best definition.
Q343 Chair: Obviously there is a show aspect to it all. It is almost like your life becomes the show. Where do you think people individually can draw the line in that respect, not just in being influencers but those who engage with influencers?
Sara McCorquodale: I think that often the lines get blurred. Often the influencer is going to create more of the content that their audience enjoys because they get greater engagement and that then attracts better commercial partners or more commercial partners. It allows them to make more money and turns this job of influencer into a very lucrative career. For example, if they find that a certain type of content performed better for them than anything else, they will keep creating that content.
Sometimes it is not necessarily what the influencer wants in the end. A lot of people leave social media because they almost feel like they cannot bear the pressure of having to share these very intimate moments of their life, but that is what the audience wants. I think the lines get blurred there. I wrote a book on this and when I spoke to a lot of influencers they said, “I think that my followers potentially do not understand that this is a performance”, and it is a performance.
Q344 Clive Efford: Becca, how would you define alternative influencers and what distinguishes them from mainstream influencers?
Becca Lewis: I have particularly looked at influencers who are trying to spread political content or news or information about the world to their followers. What in particular makes them alternative is the fact that they form an adversarial relationship with what we would think of as established legacy news outlets. Not only are they telling their followers that their followers should be rejecting more mainstream or official news sources but they are also claiming that they themselves are better suited to convey information to their audience. In fact, a lot of their credibility with their audience comes from the fact that they are not affiliated with a major newspaper or a major television news channel. They take the fact that their audience can see intimate elements of their lives and feel a very strong connection with them, and they use that as the reasoning through which their audience should trust them.
Q345 Clive Efford: Is the relationship with their customers or their followers and those of a mainstream influencer significantly different? Is it that an alternative influencer has to be extreme to maintain that relationship with their followers and has to almost demonstrate themselves as being outside of the mainstream consistently? Is that significantly different from a mainstream influencer?
Becca Lewis: Not always and not necessarily, but for many people who come to adopt these ideas, the reason they feel a sense of disconnection from mainstream media is a broader disillusionment, frequently with our mainstream media and political institutions. Generally, there is some form of alienation there but it does not always make one extremist. I think that it opens up the door for other ideas to come in, so it increases the risk of someone becoming extremist.
For the alternative aspect in itself, extremism is not part of that definition. The tricky piece is that not only do they feel alienated from mainstream media, but they frequently feel alienated from the government. They also feel alienated from—Robyn can speak a bit more to this—the people running the platforms themselves. A lot of this becomes as much about rejecting these authoritative structures as it does about anything else.
Q346 Clive Efford: Do they feel alienated so much that they cannot work with mainstream commercial brands? How do they make money?
Becca Lewis: It depends. Some of them have sponsorships, although, depending on how extreme they are, some commercial brands will reject working with certain creators. Some of the creators will end up working with stranger and stranger brands. Some of them end up relying on certain—
Q347 Clive Efford: Can I come in there and ask whether they are more likely to be advertising harmful products?
Becca Lewis: Not necessarily. Sometimes they will end up working with more technological products, so things in cryptocurrency or VPNs. It just depends, but not necessarily. In fact, some of the influencers who do incredibly harmful advertisements are incredibly mainstream and not political at all. In the past year there has been a lot of controversy around influencers on Twitch advertising gambling services. I would not say that there is a correlation between how harmful the product being advertised is and how extreme the person is. In fact, many times extremists can make their money without advertising revenue. They cultivate an audience and get that audience to subscribe to them on Patreon or another subscription service. If their audience is devoted enough, they do not even need to have brand engagement.
Q348 Clive Efford: How do we regulate, if at all, how we advertise and how do we have oversight of alternative influencers? Do you have an opinion on that?
Becca Lewis: I might default and let Robyn take that, because I believe she looks more at the actual regulation side of it.
Clive Efford: Sara and Robyn, if you want to comment on anything that we have just talked about, please do.
Dr Caplan: I have a couple of things to say. First, on the previous questions to the other witness, I want to mention that there is a wide breadth of what would count as an influencer or a creator. Each creator has their own boundaries around their public and private life. I would not say that the tendency is towards more and more publicness. For a lot of creators there is a tendency towards building bigger walls around their private life and what they are actually providing to their audiences.
To the second point that you asked Becca about on the different characteristics of extremist or alternative influencers, there are lots of genres of influencers. There are beauty influencers, LGBTQ influencers and political influencers. There are horror and prank ones. Each genre has its own specific characteristics and strategies to engage with its audience. In all of those, though, there are some that are common to the platforms they are on. If they are all on YouTube they may be using some similar strategies across the board, and then within those genres they will have strategies that are very specific to their audiences. That is where you will start seeing the differences.
Lastly, advertisers know this so that is why they are going to specific genres. A shampoo company is more likely to go to a beauty influencer. Love Berry is more likely to go to a family channel. Something like a VPN is more likely to go to a genre like alternative influencers that tend to be very white and male to sell a product that they think will be useful to that person’s audience. That is where you will start seeing some of the distinctions.
A lot of regulation, at least on YouTube, is being done by the platforms themselves. They have specific rules around how content is monetised or not monetised and these rules are quite broad. They are implemented largely by automated means, so through algorithms that are doing things like parsing titles and keywords for anything that could be violating their guidelines. Then the platform basically removes any advertising from that channel or from the video specifically.
Those rules were put in place to address very specific concerns at different points in time in YouTube’s history, not really to address conspiracy theories and alternative influencers as much as you might think, but they impact users across the platform. They can impact alternative influencers but they are also impacting family channels, makeup influencers, basically everybody across the whole platform. Most of that regulation is being done by the platforms themselves.
Sara McCorquodale: On regulation, when a video is demonetised—so if the platform decides that it will not allow advertising to occur before or after or during that video—the influencers go on Twitter and they really go for the platform. There is a lot of feedback that happens all the time on the platforms, especially YouTube if it demonetises content. While the platforms are potentially trying to regulate, they also know that the people who attract users to their platform are the creators and they have to keep them there. They have to keep them happy, especially a platform like YouTube.
Some creators are definitely losing their commitment to the platform because increasing regulation is cutting into their revenues. If YouTube does not have its creators, it is Myspace; it is dead. While it is very important for the social media platforms to deliver regulation, I also think that an independent body that understands this space is very much required.
Dr Caplan: Can I add one more thing to that? I noticed in my research that it is not just the alternative influencers who have this sceptical eye towards traditional media. Most creators on the platform have it. One of the reasons is that creators see preferential treatment in the relationship between platforms and traditional media organisations compared with the relationship between platforms and creators. This kind of scepticism and this ire towards traditional media is happening across the site, mostly because of these financial relationships.
Q349 Julie Elliott: Good morning, everyone. How do social media platforms support the connections between racist, antisemitic, homophobic, transphobic or misogynistic content from various influencers? Becca, perhaps you can answer that one.
Becca Lewis: There is a range of ways that platforms do this, ranging from a lack of moderation and attention all the way to overt support and promotion of the creators who do this. As Robyn mentioned, there are a lot of ways that platforms could be regulating this, but they are not. For example, the platforms can take down videos altogether. They can ban certain influencers if they choose to. Even if they choose not to ban them, they can choose to demonetise a creator and no longer place advertisements in front of their videos. They also have more subtle techniques. They can deprioritise them in the recommendation algorithm or on their home page. They can suppress them in search results. There are all sorts of negative actions that either they are not taking or that are incredibly difficult from the outside to tell whether they are taking. Everything that I looked at—
Q350 Julie Elliott: Do you think they are taking any actions?
Becca Lewis: Yes, they are to some extent. Usually, it takes some amount of outside pressure, public pressure. Their tendency is to try to keep everyone on, because the more people who stay on the platform, the more advertising revenue for them. As Robyn alluded to, there is also a somewhat perverse set of incentives where the people who are the most popular are the ones who the platform has the most vested interest in keeping on the platform and not kicking them off. They bring in a lot of revenue and, as Sara mentioned, have the biggest megaphones that they can use to criticise the platforms. Yet those are the people with the biggest audiences and the most power. It is the most harmful when they are spreading hate speech or disinformation.
Q351 Julie Elliott: Do you think the platforms benefit from amplifying the influencers who post extreme content?
Becca Lewis: Yes.
Julie Elliott: Do you all agree with that?
Dr Caplan: I would say yes, except that many of these big creators or the people who platforms are trying to keep happy are not influencers; they are public officials. It is hard to say where we are drawing that line. A set of documents was made public by The Wall Street Journal several months ago about Facebook’s cross-check program, which showed that Facebook regularly whitelisted powerful users and public officials and those people were able to skirt moderation rules.
My research has found a similar pattern on YouTube—that YouTube tiers governance between different user groups. In most cases, the people who are at the highest end of those tiers are public officials in traditional media sources, not necessarily influencers, although influencers can go up that ladder, depending on how much influence they have.
Q352 Julie Elliott: Without naming people, what kind of people are you talking about? When you are saying “public officials”, what kind of—
Dr Caplan: The Wall Street Journal article showed how, for instance, Donald Trump was able to skirt the rules on Facebook. His content, especially one piece of content where he said, “When the looting starts, the shooting starts”, scored 90 out of 100 on Facebook’s internal algorithms but was not removed. That was because he was on a special list within the company of users whose removal the company saw as a PR risk. His content was allowed to remain on the site because, as Sara mentioned, they are worried about that megaphone.
Sara McCorquodale: Yes, and I think that definitely since the start of the pandemic we have seen public officials and, for example, mainstream celebrities start to behave like influencers. They are starting YouTube channels and TikTok channels. The idea that an influencer is someone who is entirely self-made in their fame, as opposed to coming from these enormous traditional media platforms, is quite outdated. We are in a space where the word “influencer” reaches across many different types of entertainers and broadcasters.
Q353 Julie Elliott: It is very varied, isn’t it? Do you think alternative influencers take advantage of platform design to generate visibility for their content and increase their audiences?
Becca Lewis: Yes, absolutely. As with many influencers and content creators on the platform, they have become incredibly savvy at using platforms to their advantage. For example, people can develop their own expert knowledge in search engine optimisation. Influencers become incredibly familiar with how and when their own videos get ranked highly in the algorithm and they try to build their content towards that. You will see, for example, a lot of right-wing or far-right influencers using social justice terminology in their video titles because they know that it will rank more highly in the search results of people who are curious and asking about these social justice issues. There is a lot of work going on in that sense.
The other complicating factor to all of that is that we talk a lot about advertisements and revenue driving things. There are a lot of non-profit organisations with political goals that have incredible amounts of resources and partner directly with influencers to get their message out. Often those are the types of organisations that are able to do a lot of research and devote a lot of resources to things like hiring social media managers, developing search engine optimisation techniques and so on.
Q354 Julie Elliott: Becca, if we turn to look at the process of online radicalisation, what role do you think influencers play in that area?
Becca Lewis: They play a really big role, and it is difficult because the actual data that we have on audience numbers and how viewers are consuming content is very minimal. We don’t have access to a lot of that data. We can see directly influencers themselves getting drawn further and further to the right. We can see the way that they interact with their audiences, other creators and influencers and the platforms themselves. It is quite common to see people who may have started out in a more mainstream area of politics over time getting more and more extremist at the same time as they are getting more and more popular. There are people with millions of subscribers promoting incredibly extremist ideas who were not doing that maybe five years ago. We can track radicalisation through the influencers themselves.
Q355 Julie Elliott: How effective do you think platforms are at identifying this radicalisation content and moderating it? If they are not, how do you think they can do that?
Becca Lewis: It is very difficult. I think that many of them are quite good at tracking particularly the popular ones. They know who the popular influencers are on their websites in each vertical but, as both Robyn and Sara have been alluding to, there is a real reluctance to get involved, particularly for influencers who, even if they are promoting extremist ideas, are marketing themselves as centrist and reasonable. There is a lot of strategy involved in influencers claiming that their ideas are quite mainstream, even if they are quite harmful. Sometimes those harmful ideas are quite mainstream. It is easy for influencers to claim that any amount of moderation against them is a form of political censorship against them specifically or against certain political parties—that certain ideological positions are being censored. The platforms become reluctant to act against that when they think that there will be retaliation from that perspective specifically.
Dr Caplan: There is a dynamic and platforms are very aware of this. When they remove a piece of content, that can actually amplify it. If a person already has an audience and platforms remove content, that person can say, “See, I’ve been telling you that these platforms are biased, I’ve been telling you that they are censoring us” and they provide that as evidence. That ends up strengthening the bond between themselves and their audience members. Media effects are a hard thing to prove. When we say influencers are radicalising or pushing people to the extremes of politics, politics itself is becoming more extreme. It is very difficult in any sort of media analysis to determine where that cause is coming from. I wanted to make sure that that is established.
Sara McCorquodale: It is definitely part of a wider context, and I think a contributing factor to all of this is TikTok. I would say that the rise of TikTok means this process of radicalisation starts much earlier. The user base of TikTok is being exposed to this fast, often very ideologically driven content at a very young age. It starts in a way that is very fun. For example, at the moment there is the “devious licks” challenge where kids are challenged to damage school property and film it. It starts with this very early challenge that is fun, and in a way the trend and the community is the influencer as opposed to a specific individual. We are seeing this soft radicalisation happening on TikTok and it paves the way to something that is a lot more serious.
Q356 Julie Elliott: That is interesting, because a lot of the research focuses on YouTube, so TikTok is doing it that way. Are there any other examples of platforms doing this kind of thing in different ways that we should be aware of?
Sara McCorquodale: All my research at the moment is around what is happening on TikTok and how niche communities are developing and growing very quickly, but so niche that they are under the radar and they are using language that is quite hard to interpret if you are not very familiar with the platform. For example, there is a niche on TikTok called “stripper tok” and a lot of it encourages users to get into stripping and “look how much money you can make”, and it is only coming from one perspective. It is not showing the full truth about that type of work. Along with that comes its own language: sex work on TikTok is called “accountancy”. If you were not aware of the platform, if you didn’t have that language, you would not necessarily be able to interpret what that means. We talk about these platforms as singular platforms, but there are hundreds and thousands and millions of worlds that occur within each of the platforms. I think most people have not got to grips with that yet.
Julie Elliott: Yes, I have never heard of that.
Dr Caplan: There are lots of types of media that people interact with in their lives, and media has a tendency to glamourise different things. There are movies and television glamourising various parts of life, and magazines did as well. We can’t take the attitude that these platforms are coming in and doing something wholly unique that is different from what media has done in the past. I think that we need to be very careful with how we talk about media, because we risk creating a dichotomy in which new media is bad and old media is good, when many of these same themes were in the mix in older media.
Julie Elliott: Believe me, we have had experience in this country of old media.
Becca Lewis: I agree with Robyn. On the one hand there are a lot of new subcultures that can be quite damaging. On the other hand it is important for influencers who are getting quite big to operate across platforms. Influencers, particularly influencers operating in risky political subjects, know that they are at risk of being banned from an individual platform, so they operate across multiple platforms. It is not only that, but those who are most successful attempt to get on to political TV shows also. I am more familiar with the context of the United States but, for example, many of the most successful far-right political YouTubers will end up also being commentators on Fox News and there will be a lot of sharing of content between those spaces as well.
Julie Elliott: Thank you very much.
Q357 Kevin Brennan: Thanks, everybody, for your evidence. Do influencers tend to leverage their followers in harmful ways? Are there features of the culture that put their audience at any risk?
Becca Lewis: Yes, and I think sometimes it happens knowingly and sometimes it happens unknowingly. My co-authors Alice Marwick, William Partin and I have written about what we call blueprints for harassment. These are videos or pieces of content where one influencer will target another influencer or maybe someone who is not even an influencer. They will just target another individual for criticism. They will accuse them of violating some sort of morals. They will frame this person as worthy of scorn and derision and even if they don’t actually tell their audiences to harass this person—even in some cases when they explicitly tell their audiences not to harass this person—the audience will still see this as a blueprint for, “Let’s go and use that same tactic against that person”. Even if each individual audience member who responds to the target makes only one post to them and says only one thing to them or only comments on their video once, that still can end up with the network of threats becoming huge. People can face traumatising harassment after a piece of content has been made from one creator against another. Right now the moderation policies on social media platforms don’t really have a good way of grappling with this network harassment.
Q358 Kevin Brennan: As a follow-up to that, I will come to Robyn if you want to add something and also ask you this. Becca was talking there about moderation. Are influencers and influencers’ content moderated differently than ordinary users by social media platforms?
Dr Caplan: In many cases, yes. The content that I have studied is YouTube. In a paper I co-authored with Tarleton Gillespie we found that YouTube creates tiers in how it moderates its user groups. Those tiers are based on institutional power in relation to the platform. Platforms moderate media organisations differently than they do influencers or creators, and they moderate influencers or creators differently depending on what tier they are at in the YouTube system. We found that the vast majority of them are moderated in the same way, unless you are a media organisation. They are moderated mostly through algorithms that are parsing titles and keywords and transcripts for various offending terms.
We found that the real difference came in how people could contest those flags and that moderation. The longer you have been at YouTube, the more you have gained access to various perks, like going to a YouTube Space, which is an actual physical space where they will let you use things like camera equipment and video editing, or you might have been put in touch with a manager of a network. You can tell how much I see these platforms moving towards being media companies, because I am using terms like “network”. That becomes a point of contact for these people to be able to say, “Hey, my content got flagged. Can you please remove the flag or can you please remonetise my content?” and that means it can happen much faster.
Q359 Kevin Brennan: Are there any social or cultural implications of that tiered moderation structure? I am asking you, Robyn, I think you are nodding but if anybody else wants to come in after Robyn, please do. What is your view on that?
Dr Caplan: My view is that generally platforms came in as participatory spaces, and that is what they sold to users and creators in building their audience base and putting a lot of labour into making content for the platform. We see that these platforms are very beholden to advertisers. They have an incentive to basically reinstitute the existing power dynamics of older media forms on to their platforms. We are starting to see platforms moving in the direction of old-school TV networks.
Becca Lewis: I know Robyn has written about this in her research, and I have found it in some of my more recent research as well. It is easy to think of these influencers as extremely powerful nefarious figures but in a lot of ways, no matter how successful they are, they don’t feel powerful because often they don’t have the same infrastructural support that a celebrity in an earlier media system would have had. On top of that, to be an influencer or a content creator trying to make your living online is to be beholden to the content moderation policies of these major platforms. Frequently the platforms will not explain their decision-making processes to the creators unless you are at the absolute top tier.
Q360 Kevin Brennan: I was going to ask you about that, Becca. This system of moderation is not transparent to creators and to users. It is shrouded in some fog. Is that fair to say?
Becca Lewis: Absolutely, and in fact it causes a great deal of anxiety and living in fear for the creators who spend hours and hours and hours on a piece of content. They post it online, expecting that will be a piece of their income going forward, and then it will get yanked or demonetised without explanation. People are left guessing as to why this piece of content can’t help them make their living.
Dr Caplan: The vast majority of influencers are not people with huge followings. The vast majority of influencers have very small followings. I think it is very important to note that. We need to always remember that the people who are targeted by influencers, or by features of the internet that enable things like harassment to happen much more easily, are the same people who are targeted by platforms in the demonetisation of content. It is people who have historically had less power in our societies and have less ability to address these issues and concerns, either with the major institutions like platforms or in the dynamic with existing racist and sexist structures in society.
Sara McCorquodale: You find that with a lot of the influencers who are very successful, it is often because they are having conversations and representing a part of culture or a community or society that is not represented elsewhere. That can be a really positive thing. In a recent example, a YouTuber called Amin Mohammed, who is known online as Chunkz, spoke about his decision to stop making music because he was becoming much more nourished by his faith—he is a Muslim—and he talked about how important Islam is to him. He said that giving up music meant that he could fully commit himself to Islam and he found that to be a positive thing. If you look at the comments under that video, young people are talking about it and their faith in a very positive way and having supportive conversations. When influencers create content like that, part of the reason it has such a big impact is because often we don’t see those conversations happening on mainstream platforms or through mainstream media.
An umbrella thing to say is that if influencers are very good and very successful it is because they are creating content that touches a nerve, positively or negatively, and often they can build very positive communities out of it.
Q361 Kevin Brennan: To nail this down—we don’t need long answers to this, just what you think—do you think that these moderation structures need to be made more transparent? Do you think the Committee ought to suggest that they should be made more transparent to both creators and users? A simple answer will suffice.
Becca Lewis: I think yes and no.
Kevin Brennan: So it is not a simple answer, but go on.
Becca Lewis: Transparency itself can be a push-and-pull game. The reason, to a certain extent, that companies will keep some of these decisions under wraps is that as soon as they make them more transparent, people can game them more easily. The minute they say, “Here is the reason why your video got pulled, we are banning X, Y and Z words”, people start developing ways to get around that policy. There is always a little bit of a game of cat and mouse. I don’t think transparency about the exact rules will necessarily help, but I think it is very necessary to be much more transparent up front about what the values and the starting point for content moderation are, clarifying, “Here is what we are valuing on this platform and here is what we are not”. Right now they try to stay as vague as possible because they want to appeal to as many people as possible, but to pretend that they do not have any value system in their moderation policy is—every moderation system has certain values and they are not being transparent now about what is guiding their thinking.
Q362 Kevin Brennan: Giving the basic user guide but not the technical manual as to how it is implemented—is that essentially what you are saying, Becca? Maybe that is too vague, so I will put it another way: giving the principles but not the complete details of exactly how these decisions are taken. Is that what you are edging towards?
Becca Lewis: I think that is better, yes.
Kevin Brennan: Robyn and Sara?
Dr Caplan: All of this needs to be made a lot more transparent. The Facebook cross-check program and my work on governance showed that even when the content policies themselves are made quite detailed, that tells us very little about how they are implemented behind the scenes. Having more insight into the processes that platforms are taking to implement this content is really important. Having more insight into how many workers they have dealing with this, where those workers are located and the steps they are taking to preserve the wellbeing of those workers is really important, because these people are often on the front line of a lot of the bad stuff on the internet. Those are all areas where we can see a lot more transparency.
To Becca’s point, yes, platforms need to be a little bit more forthright about their politics. I am concerned that they don’t have them as much as we think they do, and that they tend to go in the direction of the wind of power. We saw that with Donald Trump and the move that Facebook made as they started to see that he was no longer going to be in office. We assume that they have a certain set of politics, but they might not.
Sara McCorquodale: I think it is important to realise that these platforms exist in a very fast-moving culture and often influencers will create content in reaction to something that has happened in the news or politically. Maybe the platform decides that is unacceptable, but it is such a reactionary space that it is very hard to keep up with that in a platform’s regulations. It is very hard to say, “These are all the things that we find unacceptable”. You would never be able to have an exhaustive list, is what I am saying.
Becca Lewis: Could I add one last thing to that? I already spoke, but I think that there are other ways to be transparent beyond purely about the decision-making process. For example, there can be more transparency around whether it was an algorithm or a person that made the decision, and that can be very helpful for people. For creators it is not only about the transparency of why something got demonetised but also that when they feel something has been unjustly demonetised or taken down, they feel there is no recourse for action. Unless you are at the absolute top tier of creators, usually there is no person who you can go to and say, “I think you made a mistake”.
Kevin Brennan: Understood, and it is frustrating in every aspect of life when that happens, isn’t it? I can see behind you, Robyn, that the sun is coming up in Texas.
Becca Lewis: It is Brooklyn, but yes.
Kevin Brennan: Okay. Have a happy Thanksgiving.
Chair: I think that concludes our session. Thank you all for giving evidence today. It is greatly appreciated that you have taken the trouble and got up early, and everything else. It has been really interesting. Thank you. Dr Robyn Caplan, Becca Lewis and Sara McCorquodale, thank you and happy Thanksgiving. Take care. That concludes our session.