
Digital, Culture, Media and Sport Committee 

Oral evidence: Influencer culture, HC 258

Tuesday 18 January 2022

Ordered by the House of Commons to be published on 18 January 2022.

Members present: Julian Knight (Chair); Steve Brine; Clive Efford; Julie Elliott; Damian Green; Simon Jupp; John Nicolson; Jane Stevenson.

Questions 363 - 403

Witnesses

I: Iain Bundred, Head of Public Policy UK&I, YouTube; Ronan Costello, Senior Public Policy Manager EMEA, Twitter; Tom Gault, UK and Northern Europe Public Policy Lead, Instagram; and Elizabeth Kanter, Director of Government Relations and Public Policy, TikTok.


Examination of witnesses

Witnesses: Iain Bundred, Ronan Costello, Tom Gault and Elizabeth Kanter.

Q363       Chair: This is the Digital, Culture, Media and Sport Committee and this is a hearing into social media influencers. On our first panel today, regarding social media influencers, we welcome Iain Bundred, head of policy at UK and Ireland YouTube; Ronan Costello, senior public policy manager EMEA at Twitter; Tom Gault, UK and Northern European public policy lead at Instagram; and Elizabeth Kanter, director of government relations and public policy at TikTok. Iain, Ronan, Tom and Elizabeth, good morning and thank you for joining us.

Before we begin, I am just going to see if there is anyone who wishes to make any declarations around the table. I just declare that I am the Chair of the APPG for new and advanced technologies.

Just as a point of order, after the first session we will be taking a short break because we will then be switching to our Sub-Committee, where we will be continuing our examination into online safety.

I will put this question to Iain but it is for everyone. What sort of research, Iain, do you conduct on behalf of YouTube into the numbers, the diversity and the revenue streams of the influencers that you host on your platform?

Iain Bundred: Good morning and thanks for having me. First and foremost, I think I should stress that our approach to creators on YouTube—we prefer the term creators—is very much about a partnership approach. We see anyone who posts a video to YouTube as being a creator, so in that respect everyone on our platform who is creating videos would qualify.

To deal with your specific question around research, we commissioned Oxford Economics to look at the economic impact of the creator ecosystem, and I think we put that in our written evidence. They found that in 2019, YouTube contributed around £1.4 billion to the UK economy through its advertising revenue sharing model. What I mean by that is that when advertising is shown on a YouTube video, as long as the creator is part of our YouTube Partner Programme, they get a share of that revenue.

Regarding specifics on the profile of various different creators, we do conduct research in different geographies and I am happy to detail the different profiles we have on the platform. At the core of it, everyone who is creating videos is a creator and we will look to support those, whether it is traditional media—the likes of the BBC or Little Dot Studios—or less established performers, whether that be a Joe Wicks or someone like Adam B, who has recently gone from creating videos in his bedroom in Belfast to being a Blue Peter presenter.

Q364       Chair: To drill down a bit there, you set up the answer to the question but you did not actually answer the question. What types of research do you do into, as I said, the numbers, the diversity and the revenue streams coming to your creatives?

Iain Bundred: We have a self-ID approach in the US, where we ask creators to self-identify, and we have explored asking other markets to do that as well. We do not have a set of data on that right now in the UK but, for example, we do look at the number of people using the platform, the number of creators in the UK who are making more than five figures—that is up 35% year on year—and the number of people with large subscriber bases. In the UK, for example, 7,500 channels have more than 100,000 subscribers. That is also a 20% increase year on year. Over 1,000 UK channels have over 1 million subscribers.

Q365       Chair: Effectively, what you are looking at at the moment is purely the revenue angle when it comes to the UK. You do wider metrics when it comes to the US but in the UK it is primarily revenue. You just mentioned there the amount of money that they make, and also the number of followers and people who interact with them. Is that a fair summation of what you just said?

Iain Bundred: Yes. Also, in the Oxford Economics report, which is available on our “How YouTube works” website, we do a bunch of surveys of creators, asking them how they feel about the products, how they work, and we continue to poll our creators on an ongoing basis. At the moment, we do not ask them for specific—

Q366       Chair: It is more by the numbers at this stage rather than social accounting about the phenomenon—the way in which they effectively interact with wider society. It is more about just the basic numbers, but also a survey-type approach.

Iain Bundred: Yes, you could characterise it that way. What we are looking to do is understand how creators are growing our platform and to share that data with the creators so they can understand what works. A huge part of what we do is sharing massive amounts of data with the creators themselves through our creator studio, so that they can understand where their audience and their revenues come from. They can see specific information through their online dashboard: for example, what countries their viewers come from and various other pieces around the demographics in terms of age, but we do not go into breakdowns of ethnic minorities and so on.

Q367       Chair: Ronan, I will ask you the same question effectively in terms of the research you conduct into the numbers, the diversity and the revenue streams of the creators or influencers that are on your platform.

Ronan Costello: This is perhaps even trickier for Twitter because influencer, I think, is an even more nebulous term on a platform like Twitter, where a lot of people have come to the platform with already high profiles. You might have an instance where we potentially have fewer home-grown influencers on the platform than other platforms represented here today might have.

People often come to Twitter when they are known publicly already, be it from politics, sport, news and so on. Again, I think the term influencer is trickier in that regard. However, in one respect, we are trying to catch up in the monetisation opportunities that we provide for influencers on our platform—our creators on our platform. For example, in the last year or so, we have given content creators on our platform the option to add a tip jar to their account, such that their followers can remunerate them directly in appreciation for their content.

We have also talked about, have launched and are testing features like super followers where you can become a de facto subscriber to a particular account and, as a subscriber of that account, you as a follower get more access to exclusive content, but then the content creator themselves gets an additional revenue stream directly from their own followers.

In that regard, as I said, we are trying to enhance the benefit for the creators on Twitter and to put them in a better position on the platform.

Q368       Chair: The super influencers, is that your version of OnlyFans?

Ronan Costello: No. It is our version of subscriber content as we see increasingly across all different kinds of media, be it newspapers or so on. In effect, it is a paywall to a particular kind of club.

Q369       Chair: It sounds exactly the same as OnlyFans in that respect, so it is your version of it, so to speak. Tom Gault, what about the research that you do into the numbers, the diversity and the revenue streams of the influencers/creatives on your platform?

Tom Gault: Some of this is similar to what Iain was discussing. We absolutely work with and partner with creators. We provide them with lots of tools and services so that they can monetise their own content, but also enable them to build relationships with brands directly. Obviously, in circumstances like that, we may not be aware of the commercial relationship that exists because it is something that happens between them and a brand off the platform.

When it comes to numbers around scale, diversity and impact, the research that we have shows that Instagram is of huge benefit for SMEs in particular. Research from a few years ago shows that around one in three SMEs in the UK works with and uses Instagram. A similar number use it to create more jobs as well. That is because people like creators, when they partner with them, give them access to different audiences and enable them to reach customers.

When it comes to diversity, there is an element in which—as Iain mentioned—everybody can use social media, so there will be an element of reflecting society more broadly. Having said that, what we tend to see on Instagram is that it is a place where cultural change happens and where more diverse voices can be elevated. We sometimes partner in that. We have a $25 million fund that supports black creators. We have a diverse voices programme here and in EMEA as well. There is an element in which it will be broadly reflective of society and in which the creator economy is very intrinsically linked to the creative industries as well. That does create some problems around definition.

Q370       Chair: Elizabeth Kanter, the same question: what is your research?

Elizabeth Kanter: Thank you for having us today. We have a similar approach to the other platforms but the different thing with TikTok is that it is very early days in our monetisation journey. We have been a platform for just three years. In 2020 we launched our creative marketplace. We think it is important to listen to our creators when we design these programmes. If we look at our monetisation journey, and the way that we approached our Creator Next programme, we designed that programme to allow our creators and our platform to monetise in a way that best suits them, so we have a Creator Fund. We also have a marketplace where, as Tom was saying, brands and creators can connect.

Importantly, when it comes to areas like diversity of the platform, we need to listen to our creators regarding what they expect from us as a platform and the tools they need to flourish on the platform. I cannot point to any specific research that we have done thus far but what we are doing is taking a creator-led philosophy to our various programmes.

Q371       Chair: You are not sponsoring any form of research into the phenomenon of influencers or creators on your platform at the moment?

Elizabeth Kanter: I cannot point to a specific piece of research at the moment.

Q372       Chair: Iain, just to return to you. Our feeling is that you are probably the most developed along this road when it comes to creators and influencers. You mentioned the fact that you do slightly more research in the US than you do in the UK at the moment. It is understandable to a degree because it is a more mature market. Do you do anything in the US or in the UK, looking at research into those who use their children as part of their influencing reach?

Iain Bundred: Specifically, the survey I was referring to was in 2021. We launched a creator and artists survey, which included self-ID. We have explored potentially doing that in the UK and other markets while considering the privacy elements. I am not aware that that survey included the point around using children or other family. It is important to stress on this issue—and I think that your Committee has looked at some of these questions quite thoughtfully—that in order to be part of the YouTube Partner Programme, which is those creators who monetise on our platform, you have to be over 18. That does not preclude you from using family members or other under-18s in the videos, but it means that monetisation does go to adults only. That is partly to do with our policy and processes around having a higher bar for what we do with monetisation. I do not have any specific data on that.

Q373       Chair: With respect, it sounds like you do absolutely next to nothing about it, because you talk about the age—I mean there are obviously loads of issues over age verification, age assurance and so on, but let’s just park that for a moment. We are talking here—and you would have followed what we have been talking about in the Committee over several months, I am sure—about the way in which people effectively use their children as a means by which to influence and to gain financial reward. I am quite surprised that you, as such a huge institution, a massive company—one of the biggest in the world—are not effectively doing some substantial research in this area and finding out exactly where you could improve not just the offering, but the way in which you can protect children and their experience on your platform. What would you say to that?

Iain Bundred: I apologise if that is the impression I have given. That is certainly not the case. Our child safety policies do not allow content that endangers the emotional or physical wellbeing of children. There are very specific policies in this area. That is nothing to do with monetisation; it applies to any content that is uploaded on YouTube. That covers all the areas you would expect: sexually explicit content featuring minors, content that sexually exploits minors, harmful and dangerous acts, misleading family content and cyber-bullying. We have a lot of work and policies in place here. In July to September 2021, we removed almost 2 million videos worldwide for child safety reasons, which is about 40% of the videos we took action on. We are doing a lot in this area.

Q374       Chair: We are not disputing that you have extensive policies in this area. It is whether or not you have the research and the academic grounding to refine those policies going forward. Is this not something that you think—as a huge global player as part of your overarching group, with a turnover greater than the economy of the Netherlands—you should perhaps take a little bit more to heart and start sponsoring proper research into?

Iain Bundred: We are working with academic partners all the time to better understand how to give young users and young people a positive experience online. I am sure it is part of our work. The likes of the Technology Coalition, our work in the UK with the Samaritans and others—we are always looking at the issues that we can address in these areas. I am happy to take that point onwards and I will come back if there are any specifics on research into it.

What I want to underscore is the fact that the core of what we are trying to do with our child safety policies is to protect young people in their experience and exposure online. We have made a lot of changes around creators who are themselves between 13 and 17: they are not able to monetise but can create videos. We have moved to making video uploads private by default, recognising that users may choose to put videos up online but might rethink it later, so we are trying to encourage them to think before they share.

Q375       Clive Efford: I want to ask about the employment relationship between influencers and your platforms. Influencers are paid proportionately and according to the advertising revenue they attract for the platforms. Does this effectively make them employees? Iain.

Iain Bundred: That is not the case. We have an advertising revenue sharing model. What that means is that, once you reach the higher bar of being eligible for our YouTube Partner Programme, you will be able to have adverts up against your channel and benefit from the majority of that revenue share. The advertising is not a basic thing at that point but our creator studio, which I mentioned earlier, gives you all the information about where your advertising is coming from and where your audiences are so you can start to learn and improve on that.

Q376       Clive Efford: If they are being paid a revenue share on the basis of what they attract to the platform, what is your attitude towards them? Are you treating them as employees? For instance, would they get pay for maternity leave, sick pay or anything like that?

Iain Bundred: No, that is not the case. You have heard evidence about that point around the freelance model that a lot of creators work within. Of course the thing I would say is a large number of YouTube creators are small businesses. We have spoken to this Committee before about the Global Cycling Network, and they employ 240 people. How the people in their studio are remunerated is different. What we do is help fuel that model by sharing the majority of the advertising revenue with the creators.

Q377       Clive Efford: Is that not the same argument that Uber, Pimlico Plumbers and others deployed about their workers—that they were self-employed?

Iain Bundred: I think this is different. I don’t know the details about those individual cases, if I am honest, but the point here is around creators having a voice on our platform and an ability to access the 2 billion-odd users worldwide to find their audiences. It is a very different model; they can upload and choose whether to monetise or not. We are giving them the powers and the tools to find those global audiences and connect them with advertisers. The core of our model is very much around connecting creators with users and advertisers.

Clive Efford: What about TikTok? What would you have to say about those issues?

Elizabeth Kanter: We have looked at this issue as we recognise there is quite a lot of tension around this area at the moment. We share the view that creators are not employees of TikTok. Our legal team has looked at the statutes around this and that is our conclusion: they are not employees.

One of the things that is important for the creative community, and the reason why creators thrive, is the flexibility to take advantage of our creator marketplace, where they can connect with a brand or an advertiser and have a commercial arrangement to make money in that fashion.

We have also seen some of our creators be really innovative in the way they use their success on TikTok to have success elsewhere. For example, we have a creator called The Pool Guy. His content is just about creating pools, but he has a whole line of merchandise now that is completely separate to TikTok; he has created his own business on the back of his success from TikTok. In short, we do not believe they are employees but we are continually watching the space to make sure that we provide an environment where the creators can thrive and succeed on our platform.

Clive Efford: Tom, do you have anything to add?

Tom Gault: I would just say that at the heart of Instagram, it is free to use from the creators’ perspective but also from the perspective of those who use the service. What was often dominant, and is still the case to an extent, is that our creators would make money through advertising and branded partnerships. What we have been thinking about is: are there additional ways that we can add revenue streams that they may be able to be in control of themselves? As well as things like advertising and branded content, there are things like badges, which are ways of getting direct payment, and affiliate as well—so commission for selling products too.

Throughout all of those, a lot of it is about the relationship between a creator and brand but also between the creator and the people that they are reaching themselves—the people who are on social media.

Q378       Clive Efford: Does a system like that encourage people to constantly try to maintain a profile? The pressure is on them to work perhaps excessively and they can seldom take time away for themselves because they constantly have to maintain their presence. Isn’t that method of payment encouraging that?

Tom Gault: We do care deeply about the wellbeing of creators and their long-term experience. Ultimately, we want them to enjoy Instagram and to be in it for the long term. For those reasons, we do a lot of work around not just guides, but events. We have a partnerships team who give advice to creators and share peer-to-peer learning. We have an @Creators account, where a lot of that kind of content is shared. The reason for that, as you describe, is that people can feel pressured to maintain their audience. We do not want that to be the case. We want them to be in it for the long term and for them to monetise where they feel that is beneficial to them—where it may even create an entire business out of the work that they are doing. But if they feel that is not the right thing for them, we want them to find ways of reaching their audience in a way that does not add pressure to them.

Q379       Clive Efford: Ronan, anything to add?

Ronan Costello: I would echo much of what Tom said there. Similarly, we do have a partnerships team, who work directly with the creators on Twitter and share information and advice with them on welfare, safety, wellbeing, and how to escalate situations to us if they need review. Increasingly—and this is more anecdotal than anything else—I think there is a culture developing around all these platforms where influencers and creators feel that their communities are not so brittle anymore, and that they can take a break. You will often see people signing off for a weekend, a holiday or something like that, and saying, “I will be back with you, my followers, in several days or a couple of weeks’ time”. I think that culture has become more normalised now, such that creators do not feel the need to be curating their presence online 24/7.

Q380       Clive Efford: Do any of you work with influencer unions at all? Do you recognise them or work with them? I take it from the silence that the answer is no. Would you be open to working with them?

Iain Bundred: From my perspective, I was very interested in the Creator Union presentation that was done in one of the early hearings you had. I want to follow up with them because I think it is interesting to understand how they are supporting creators.

A point other platforms have made is that it is really important that we understand the pressures that creators face. We do a lot of work around some of the surveys I talked about earlier with the Chair, and the more we can do to understand the challenges and work with them collectively, the better.

Tom Gault: We talk to representatives of creators and creators themselves every single day, for similar reasons that Iain mentioned. One other thing that we have been thinking about is: what can we do in-app? We launched something towards the end of last year called Take a Break. That is something you can set: prompts based on session length that encourage you to take a break.

We did that jointly with a number of creators in the US because they recognised themselves that that kind of prompt was effective for them. We found that, of those who turned this on, around 90% are keeping it on because it helps them take stock of what they are doing on social media—thinking about whether or not that is meaningful for them in the long term as well.

We do think things we do in-app are important, as well as guides, advice and working with creator representatives.

Q381       Clive Efford: We have seen evidence where activity on your platforms is the primary source of income for influencers but they are not being paid the living wage. Is there anything you have done to address that issue?

Elizabeth Kanter: I can comment on that. In terms of the evidence the Creator Union gave to the Committee, it cited our TikTok Creator Fund favourably, as an innovative way for creators to make money through their content. That is something that we are trying to do—supporting creators by offering them different revenue streams. I do not have evidence about primary sources of income but I want to note that we welcome the comments by the Creator Union that things like the Creator Fund are useful and helpful for creators.

Q382       Clive Efford: I am tempted to ask you if you can provide details of your revenue sharing models accompanying your performance metrics. Could you provide that to the Committee, because it will take some time to go through four sets of that information here? Is that a yes—that you can?

Just moving on. A recent report from the PR firm MSL found a racial pay gap for US influencers of 35%. Is that consistent with your understanding of the influencer marketplace and, if so, what are you doing to address the issue?

Tom Gault: Thank you for raising this. It is an issue where lots of people are driving change on Instagram. You may have heard of the @influencerpaygap account, which was set up by creators to share anonymously the terms that they have agreed with brands off the platform, to try to ensure equality and learning between different individuals around what kind of pay people may get in different circumstances.

This is a tricky one because it does involve relationships between brands and relationships between creators. As I said earlier, overall Instagram is a place where diverse voices are used in branding and as part of partnerships that have come before.

Look at people in the UK, like Charlie Craggs and her paid partnership with Pantene. That is an amazing example of a trans activist setting up on Instagram and on social media, and being able to monetise their work because of who they are and because they have been able to share their story. Those kinds of things are obviously very inspiring examples of what social media is very good for in comparison to perhaps traditional media.

Having said that, this is not an issue that is in any way solved. It is why we have a dedicated fund for black creators, which I mentioned earlier. It is an issue that we expect others to keep looking at and where I am sure the Government will have data around the creative economy more broadly, and how that plays out on social media.

Q383       Clive Efford: Iain, isn’t this something that could just be resolved by having pay transparency so that we could see what people are being paid?

Iain Bundred: Every creator on YouTube has full transparency about their own pay. How they want to share it is a matter for them. As I mentioned to the Chair earlier, we do not currently have data about the individual creators’ ethnicity or religion, for example, but there is absolutely nothing in our remuneration policies that factors in ethnicity or religion.

At the same time, I think we have all recognised, both locally and globally, the challenges that have been raised on this point and whether advertising revenues are paid fairly and equitably. That is why we launched our own Black Voices fund, which is trying to help black creators and musicians to have access to resources to thrive on the platform. It is also why we recognise that we should work to better understand it. That is why we launched that survey in the US and are exploring the ability to launch a similar survey here in the UK, taking account of privacy concerns.

Clive Efford: Elizabeth, do you have any comments on this issue?

Elizabeth Kanter: There is nothing in our community guidelines or moderation standards that would discriminate against any group. That is just a statement I want to make.

Similarly, we have a fund for black creators and a Black Trailblazers programme that allocated funding to identify 30 creators on the platform whom we could showcase. It is something we did during Black History Month and it is something we do all throughout the year.

Importantly, we also have an employee resource group for black employees, who can help inform our approach to diversity on the platform. We not only have programmes to support black creators on the platform, but we also listen to the voices of our own employees. We engage with the black community on a regular basis to avoid this kind of imbalance in diversity.

For TikTok, diversity is at the core of what makes the platform a success. Programmes like Black Trailblazers are absolutely critical to us and are something that, as I say, we have allocated funding to and will be committed to both during Black History Month and throughout the year.

Ronan Costello: I would echo much of what my colleagues have said. There is absolutely nothing in Twitter’s community guidelines or in our advertising policies that would impact the remuneration of creators on the platform based on factors such as race, gender and so on. To the extent that influencers work with Twitter and that Twitter connects them with brands, those fees are calculated based on the metrics you would expect, such as a person’s follower count, the engagement rate that they typically get for their content, whether they have curated their content and managed their account in a safe manner, and so on.

Much the same as what Liz mentioned there, we also have an employee resource group for black employees at Twitter and other ethnic minorities within Twitter. Twitter Blackbirds represents the views of that community to the company and advises the company on how it can better represent that community, both within the firm and also on the platform.

Q384       Julie Elliott: Hello, everyone. I would like you all to comment on this question, please, but I will come to Tom first. We have talked a lot here about black groups, but influencer talent agencies have said that they struggle to find talent from all underrepresented groups. They have said it is because they are not profiled enough by social media companies.

I am interested to know how platform recommendation algorithms affect diversity and representation in the influencer ecosystem. Also, what are you currently doing to address influencer concerns about racially and gender-biased recommendation algorithms and content moderation?

Tom Gault: Thank you for that question. It is not only an incredibly important one when it comes to how we operate but, you are absolutely right, this can get raised by our community, and sometimes there are concerns about shadow banning and other things. It is very important that people know that we are not taking action against particular groups to reduce their reach. We have our set of community guidelines and, if people violate those policies, that content itself will be taken down and it may have an impact on their account as well.

In addition, some people are not aware that we have separate recommendation guidelines and those are even stricter. In those policies, we have lots of content around what we try not to recommend.

Directly on your point around transparency and how we build products that have a fair outcome, first, we built an account centre towards the end of last year. That gives you information on where you may have violated our policies and, therefore, where you may see your reach fall. We try to be clear with people about where that is the case, and that is something that we continue to work on.

Secondly, we are publishing more and more guides and blogs about how the algorithms work in different spaces within the app. We had a number out last year around explore and recommendations.

The final thing I would say is to your point around evaluating this and ensuring fair outcomes. We built something that we have talked about called Fairness Flow, which has been led by our responsible AI team. It is a mechanism that our equity team and others use to try to ensure and evaluate whether there are equal outcomes from the things that we are building. Lots of people have talked before about the fact that the data that you put in is very important, but you need to evaluate the outcome of the things that you are doing. That is what that kind of work is focused on.

In addition, when we build products themselves, we have things like an inclusive product council, where people come together to give their views, including their own lived experience, when we are building products at the front end. There are quite a few steps that we take around the overall rules and informing people, but also broader efforts around transparency and ensuring that the products themselves have fair outcomes.

Q385       Julie Elliott: Do you recognise there is still a problem in this area, Tom?

Tom Gault: I recognise absolutely that people have concerns in this area. I hear it directly from people sometimes, as do our partnerships team. That is exactly why we worked on account status, why we have been publishing these blogs throughout last year and why there will be more to come. The algorithm and the app are extremely important to people, especially those who are campaigning on cultural change. I talked about trans rights earlier; there are people in the UK who have campaigned, throughout covid in particular, on issues that they feel affect them. They would want to know that their distribution is not being harmed for negative reasons. We want to be clear about how the systems work so that they can stay within the rules but also continue to reach audiences.

Iain Bundred: I think the first thing is to directly address the question around the algorithms. We work very hard to make sure that our systems are not designed to be biased against content belonging to individuals or groups based on either their viewpoints or their diverse backgrounds. A more general point, however, is that we are working hard and are quite proud of how YouTube in the UK is a gateway for diverse voices, not just, as you say, from black communities and others, but also voices around the UK—regional voices.

We launched something called YouTube for Creators in November, which is all about trying to find people from different parts of the UK. We feel that we have great examples of people being able to find their voice outside of London, partly because our model removes the gatekeeper-commissioner role that content has traditionally had to pass through at TV companies. It is really important that we get this right. We have to work and continue to improve the experience of creators from those backgrounds, because they will not necessarily plough on in the way that more entitled groups might.

Elizabeth Kanter: I will first address the algorithm question. As Iain was saying, we are also looking at our algorithm. We try to be transparent about how our algorithm works. We published quite a few posts in our newsroom, most recently in December. One of the things we talked about in that newsroom post is exactly the issue you are talking about, which is: how can we look at our algorithm—and we have teams looking at this—to avoid repetitive patterns and, conversely, avoid sending a very limited type of content to our user community? That is something we are aware of as a risk in the algorithm.

As I have said before, the diversity of the content that our users receive is the lifeblood of TikTok, so a diverse feed is critical to us and we are committed to looking at the algorithm and how we can avoid repetitive patterns and sending limited content to our users. This point is something that our creator teams will talk to agencies about in terms of identifying diverse communities of our creators. As I mentioned before, we have a Black Trailblazers programme. We also have a Pride Trailblazers programme.

However, like any of the other platforms, we understand that diversity of voices is important, and talking to agencies and showcasing diverse creators is critical. I think that is something that we are aware of, and we are always trying to lean into diversity and putting funding behind the particular programmes or showcasing particular groups. That is something we do all throughout the year.

As I mentioned just now, the Pride Trailblazers programme was really about giving our LGBTQ community an opportunity to express themselves, creating hashtags for their content. Their content does very well on our platform. People come to TikTok for this authentic expression and if we find a diverse voice, it will flourish on the platform, but we can always do more in the area. We would be very happy to continue the conversation on that subject.

Ronan Costello: I will address the question on algorithmic bias and how we address that. Last year Twitter established a specific team to look at this issue. It is called the machine learning ethics, transparency and accountability team—the META team. This team is a cross-functional group of engineers, researchers and data scientists, and they collaborate across the company to assess future or current unintentional harms in the algorithms we use to serve content to users on the platform.

They have a couple of different goals. They try to ensure transparency and accountability in decision making, and they try to ensure that algorithm functions are designed with consumer choice in mind. When you are on the Twitter app right now you see what we call a sparkle button in the top right-hand corner. If you hit that, you have the instantaneous option to turn off the timeline algorithm and just see content in reverse chronological order without any input from us whatsoever. We want that option—that consumer choice—to be within the app, on the timeline and prominently placed at all times.

Q386       Julie Elliott: Do you think people know that they can do that?

Ronan Costello: The button is always there at the top of one’s timeline in the app. As I say, you can turn that off when you wish. As you might anticipate, it is particularly useful, say, for example, during a breaking news event where you are not interested in seeing algorithmically surfaced content, and you just want to see the latest updates from the accounts that you follow.

Then that team also looks at equity and fairness of outcomes with respect to how our algorithms surface content to our users. They have been publishing blogs on this. They have been highlighting shortcomings and so on, and addressing those, and they will continue to do that. There is a dedicated team looking at this issue on Twitter.

Q387       John Nicolson: Good morning, everybody. There is a huge gulf between what the social media companies say is users’ experience and what users think is their experience—in particular, on the question of abuse. We hear it time and time and time again. We have heard numerous representatives of social media companies trying to assure us that they have strict measures in place. Let’s just give one particular example, if we can.

I am going to quote something here and I am going to direct it at Mr Gault. “Get her peroxide fake veneers gone. She’s a fake ‘beep’ and she needs to get gone. Dirty ‘beep’ ‘beep’ herself off, get some respect, you dirty ‘beep’. Look at the state of you. Absolutely waste of organs, stopping plugging you silly, fake ‘beep’. You look plastic, fake as ‘beep’, doll-like and plastic.”

Note this was not one of the Secretary of State’s tweets. This is what Amy Hart said to us when she appeared before the Committee. She is a well-known social media influencer. She said there is just no point in reporting abuse. This was abuse directed at Ms Hart, and she read it out at this Committee. Mr Gault, your company, Instagram, said that what I have read out did not breach your community guidelines. Can you explain to us why it did not?

Tom Gault: Thank you for raising this. The content that you have just described sounds harrowing. I would need to see it in detail and see what happened in terms of what we sent back.

Q388       John Nicolson: I have read it out word for word. I have no reason to disbelieve our witness. She read this out to us at Committee, so it is on the record, and she said she reported this and was told that it did not breach your community guidelines.

I could have presented tweets and posts to all of you because this is the experience people have. You say you have tough community guidelines, but look at people’s experiences when they report things—and I can attest to this because I have reported countless tweets, in particular with the most grotesque content, and I keep hearing back from Twitter saying, “This does not breach our community guidelines”. I will then send Twitter their community guidelines. I will copy and paste them. I will say, “This is clearly homophobic content. This breaches your community guidelines”, and they will write back robotically saying, “This does not breach our community guidelines”. This happens all the time on Twitter.

The reason I am talking to you, Mr Gault, however, is because I raised this so many times with Twitter and I think it is pretty much pointless now. This is the experience of a witness as relayed to this Committee. Once again, why did that very cruel, abusive—some of the words I cannot even read out—comment not breach your guidelines?

Tom Gault: I think there are two things that are very important here. First, we listen to our community and the feedback that they give us. When we hear examples like that and we hear that we may have said it is not violating when it may in fact be, we do look at these things in detail. For example, last year we tightened our policies around preventing more kinds of attacks against public figures. In particular, that was focused on degrading sexualised commentary—things that can affect female public figures. That was in part due to the feedback that we received.

In addition to that, though, we need to think about how to stop this happening in the first place. Part of that is around tools. I know the Committee heard from a number of witnesses, some of whom said that the tools we released towards the end of last year are very effective. The reason that people like them is because they stop this thing happening in the first place, even though there may sadly be people in society who want to send horrendous content.

This includes things like the ability to filter things out of direct messaging and comments, and the ability to limit the kind of people who can contact you. That was built very specifically with public figure feedback in mind because sometimes the abuse that people get is from accounts that do not follow them or only recently followed them. People said they did not want to hear from those accounts at certain times.

Overall, as I say, that is all done based on feedback from the community. I am very happy to look at the example that you raised but we use this information to try to see if our policies are working, see if the enforcement of those policies is working, but then also think of how we can stop this happening in the first place, including direct conversations and support for the Love Island team.

Q389       John Nicolson: When you heard Amy Hart read this stuff out, nobody from your team—as far as I know—contacted her. I thought she was a very impressive witness because she just detailed her journey. It is probably not the worst abuse that anybody ever gets on social media, but she certainly found it distressing. Some of the things that were said to her were distressing. Did anybody contact her to pick up on her witness statement?

Tom Gault: I understand that with the Love Island team, we have weekly meetings as well as one-to-one sessions with participants in “Love Island”. In part that is to ensure that they not only know how to report things but can use the tools that may be in place. We do have a dedicated partnership team who work with some individuals.

The other thing is that we are testing ways of scaling this. In the US, we have tested live chat on the Facebook side to try to understand how we can deal with specific cases, as well as instances where people can appeal things in the app where they feel we have made the wrong decision.

Q390       John Nicolson: We hear repeatedly from witnesses on this. Helen Wills told the Committee that she and other influencers had repeatedly failed to get sites they call “porn for eight-year-olds” taken down. They are told that they do not violate community guidelines.

Professor Sonia Livingstone mentioned reporting mechanisms and said that they were a particular problem for children. She said, “It is not effective. I report it and nothing happens.” It is a recurring theme that you guys do not respond. Of course, that is why legislation is now going to have to constrain and restrict you; your ability to look after users or the community, as you call them—I think slightly cynically—no longer works. That is why this is likely to get all-party support, I suspect. That is all for the moment. Thank you, Chair.

Q391       Damian Green: Good morning, all. I will move on to a slightly softer area of support that seems to be lacking, which is the general need for background support—not financially and not even in terms of a complaint, but just help and advice. We have heard a lot of the creators say that they do not have access to support from the platform. Some of them have said that they do not know anyone on the platform now. They are making money out of you, and you are making money out of them, so there is clearly an economic relationship, but nothing beyond that. What do you do to help your creatives? Elizabeth.

Elizabeth Kanter: We have a creative solutions team that was set up for this exact purpose—to look at creators on our platform. We have a lot of creators who are emerging artists or emerging musicians, whatever it may be, and those types of people, who have never experienced notoriety or virality, need support from our team in terms of how to monetise and how they engage with advertisers on that journey.

We have a specific team. We have quite a large cross-functional team of colleagues working to support creators who are creating content for the first time via a platform. We take this very seriously. As a second generation platform that is just beginning the monetisation journey and just beginning to understand how creators can use our platform, it is essential that we do that so that the creators can explore and flourish on the platform in a way that best suits each individual creator.

Q392       Damian Green: Does each individual creator have a relationship manager or have access to that?

Elizabeth Kanter: There is a tiered approach. We have a group of managed creators that have a certain level of followers. For example, to participate in our creative marketplace, a creator would need to have 100,000 followers and be 18 years old. That particular group of creators would have colleagues affiliated with our creative marketplace, which is the branded content aspect of the company. It is a tiered system where, as creators get more and more followers, they will have a particular account manager within the company.

Damian Green: Thank you. Tom, what is Instagram’s version of that, or does it have one?

Tom Gault: Yes. We do have a partnership team who work directly with creators. How we build scale with a lot of this advice and work is a challenge. Some of that is through guides that we promote and send out to people. We had one that I think was called “You have gone viral, now what?”. I think that is some of the softer advice, as well as things around community guidelines—“How do you manage your time?”, “How do things work?” and other elements of day-to-day advice, not just around more severe issues, although that is in there too.

We are thinking about ways of doing this at scale. We are testing a live chat function on the Facebook side in the US and we are always thinking about what we can do in the app too. It is very important that a lot of these tools and features are self-serve. That is why SMEs have come to rely on social media, especially when they are new to tech. They need to be easy to use. You need to be able to get on there quickly. That is true for creators as well. For those reasons, we try to focus on that at the front end, but where things do go wrong, we are adding more and more abilities to give advice from our side and to get support when you need it.

Iain Bundred: Obviously, we have a slightly different business model where it is all around sharing that revenue and working closely with those creators who are growing on the platform. At the core of it, it is important that we do give creators full information and updates through that dashboard I referenced earlier—the creator studio. The scale thing is approached that way, but we do a lot more. We have a creator academy, which is all about giving advice and setting out online digital tools through our “How YouTube works” website.

We have tiered partner managers. We call them SPMs—strategic partner managers—who are there to give support to those creators who are growing fast or have very large audiences and therefore will face a higher level of issues.

There are other bits we do: we have some peer-to-peer support services and we try to encourage creators to learn from what works. We do work hard at that because, ultimately, creators are our lifeblood and it is important that they have a positive experience. Think of us as being the stage managers, trying to help them to have a positive play on that global stage.

Ronan Costello: We have a similar content partnership scheme, but one thing I would add that perhaps is distinct or specific to Twitter is that we do have a one-stop shop website for people seeking to improve the content they create for the platform. It is called media.twitter.com. At the start of that website, we basically ask you, “What is your field? Are you in entertainment, sports or news, or are you a creator in the photographer, blogger, lifestyle space?” Based on that, you can click into one of those categories and the website is laid out in such a way that you read articles as if you are taking short-form courses. Those articles can be on best practice, how to stay safe on Twitter, how to report and escalate things to us and so on. Media.twitter.com is our hub for a lot of this information.

Q393       Jane Stevenson: Thank you to all our panellists. I would like to move on to advertising, disclosure and transparency of advertising. In 2020, the Advertising Standards Authority did a three-week disclosure monitoring exercise. They looked at 24,000 Instagram stories. One in four of those stories was marketing, but only just over a third of them—35%—were labelled as being ads. Could I ask what advertisement disclosure tools you have on your platform and how are you ensuring that your users are making full use of them? Tom, you had your hand up.

Tom Gault: Yes. I thought I should go first given the reference you made. Overall, when it comes to branded content—where there has been an exchange of value between a brand and an individual, and they are promoting something in some capacity—we do require people to use our paid partnership tool. There are a number of challenges in that area.

First, the actual commercial relationship exists between the creator and the brand. We do not have any part in that. We are not taking any money from it, so we cannot always know what has happened. An added complexity to that, for example, is that lots of creators do post about brands in a way that can look like they have been paid but they may not have been. They are trying to get the attention of the brand. That can add an element of complexity to it.

We address this in several different ways. First, when you create content and it looks like it might be a paid partnership, we will give you a pop-up, and you have to dismiss it and say it is not a paid partnership before you post. That was rolled out last year and is beginning to have an effect on disclosure.

The second thing I would mention is around our work with the ASA, through our consumer policy channel, which is a mechanism by which the ASA can report directly to us where it has concerns about individual pieces of content, and we can then take action on it.

The final piece is that it can raise awareness of the fact that that individual has not been disclosing things. It has a name-and-shame list. That can have a real impact on the brand and the reputation of that individual—

Chair: He has frozen.

Tom Gault: —adverts on to Instagram to make people aware of what has happened.

Jane Stevenson: Thank you. Sorry, you cut out very briefly towards the end of the answer but I think we still understood. Does anyone else want to come in on that? Elizabeth.

Elizabeth Kanter: I would be very happy to address the question. We have a branded content policy as well, as Tom mentioned for Instagram. What we do is when a creator makes an ad on behalf of a brand, we have a branded content toggle. This automatically populates the particular video with #ad, so that the ad is disclosed—that it is indeed an exchange of value between the advertiser and the brand.

One of the things I thought was interesting was Em Sheldon’s evidence to the Committee about her approach to advertising in terms of undisclosed ads. Her observation was that creators are becoming more and more attached to brands when they have an authentic connection; they feel proud to work with that brand. In her view, most of the creators that she is aware of tend to want to disclose their advertising. We hope that that is something that creators will do going forward.

One of the things we do as well is that we partner with the ASA, but in a different way. We partner with them from an educational perspective. Last year the ASA worked with eight or nine of our creators. They created really clever TikTok videos to inform people about the CAP code and how they can comply with this requirement to disclose advertising.

We have also worked with Media Smart to go into schools so that teenagers can recognise advertising on platforms. In addition to our own tools to label videos with ads, we also do that educational element to make creators aware of how they can comply with the CAP code.

Iain Bundred: I want to reiterate the point that, with YouTube’s advertising revenue sharing model, there is not the same need for brand partnerships and so on.

One thing that perhaps has not been mentioned so far is to reiterate that this is a mandatory compliance regime. Advertisers are legally required to declare this, as well as to make sure their adverts have been declared. Platforms are putting in place tools and processes. We absolutely do all the pieces to support compliance, but it is on creators and advertisers to declare. It is important that we give them all the support to do it, but we should not have cases where people secretly hide it. That is just wrong and it is against all policies.

Ronan Costello: I would just add—and Liz touched on this—that it is in the interests of the creators to follow the national laws around this and the platform policies, because if they want to develop these long-term relationships with the brands through which they can make a living, they need to demonstrate this transparency. Otherwise, for reasons of brand safety, a company will not want to work with them.

From that point of view, the influencers and creators know the guidelines in this area and want to follow them to be able to maintain their livelihoods.

Q394       Jane Stevenson: Are the brands taking enough responsibility when they are working with creators? Should they be insisting that a logo, a watermark, or something recognisable and visible, is clearly displayed on content?

Ronan Costello: The brands are perhaps even more conscious of this than the creators. It is the brands spending money on a particular campaign on our platform and they will know that they have a long-term commercial relationship with Twitter, and that Twitter has an ads policy team, and then a team that enforces those ads policies. The last thing a brand would want is for the campaign to be halted or suspended, and its own corporate account to then have strikes against it.

Q395       Jane Stevenson: Thank you. If anyone else wants to come in, I also want to ask if there is more to be done from your end to help people achieve greater compliance. I do not know what the stats are now on the number of posts we would find that do not comply. I am also interested in your thoughts on this: do you think the big brands and the big influencers with lots of followers are not where the problem is? Is it with slightly smaller content creators, who might fall through any visible gaps?

Iain Bundred: Big brands do not want these mistakes to happen so there is absolutely a case for doing it. As we are seeing this ecosystem develop, we are seeing increasingly sophisticated intermediaries, influencer marketing agencies, or big advertising firms with specific influencer teams. Prior to YouTube, I worked at Ogilvy. It had a specific team on this and it worked hard to make sure not just that it was finding the right authentic creator to match with the brand, but that the full disclosure was going through. Mistakes will happen, but I do not think anyone wants them to.

There is the example that I think Tom mentioned. Some people will say “#Adidas” or something, almost to pretend to their audience that they have a brand partnership, or perhaps because they are hoping Adidas will come and give them some free stuff. On the smaller creator scene, you might see examples of that. It is really hard for platforms to be aware of any third-party contracts, but what we can do is make sure that, when we are made aware of it, we act quickly, because it is completely against the law and against all of our rules.

Tom Gault: I have two things to add. Ongoing education will be important here. I would want any creators listening to this or reading the evidence to know that 88% of people want to see authentic relationships between creators and brands—they want to see disclosures—and 72% of young people say they will unfollow a creator who is not honest about what they are putting out there. It is important for the community to know that. We also need to be invested in and part of education efforts, which will be ongoing as we work on this challenge, because some of these relationships are happening elsewhere.

Elizabeth Kanter: I made the point before that education is an important aspect.

Q396       Jane Stevenson: Following on with advertising, there is lots of regulation about the content of advertising. As platforms, how can you do more to help creators comply with advertising regulations, and how do you penalise people who are doing misleading advertising or advertising unsafe products—the whole range of things not complying with advertising regulations?

Tom Gault: A few things are very important here. As you say, we have a separate set of advertising policies that are stricter than our general community guidelines because what you have in an advert can be pushed out to people by other means, so it needs to be stricter.

We have a system of pre-reviewing those. If you are found to be breaching our advertising policies, you may lose the ability to advertise. We think that is important, because that mechanism needs to be there for people seeking not to comply. They will not get that ability back unless they appeal and we find that we made the wrong decision.

The policies themselves have been online for a while, but I think you are right—we do need to do more on educating people around these things, especially going direct to them where they have breached rules.

We are talking a lot about people being bad actors who may be trying to do harmful things. We also need to be clear with SMEs about what the rules are, because very often they do not know. SMEs find social media vital for getting out there and reaching people. We do try to be as transparent as we can directly with companies as well.

Elizabeth Kanter: I want to take that on and build on it. Our branded content policy does not allow brands to advertise certain products—I can give you the whole list, which includes crypto, tobacco, alcohol and gambling. When branded content is tagged #ad, our monetisation integrity team reviews that ad against our policies. Importantly, our team looks at the creative of the ad itself as well as the landing pages that an individual reaches when they click through, so we take a two-step approach. The creative can look great, but if you click through, you could be led to a scammer, and we want to try to avoid that.

The other point is that we were one of the first companies—if not the first company—in the UK to adopt the Financial Conduct Authority’s list of financial services companies that are verified to advertise. We started doing that in September 2020 as another safeguard to ensure that, when we have financial services advertisers on our platform, they are approved as companies verified with the FCA. We think that is an important step.

We know that other companies are going to start doing that soon too. Google already does it, but we look forward to other companies following our lead in that space.
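
To illustrate the kind of safeguard Elizabeth Kanter describes, here is a minimal sketch of a pre-flight check against a verified-advertiser list. It assumes a locally held copy of the FCA register keyed by firm reference number; the data, function names and fields are illustrative, not TikTok's actual systems.

```python
# Illustrative sketch only, not TikTok's actual implementation.
# Assumes a locally held copy of the FCA Financial Services Register,
# keyed by firm reference number (FRN). All entries are hypothetical.

FCA_REGISTER = {
    "123456": {"name": "Example Finance Ltd", "status": "Authorised"},
    "654321": {"name": "Lapsed Firm LLP", "status": "No longer authorised"},
}

def is_verified_financial_advertiser(frn: str) -> bool:
    """Approve a financial services ad only if the advertiser appears
    on the register with an 'Authorised' status."""
    firm = FCA_REGISTER.get(frn)
    return firm is not None and firm["status"] == "Authorised"

# A campaign from an unregistered or lapsed firm is rejected up front.
assert is_verified_financial_advertiser("123456") is True
assert is_verified_financial_advertiser("654321") is False
assert is_verified_financial_advertiser("000000") is False
```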

Iain Bundred: One thing to say is that it is not just about informing creators. We absolutely need to work with creators, our partners and standards authorities to make sure that they all understand the obligations. Some of this is pretty straightforward: on YouTube, when you upload a video, you tick a box to confirm whether the content contains a paid promotion.

The other point is informing users about what ads are. For users who are under 18, we have started explaining where paid promotions might be appearing. We try to inform them. The ASA has a good body called Media Smart, which is trying to educate young people about how advertising appears and what it all means. They are trying to ensure that the whole ecosystem—wider society—understands this evolving part of the advertising business.

Ronan Costello: Similar to the other companies, sitting alongside the Twitter rules that govern general behaviour on the platform, there is a whole other pillar of ads policies governing which products and services you can and cannot advertise.

I will go back to the previous point that the incentive to comply with these policies is very strong. It is stronger for creators than it is for people who do not engage in commercial activity on Twitter: having an account suspended is damaging and impactful enough for an individual who is not an influencer and has spent a decade on the platform, but for an influencer or creator it is a matter of livelihood if their account is at risk of suspension because they have not complied with the ads policies. There is a team there that enforces this, monitors it, reviews the commercial content and so on.

Jane Stevenson: Thank you. Your answers have been very interesting. It strikes me that there are big challenges about how we maintain a similar approach on all platforms and how we make sure that advertisers are behaving properly.

Q397       Simon Jupp: Good morning to the panel. I want to focus on the impact of social media on children and the platforms that you are all part of in some way, shape or form, and advertising in particular. How do you make sure that children are not advertised inappropriate products—for instance, the type of products that are subject to age verification systems?

Ronan Costello: First, as you might expect, Twitter does not have a significant youth audience in the UK or in any market where we operate. It is very small. There is a paucity of activity directed towards a youth audience on the platform. However, the ads policies that we have—referenced in my previous answer—do govern what products and services can be advertised to all our users. If brands and creators fall foul of those, their accounts will receive enforcement actions, such that it might be one, two and then three strikes, and then they might be forbidden from—

Q398       Simon Jupp: Ronan, sorry to interrupt. That is the process. I understand that. However, if you scroll through Twitter it comes up with “promoted”; that is an advert of some form that appears on your timeline. How do you ensure that someone signed up to your platform who is underage—because that does happen—is not then targeted by inappropriate advertising for products and services that could be subject to age verification?

Ronan Costello: Are you referencing adult content specifically?

Simon Jupp: Not necessarily. It could be all sorts of things: health products. It could be any sort of product that could be subject to age verification.

Ronan Costello: As I said, you must be at least 13 years of age to use the platform, but, as I referenced earlier, we do not have a significant youth audience in the UK on the platform. When I say youth, I mean under-18s.

Q399       Simon Jupp: How do you check that someone is not on your platform in breach of your rules? I hate to challenge you, but I do not think that assertion that you do not have many users who are under 18 is true.

Ronan Costello: We can conduct user research as to the audience on Twitter in the UK, the number of users we have and the particular demographic.

Q400       Simon Jupp: Based on what they tell you to bypass your own rules?

Ronan Costello: As I said, we can conduct research on our user base in the UK—the extent of the user base and the information that they provide to us. We then serve advertising content within the bounds of our advertising policies, which prohibit the promotion of certain products and services, and we robustly enforce those policies evenly across the influencer community.

Simon Jupp: The same question in relation to inappropriate products that could be seen by children and that are subject to age verification systems—Tom next.

Tom Gault: As you rightly say, we have stricter policies for people who are under the age of 18. For example, in branded content and advertising, you cannot have adverts relating to weight loss products and cosmetic procedures.

There were some questions there around people who may be over or under 13. You need to be over 13 to use Instagram and there are a number of steps we take to ensure that that is the case.

First, at sign-up we have a number of technical measures. The sign-up screen does not state what date of birth is required, so the individual will most likely put in their genuine age. If they then try to change it on a second attempt, we can fail that attempt without telling them why, so they will not know it is because they put in an age under 13.

After that stage, we have mechanisms in place to ensure that under-13s who may have been trying to lie about their age are not on Instagram. That includes a reporting form that anyone can use, even if you do not have an account, and AI detection. If you are reported for anything, our team of reviewers are trained to look at signals as to whether you may be over or under 13. That includes things like “Happy birthday” in relation to an age that does not match the one that you gave us at sign-up. Those are the things we consider in trying to ensure age assurance.

Finally, we are working more and more on building a safe experience for under-18s. On the organic side, there are lots of tools, plus the parental centre that we have talked about, which we will be launching this year. In addition, last year we changed things so that advertisers cannot target under-18s based on their interests; they can only do so based on age, gender and location. You can no longer be targeted according to the interests an advertiser might want to select.
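
As a rough sketch of the targeting restriction Tom Gault describes (the field names and campaign structure are assumptions for illustration, not Instagram's real ad stack), stripping interest-based criteria from any campaign aimed at under-18s might look like this:

```python
# Hypothetical sketch: for audiences that include under-18s, keep only
# age, gender and location targeting and drop everything interest-based.
# Field names are illustrative, not Instagram's actual API.

ALLOWED_FOR_MINORS = {"age", "gender", "location"}

def sanitise_targeting(targeting: dict, audience_min_age: int) -> dict:
    """Return the targeting spec an advertiser is actually allowed to use."""
    if audience_min_age < 18:
        return {k: v for k, v in targeting.items() if k in ALLOWED_FOR_MINORS}
    return targeting

campaign = {
    "age": "13-17",
    "location": "UK",
    "interests": ["fitness", "weight loss"],  # dropped for a minor audience
}
print(sanitise_targeting(campaign, audience_min_age=13))
# {'age': '13-17', 'location': 'UK'}
```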

Simon Jupp: Thank you. Elizabeth, the same question to you for your platform.

Elizabeth Kanter: There are a few different aspects to this. Broadly speaking, we have a suite of policies that require age targeting. I can provide you with a list; it would include things like HFSS, or ads related to any sort of body image—those kinds of things. We also do not allow alcohol, tobacco, crypto or get-rich-quick schemes. Broadly, on the product front, we have policies that simply ban certain categories.

The other thing that is unique about TikTok is that we are already regulated under the VSP regime, with Ofcom as our regulator. As part of that compliance process, we are required to have due regard to ensuring that minors do not see certain types of advertising. Those would be the categories requiring age targeting.

We are already under a regulatory obligation to prevent under-18s from seeing these certain types of categories. We also take a policy approach to banning certain categories, and age targeting certain other categories.

Simon Jupp: Finally, Iain.

Iain Bundred: I want to stress that we require users to be 13 or over to use YouTube, unless otherwise enabled by a parent or legal guardian. Part of the launch of YouTube Kids was a recognition that younger kids wanted to have access to certain types of YouTube content, and that is why we created the app. We have also recently introduced a change called “supervised experiences” to enable so-called tweenagers—though I hate that term—to have access under parental or guardian oversight.

Regarding advertising, ad personalisation is disabled for users under 18. It is not allowed on YouTube Kids at all, and, as I mentioned earlier, paid promotions are disabled on YouTube Kids and in made-for-kids content.

Q401       Simon Jupp: If I may interrupt, Iain. You can use YouTube without being a signed-up user. You can log on to the website and just view videos. Yes, you are bombarded by advertising from various avenues, but you can do that. What safeguards are in place against the scenario where someone—a kid aged 14—logs on to YouTube to watch a music video of some sort and is then shown an advert that is not appropriate for their age? How do you guard against that?

Iain Bundred: Mature content, for example, would only be served to signed-in users. We have a three-step process for age assurance, which is about trying to ensure that we fully understand the age of the user so we can give them the age-appropriate experience.

The first step, as others have said, is that self-declared age. It goes without saying that if they tell us they are 14, they would not be served inappropriate adult content or advertising. The second step is that, regardless of what they have said, we have signals and age assurance systems that try to—

Q402       Simon Jupp: I get all of that. If someone has signed up to an account, fine, lovely, dandy, but if someone hasn’t, they can log on to your website and view content. I am talking about them.

Iain Bundred: The experience of our signed-out users is like that of under-18s. In the first three quarters of 2021, we removed 7 million accounts globally of users who claimed to be over 13 but whom we suspected were younger, and 3 million of those were in Q3, so we are ramping up in this area to make sure that we are addressing that concern.

Q403       Simon Jupp: I am conscious of time, but I want to ask one final question. As platforms, how do you ensure that influencers—third parties, not advertisers that come to you directly to advertise on your platform, but those paid by advertisers to do stuff with their profiles on your platform—are not able to bypass your safeguards and advertise inappropriate products?

Tom Gault: There are a number of steps we take before someone is able to run things in these different ways. First, to monetise content using our monetisation features, you have to abide by a series of policies, which includes us looking at whether you may have broken our community guidelines previously, whether you are posting misinformation, or whether you may have had IP violations.

The reason we do that is that we want to ensure that people who are using monetisation tools have reached that higher bar. Similarly, for advertising, before you run an ad we have a series of checks before it goes live. Some of it is automated; some is by human review. It is for the reason you mentioned—that advertising may be pushed to people who do not follow that individual. Therefore, we need both the higher bar concerning the policies and an element of looking at what is in there before it goes live.

Ronan Costello: In the instance of an influencer working directly with Twitter—we have a team that works directly with influencers and acts as an intermediary between the brand and the influencer—Twitter will work with that influencer on the content itself to ensure that it is compliant with our ads policies, with the brand strategies set out by the company, and so on. For those particular campaigns, you can be sure the standards are maintained. Again, as we said, the incentive is there for the influencer not to violate the policies, because that would affect their ability to advertise and promote in future.

I would say our processes sound quite similar to the ones described by Tom. Our ads policies apply to all accounts, whether you are an influencer who has been advertising with the platform for years or you are someone who is going to do a one-day campaign on the platform, and you are putting £50 behind it on the dashboard; our policies are the same regardless.

If you have strikes against your account because of previous violations of the rules—either through your organic content, because you have been abusive to someone on the platform, or because you have fallen foul of advertising rules before—that will impact your capacity to advertise with us in future, because it would be unhealthy for the platform to let you promote. Then there are the automated and human review checks, which Tom mentioned as well, that will affect whether a campaign can launch.
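
A minimal sketch of the strikes model Ronan Costello outlines might look as follows; the threshold and field names are assumptions for illustration, not Twitter's actual enforcement logic.

```python
# Hypothetical sketch of a strikes ledger gating campaign launch.
# The three-strike threshold is an assumption based on the testimony.

SUSPENSION_THRESHOLD = 3

def can_launch_campaign(account: dict) -> bool:
    """Strikes accrue from organic rule breaches or prior ad violations;
    at or beyond the threshold, the account may no longer advertise."""
    return account["strikes"] < SUSPENSION_THRESHOLD

print(can_launch_campaign({"handle": "@creator", "strikes": 1}))  # True
print(can_launch_campaign({"handle": "@creator", "strikes": 3}))  # False
```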

Iain Bundred: The first thing is that we want to make sure appropriate content is being served to users, so if creators try to game our systems, that is not allowed and will be actioned. We set a higher bar for our YouTube Partner Programme: those who want to monetise through advertising need to meet certain standards, including our advertiser-friendly content guidelines. We try to educate our creators about this; it is important that they understand. They may not realise certain things, but on paid promotions, as I said earlier, it is really simple: it is a disclosure tool. If you lie about that, it is wrong; the content should be removed, and that will have consequences for your membership of our monetisation programmes.

Elizabeth Kanter: We have a similar approach regarding branded content, which involves a brand creating a relationship with a creator. When a creator engages in that type of advertising, they turn on our branded content toggle, which tags the post #ad. That triggers a review process to ensure that the creator is complying with our policies. Similarly, in the paid ad category, where a brand pays to run an ad on our platform, we have a whole team of colleagues who review those ads to ensure that they comply with our ads policy. That includes, where appropriate, making sure that the ads are targeted at over-18s only—for example, if they are HFSS.

That policies-based approach corresponds with the other platforms’. To participate in our creator marketplace, you must be 18 years old, have 100,000 followers, and your account must be in good standing. We take this approach across the different types of advertising we have, but, with all of them, we try to avoid the scenario that you have outlined of users taking advantage of the platform to advertise in a way that violates our policies.
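
As an illustration of the two-step review Elizabeth Kanter describes, a sketch might first check the tagged video against the category policy and then vet where its link actually leads. The banned category list is taken from her testimony; everything else (function names, the landing-page check) is hypothetical.

```python
# Hypothetical sketch of a two-step branded content review: vet the
# tagged video's category, then vet its click-through destination.

BANNED_CATEGORIES = {"crypto", "tobacco", "alcohol", "gambling"}

def landing_page_is_safe(url: str) -> bool:
    # Stand-in for a real scam/phishing detection service.
    return not url.endswith(".scam")

def review_branded_content(video: dict) -> str:
    if not video.get("ad_toggle"):
        return "rejected: branded content not disclosed with the #ad toggle"
    if video["category"] in BANNED_CATEGORIES:
        return "rejected: banned product category"
    # Step two: a compliant-looking video must not lead users to a scam.
    if not landing_page_is_safe(video["landing_url"]):
        return "rejected: unsafe landing page"
    return "approved"

video = {"ad_toggle": True, "category": "fashion",
         "landing_url": "https://example.com/shop"}
print(review_branded_content(video))  # approved
```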

Simon Jupp: Thank you, much appreciated.

Chair: That concludes our first session. Iain Bundred, Ronan Costello, Tom Gault and Elizabeth Kanter, thank you very much for your evidence today.