Joint Committee on the Draft Online Safety Bill
Corrected oral evidence: Consideration of the Government’s draft Online Safety Bill
Thursday 28 October 2021
3 pm
Watch the meeting: https://parliamentlive.tv/event/index/cf4e4a95-fbc3-4779-8e78-12f46acbadfd
Members present: Damian Collins MP (The Chair); Debbie Abrahams; Lord Clement-Jones; Lord Gilbert of Panteg; Baroness Kidron; Darren Jones MP; Lord Knight of Weymouth; John Nicolson MP; Lord Stevenson of Balmacara; Suzanne Webb MP.
Evidence Session No. 13 Heard in Public Questions 200 - 222
Witnesses
I: Antigone Davis, Global Head of Safety at Facebook; Chris Yiu, Director of Public Policy for Northern Europe at Facebook.
Antigone Davis and Chris Yiu.
Q200 The Chair: Welcome to this further evidence session of the Joint Committee on the draft Online Safety Bill. Today, we will be taking evidence from technology companies. We are grateful to our first panellists from Facebook, Antigone Davis and Chris Yiu, and in particular to Antigone Davis for making an early start to be with us on UK time for this evidence session.
I want to start off with a question to Antigone Davis. Following on from the evidence session you gave in the United States Senate, talking about the documents published by Frances Haugen, particularly in relation to the internal research conducted by Instagram on the experience of younger users that showed that one in five younger users had a negative experience, you said to the Senate in response that “eight out of 10 tell us that they have a neutral or positive experience of our app. We want to make that 10 out of 10”.
What work has been done to understand why young users, particularly young teenage girls, suffer feelings of anxiety and depression as a consequence of using Instagram?
Antigone Davis: Thank you for that important question and for the opportunity to speak to the research.
First, I want to be clear that these surveys of teens were not causal studies. That does not make them unimportant to us. They are very important for product and research direction.
What the survey showed is that in 11 out of 12 particularly challenging issues—things like anxiety, loneliness and social comparison—US teen girls found that it was more helpful than harmful. In one particular area that was not the case. That was in the area of body image. This is an area where we feel that we have a unique opportunity to really help.
Some of the work that we have already done here is not to allow weight loss ads. Other work we are doing is looking at something called “nudges”. One thing that this research showed us is that, in fact, teens find certain kinds of content helpful to them in these instances—uplifting content and content that shows a journey toward recovery. What we are trying to do is look at whether we can nudge them towards that kind of content when they are on the app.
The other thing that teens showed us as being important to them was the ability to control the amount of time that they had on the platform. We have already built some tools for that effect, but we are looking at something called “Take a Break”, where we would encourage them to take a break after a period of time or when they are looking at certain kinds of content.
The Chair: The research also said that people who had a negative experience felt that they could not not use the app. One person might say in response to that research, “Well, if you have an unhappy experience using the app, don’t use it”, but they say they cannot. What research are you doing looking at why people felt that way in the first place? What is causing those feelings of anxiety and depression among younger users? I think the study was done in the UK and the US.
Antigone Davis: One of the reasons why we do this research is to point us in the direction of additional research. In the case of time spent on the app, we have in place a number of things. For example, you can see how much time you have spent on the platform. You can actually turn off notifications so that you are not prompted to go back into the platform to see what is going on.
As I said, we are looking at things like “Take a Break”, but I think there is additional room for research in this area. For example, this year we did 400 peer-reviewed academic pieces of research. We think that there is more that can be done to understand. There are studies that are done by Harvard, Berkeley and Pew that corroborate some of the things that we are seeing. We think there is a real opportunity to do more.
The Chair: You have answered that question on the basis of time spent and “Take a Break”, but presumably for this sort of research it is not just a question of how much time you spend on the app. It is what you spend that time doing and what you are exposed to while you are there. That is why I asked what research you had done. What was it about people’s exposure to the app that made them feel heightened levels of anxiety and depression?
Antigone Davis: Those are areas where we would like to do additional research. I will give you another example of something—
The Chair: I am sorry to interrupt, but just on that point, you said you would like to do extra research. Has research been done on that?
Antigone Davis: There is ongoing research.
The Chair: Ongoing? Has research been done on that in the past and completed?
Antigone Davis: Yes. I was going to offer an example. One of the things that teens have indicated plays a role in social comparison is things like likes. We have actually made it possible for teens to hide likes so that they do not have to see those likes—
The Chair: I am sorry, that is not the point. That is not really what we are talking about. If people are seeing content—particularly young users— because it is being recommended for them, and they are seeing it because it is being promoted because the app thinks that is what they are interested in, have you done research into whether people are being exposed to a type of content that is making them feel anxiety or depression?
Antigone Davis: With regard to specific kinds of content—for example, suicide and self-injury content or eating disorder content—we have policies against that content. Those policies are informed by work that we do with experts in the area. We do not allow, for example, the promotion of suicide and self-injury content. We do allow people to speak about their issues or their concerns and their journey towards recovery. That is because our experts have said to us that finding a place where they can talk about their journey towards recovery can be helpful, whereas things that promote can be harmful. With those experts we have developed policies, and then use AI to remove that content.
The Chair: I appreciate that that is what your policies may be. The question is, where people are experiencing high levels of anxiety and depression, is their experience different? Regardless of what the policies are, are they being exposed to increasing levels of content like that?
It was very striking that, in Frances Haugen’s opinion, when she gave evidence to us, she felt that the more vulnerable you were, the more likely you were to see harmful content because that is the way the recommendation system works.
Antigone Davis: I cannot speak or surmise about that particular issue. What I do know is that we have policies against content that may be problematic for certain people. That is informed by experts, whether it is suicide and self-injury content or eating disorder content. Not only do we have policies to remove the promotion of that kind of content, but we actually put in place things like resources that pop up if people are searching. Take, for example, an individual who may be looking for that content. We will actually pop up resources to direct them towards support and away from that content.
The Chair: It is not so much about people searching for it, but whether it is recommended to them. I understand what your policies are in this area, but have you done research that looks at whether—regardless of what your policies are—young people who feel heightened levels of anxiety and depression after having used Instagram are more likely to have been exposed to content that they may find harmful and upsetting, or may make them feel that way, because it has been recommended to them, and not because of the amount of time they have spent on the app or whether they have searched for it or not, or what your policies are, but just because that has been their experience? Is that something you have researched?
Antigone Davis: With regard to our recommendation policies, we have even stricter rules. This is exactly the kind of work that we are doing with external experts, as well as internally, to understand, so that we provide the best experience. I want to be very clear. We have no business incentive and no commercial incentive to provide people with a negative experience. We want to provide them with a positive experience that supports their well-being. That is why we do the research that we do.
The Chair: I am not sure what research you do into that negative experience. We are only having this conversation because these documents were put in the public domain; otherwise, we would be unaware of the sort of research you do. It is not at all clear whether you do any follow-up research in terms of understanding why that is the case and whether your policies or systems need to change. Clearly they do because, by your own admission, in this case one in five users had a bad experience.
Antigone Davis: We share a good deal of our research. I mentioned the 400 peer-reviewed academic studies this year. We also have a research blog where we share our research. We have shared specific research around well-being as well, and we are looking to share more research. One of the things that is a particular challenge in the area of research is how we can provide academics who are doing independent research with access to data really to study these things more deeply.
We are currently working with some of the leading academic institutions to figure out what the right rules are to allow access to data in a privacy-protective way. One thing that we are quite supportive of, in terms of some of the legislation that we are here to talk about today, is working with regulators to set some parameters around that research that would enable that research and would enable people to have trust in the research that is done with access to our data in a privacy-protected way.
The Chair: All of that is done, it would seem to me, at the instigation of Facebook, where Facebook is deciding what people have access to and what it researches. I have to say that our interpretation of the regulatory powers under this Bill is different from yours, in that it is not for Facebook to set parameters around the research that could be required or asked for, or even gathered by the regulator, but that the regulator should have the right to request information, just as the Information Commissioner does.
Antigone Davis: Certainly. I was not suggesting that the regulator would not have the right to request. What I was suggesting was that the regulator could help in determining, for example, when academics want access to the data that we have, the rules that would be put in place to ensure that people feel that their data is being protected and shared in a privacy-protected way.
The Chair: Looking at this individual research document in particular, there has been a lot of criticism from Facebook about the publication of the document. You say that in your view it gives a partial view of the research that is done, but if it had not been published, there would be zero view of this work. Do you think that people have a right to know this sort of information and the sort of research Facebook conducts internally on the user experience of its apps?
I think your sound may be down. We could not quite hear you.
Antigone Davis: I can hear you now. You cut out for a minute. I apologise.
The Chair: I will repeat the question if that is okay. As I said before, we are only having this conversation about the impact of Instagram on young users because a research document was published about it. Facebook has been very critical of the decision to publish that document, saying it does not believe that it gives a full view of the service being provided.
Do you think that people have a right to know this information? Do people have a right to know if Facebook and Instagram are researching the negative impacts of their apps on users, and then a right to understand what is being done to mitigate it?
Antigone Davis: We are very committed to providing more transparency around the work that we are doing. We have taken steps to do that. We have a research blog where we share a bunch of our research. We share research in our newsroom as well. We do studies that are published publicly. These are academic studies with other academics that are peer reviewed. We would like to be able to do and share more. We hope that we will be able to have some privacy regulations around access to data on our platform so that people can also do independent work.
One thing I would like to say about this particular research is that a lot of what was shared is not new or particularly revealing compared to other studies, for example, that have been done by Harvard, Berkeley or Pew. We really try to share research that may shed additional light and help people solve these issues.
To give a particular example, in the context of child safety, we have done some work on our own reports to the National Center for Missing and Exploited Children. It has provided some tremendous insights into how we might solve or address the sharing of that content. We intend to do, and would like to do, more of that.
The Chair: What is the process for sharing research information within Facebook as a company? If someone working in one of the integrity or safety teams has seen something that they think is very disturbing, ultimately who is that reported to?
Antigone Davis: It is probably a bit complex to say that there is one single way that this works. We have numerous researchers doing different types of research. We have someone who heads up our research efforts. We work across our different teams. The research informs the work that we do on safety and security. It is designed to ensure that we are making the most impactful changes and that we are putting in place the best policies. It is ongoing and iterative because things change on the platform. This is a very dynamic platform, and we live in a very dynamic society. As societal issues change, we need to adjust and change as well.
The Chair: Ultimately, to what senior level within the company do these decisions go? Some of them would seem quite fundamental. The story that was reported about whether the platform should favour trusted sources of news and information over friends and family content during the US election is a fairly fundamental decision. I cannot imagine it was a decision just taken by the civic integrity team. At what level within the company are some of these quite fundamental decisions made?
Antigone Davis: These decisions are across the company. As the global head of safety, I am working with numerous teams. I am working with our research teams; I am working with our leadership teams in research; I am working with the people who are actually building the technology, as is my team.
The Chair: Is it you? Are you the top person—the final decision-maker?
Antigone Davis: I am the global head of safety. My team works with teams throughout the globe and also across our product teams, our policy teams and our research teams. These decisions are quite collaborative. They involve using data and science to understand and make good decisions and determine what is most impactful. They also involve the work of external experts. We take a multilayered approach, where we have policies and AI that we are using. We give users control. We are also working with experts because there are always areas where we can improve.
The Chair: Where does the buck stop?
Antigone Davis: It is a company filled with experts and we are all working together to make these decisions.
The Chair: I am not really sure who the decision-maker is in that case—whether it is you, the whole company or the whole breadth of people in the company. I think these are things that people want to know. People want to know how much interest Mark Zuckerberg takes in this and how much interest he takes in the kind of research work done by the integrity teams and the safety teams. They want to know how seriously he takes the genuine concerns that this research has caused.
Antigone Davis: Some very good evidence of how seriously we take these issues is reflected in our investment in this area. We have spent $13 billion since 2016. We are on track to spend $5 billion this year. We have 40,000 employees who work on safety and security at Facebook. I think it is very important to understand that we have no business interest—no business interest at all—in providing people with a negative or unsafe experience. Our platform is designed to give people an opportunity to connect. Three million businesses in the UK use our platform to grow their businesses. If they are not safe and they do not feel safe, they are not going to use our platform. We have a fundamental commitment to these issues.
The Chair: You said that the company has spent $13 billion on safety and security over the last five years. How much did Facebook earn during that period of time?
Antigone Davis: How much what during that time?
The Chair: How much did the company earn during that period of time, from 2016 to 2021? Sorry, we cannot hear you.
Antigone Davis: I do not have those exact numbers.
The Chair: I can tell you. Roughly, and this is a slight underestimation, it was about $275 billion during that period of time. The investment is about 4%. Do you think that is enough?
Antigone Davis: What I think is that we invest heavily in providing people with the best experience. If I did not think that we put safety and security at the front of our decisions, I would not be here. I have dedicated my entire career, or the better part of my career, to these issues. I would not be here if I did not think that we take the issue seriously.
The Chair: As I say, it is about 4%. I believe that the total number of people working on it is about 40,000. If the 10,000 people in Europe being hired to work on the metaverse were diverted to safety and integrity, that would be a substantial increase.
That concludes my questions for the time being. I would like to bring in Beeban Kidron.
Q201 Baroness Kidron: Hi, Antigone. Before I ask my question, I just want to go back to something you mentioned. You said that “Take a Break” is associated with stopping people after they have watched too much or a lot of—I am not sure; I do not want to quote that bit—a certain kind of content. What did you mean by “a certain kind of content”?
Antigone Davis: Let me clarify that. We are in the process of developing that. We will be figuring out what makes the most sense to be the most impactful—whether it is the time spent or whether it looks like somebody may be spending time on a singular kind of content. Those things are being sorted through. The idea is essentially to offer people an opportunity to take a break.
You have probably seen in the context of other apps—for example, streaming content for a long period of time—where they may check in to see whether you are still watching. This is really designed to look at where we might be able to prompt people to think about their use of the platform.
Baroness Kidron: I was really struck, when Frances gave evidence here, that she said, several times, that there are many people inside Facebook who want to do the right thing, who are trying to do the right thing and who are good people who do not like the outcome. You mentioned the word “trust” earlier when you were talking to the Chair. Do you think that Facebook is experiencing a loss of trust?
Antigone Davis: What I would say is that we have been very focused, as you may know, on trying to provide additional transparency. For example, we have transparency reports that we put out on a quarterly basis. These reports track our removal of content, the prevalence of content on our platform, and are designed to help us be held to account. It is to hold ourselves to account and for others to hold us to account.
We have also put in place the oversight board to help hold us to account and to review our decisions. We are calling for additional regulation in this area because we think it is important—
Baroness Kidron: I am sorry, Antigone, but my question was really about whether you feel that the company is experiencing a lack of trust. I hear that you are doing all those things.
Antigone Davis: I am not sure that that is exactly the question to be asking me. What I do think is that our interest in regulation is, in part, to give folks like yourselves who are elected by the public an opportunity to hold us to account, because holding us to account helps to build trust.
Baroness Kidron: The reason I wanted to ask that in particular is that you have good people trying to do good things, and you say you are trying to do good things, yet you seem to really struggle to uphold your own terms and conditions. You know, because you and I go back a bit, that we have had research that shows accounts registered to children that are full of self-harm, pro-suicide and pro-ana, the lot, but when we come to you, you say, “We don’t like your methodology, and anyway our policy’s against it”. You have just said that to the Chair: “We have policies against it”. But we have taken evidence from bereaved parents who find their kids’ feeds full of that material.
What I guess I am trying to get at is not really how much you spend, but is the system fit for purpose? Is the material that is ending up in the hands of kids, which is against your policies, being stopped at source? Is Facebook safe? Is the system safe? When we hear from people that it is because of the growth model, the spread model and the weighting of the algorithm, surely that is something we could all work on together to say, where there is growth that is harming kids, we have to stop the growth there. We have to stop the share there. We have to stop the mirroring. We have to start identifying this material. It cannot just be, at the end, “Take a Break”. We do not want our kids taking a break from self-harm. We want your policies to work.
Antigone Davis: To your point, “Take a Break” is one particular issue. We use a multilayered approach. We have AI at scale to try to remove potentially harmful content. We have reviewers who are also in place and create opportunities to report. We work with experts on the ground to fill in those gaps and to identify. We are constantly getting better.
We are a very large platform. Things are going to get through. There are also areas where our AI is 90% effective at finding things before they are reported. We would like to get that to 95% or 98% effectiveness, and we are working in that area.
I agree with you that there is an opportunity for partnership here. We agree that there is an opportunity for a regulator here. Overall, we are quite supportive of the Online Safety Bill. There are pieces of it that we need to work through or share some of our insights in terms of the work that we have done to make it workable, or even more effective, but we are supportive of that kind of partnership with the regulator.
Q202 Baroness Kidron: I have a couple of very specific questions. I am sure you know that we got a letter from parents who struggled to get information about the Instagram feed of their daughter who committed suicide. I understand that it is not possible from the policies of Instagram for them to have access to it.
I really want to understand. I do not want to go through what your policy is, but I want to understand who that is protecting. Who, in the ecosystem, is protected by a bereaved parent not being able to see the Instagram account of a child who has now passed away, when it may give them some sort of closure or some sort of information about what happened to their child? I am thinking about it in terms of both humanity and probate. Anything else that belonged to that child would go to the parents.
Antigone Davis: I know the specific case that you are speaking about. We are working with that particular family. It is extraordinarily hard. Look, I had a brother who died by suicide. One of the particular challenges when someone dies by suicide is trying to understand why. We are working with that family.
We also have privacy obligations to the people who use our platform. One of the things that we have done to try to address this particular kind of circumstance is something called legacy contacts. That enables a certain amount of control and access. This is the kind of work that we are doing, but we are working very specifically with that family.
Q203 Baroness Kidron: I am not sure that I accept that answer 100%, but let us move on to one last thing. It seems that, where material is IP protected, it gets much quicker attention. This is not a Facebook question; it is right across the sector. I am interested to know why it is easier for you to identify material that is IP protected than, say, self-harm or pro-suicide material.
Antigone Davis: I do not work on IP-protected content, and that is not my understanding of how our systems work. I can take the question back to find out the answer specific to that comparison, but I do not have that answer.
Q204 John Nicolson: Thank you for joining us. You said to the Chair that you would like to share much more evidence than you do. What is stopping you?
Antigone Davis: One of the things I talked about quite specifically is how we can provide people with access to data. One of the things that I think is very important in all of this is allowing independent researchers to be able to do work and to look at the impact, aside from us.
John Nicolson: Agreed, so why not let them do more of it?
Antigone Davis: We are working to try to do that. We are working with academic institutions to—
John Nicolson: What does that mean? “We are working towards it” means “We are not doing it now”. Why are you not doing it now?
Antigone Davis: Because there are privacy obligations we have around people’s data. What we would like to be able to do is provide that data to researchers in a way that meets those privacy obligations. Again, this is an area where a regulator like Ofcom could say to us, “Here are the privacy obligations that have to be met”. We could show that we have met those obligations, and that would enable researchers to do the work that they want to do.
John Nicolson: You were clearly frustrated by your experience in the Senate, so I would like to give you an opportunity to go back over some of the comments from a couple of the Senators, at least. You said that the research presented had been mischaracterised. For clarity, for teen girls who reported body image issues, what percentage felt worse because of Instagram?
Antigone Davis: I would have to get you the exact number—
John Nicolson: Well, it is from Facebook’s own head of research.
Antigone Davis: We actually published a blog, which I can get you—
John Nicolson: I have the figure. I was just inviting you; you expressed frustration about the mischaracterisation, so I thought you would have the figures at your fingertips. It is one in three. One in three teen girls feels worse because of Instagram. What about those who feel anxious? Do you know what that figure is?
Antigone Davis: Let me clarify that one in three for a minute. Of teens who said that they had an issue with body image, two out of the three actually said that they found the experience either neutral or—
John Nicolson: Hold on a second. That is clearly designed to distract. Far more important than that is the number of children who feel worse. Those are the ones we should be focusing on, not the ones who feel better. Of course, it is great that some people are feeling better, but it is a very high figure for a third to find that their condition is exacerbated because of Instagram; it is a shameful figure. On the question of anxiety, that is 12%. I am guessing that you will not know the figure for loneliness; 13% of users of Instagram who felt lonely thought it was because of Instagram.
What Instagram is doing is exacerbating children’s problems. You are not coping with that. You should be handing over your research to folk who can do better than you at assessing the damage that Facebook and Instagram are doing.
Antigone Davis: You are correct that somewhere between 12% and 30% of the various people who are identified as having issues have issues. We take those things quite seriously. In the context of one of the issues—let us take eating disorders for a moment, because that is the one with the highest number—we do not serve weight loss ads to teens. We surface resources to teens who may be searching for that content. We work with experts to identify how we can put up proper warning systems, for example, for people, and surface that content, to try not to recommend that content.
If you look at something like suicide and self-injury—another very serious issue—not only do we surface resources when someone searches for it, but we have actually built in the opportunity, right within our platform when we identify that content, to flag that and to offer—
Q205 John Nicolson: Listen, it is not working because the figures are too high. Whatever you are doing is not working, so let us move on to another area that was identified by Senator Blackburn at the Senate. That is the whole question of human trafficking.
You have taken some action on human trafficking. You took down some offending pages. Do you remember why you took down those pages?
Antigone Davis: We take down those pages because they violate our policies. We have very strict policies against trafficking.
John Nicolson: No; you took down those pages because the App Store said that it would remove you. Apple said that it would remove you from the App Store if you did not do it.
Antigone Davis: Quite the contrary. It is my experience that we take down these pages because they violate our policies. We have invested in AI to identify those pages, and—
John Nicolson: So why did you take them down only after Apple issued you with that threat?
Antigone Davis: Our AI is not perfect. It is something—
John Nicolson: You can say that again.
Antigone Davis: It is something that we are always seeking to improve. People also flag that content for us. When they flag this content for us, we remove it.
John Nicolson: Come on, that was not Apple flagging the content. Apple threatened you with something financially disastrous for you, which was to have Facebook removed from the App Store. That is when you leapt into action. It is a recurring theme with Facebook. You offer to open up more research when the whistleblower opens up and gives us lots of information about your secretive company. You leap into action on human trafficking, but only after you are threatened with removal by the App Store.
At this committee, we have heard evidence of the result of all this negligence. We know about human trafficking to the Middle East and Facebook’s role in it. We know about the stoking of ethnic violence from the Wall Street Journal. Covid misinformation is spreading, with only 10% to 20% being picked up. We know about that from the Facebook whistleblower. At home, here in the UK, the evidence shows that 24 children a week are groomed across your platform.
All this rather suggests that Facebook is an abuse facilitator that reacts only when you are under threat, either from terrible publicity or from companies, for example Apple, that threaten you financially.
Antigone Davis: Quite the contrary. That is not my experience at all at Facebook. For example, in the context of Covid, our services have played an incredibly important role for people who have been isolating, particularly young people who could not go back to their schools or universities, to connect them. We have taken the issue of Covid misinformation incredibly seriously. We are working with the NHS. We are working with the UK Government. We have taken down 20 million posts that violate our policies. When health authorities identify content that is false and poses imminent harm, we remove it. More importantly, we have actually put people in touch with authoritative content in this area. We are running the world’s largest campaign in the area of vaccines for Covid.
I do not agree with you that our actions are entirely responsive. In fact, I think we are incredibly proactive. We have built some of the world’s leading classifiers to identify content that is potentially harmful and remove it.
John Nicolson: Are you glad that Frances came forward as a whistleblower?
Antigone Davis: I really cannot speak to her motivations. I can speak to our fundamental dedication to the issue of safety and security and to the teams that we have built globally, to the teams that we have in every country and to the specific teams that we have—
John Nicolson: The teams you have in every country? You do not have teams in every country. Facebook is bad enough in the English language, but that is where you have most of your content moderation. If you are a kid in most of the world, you can be abused with impunity because there is no content moderation going on at all in most of the world’s languages. You managed to do pirate language on Facebook, but you did not manage to do content moderation in multiple world languages where real children live and get abused without any moderation of content at all.
Antigone Davis: Again, I would disagree with that characterisation. We have language review in 70 different languages—
John Nicolson: And how many languages does Facebook use?
Antigone Davis: We have native speakers who do this because it is important not only to have language coverage but to have context. Where there are places where we may not be able to get context, we have systems that allow people to filter out content based on the things that they are seeing that we may not be able to catch through our own AI.
John Nicolson: There are hundreds of world languages and you moderate content to a tiny degree in only a small number of them. I am afraid that English is the best of them all, and that is pretty grim.
Q206 Debbie Abrahams: Good afternoon, Ms Davis and Mr Yiu, and thank you very much for joining the committee this afternoon. I wonder if you could describe the culture regarding safety that exists in Facebook, and if you are able to give any specific examples. I am conscious of time, so if I move you on after a few sentences, please do not be offended.
Antigone Davis: Thank you for the question. It has been my experience that we have a very rigorous culture when it comes to safety and security. We actually put forward and challenge ourselves to look for gaps. I have specific subject matter experts across women’s safety and across child safety. We have people who have prosecuted, for example, those who exploit children, and people who have dedicated 20 years of their life to supporting domestic abuse survivors, who work across these issues and who challenge the teams to do better. They look for gaps and do analysis with experts externally to find those gaps, to make improvements.
Debbie Abrahams: How is that manifested in a staff handbook or in training? If I may, I will give you the example that was given to us by Frances Haugen earlier this week. She said that she did not know who to escalate concerns to, or how, regarding the posts or groups that she was responsible for.
Antigone Davis: We have internal systems, where people can escalate any particular thing that they might be seeing and get a response. More than that, we have built safety and integrity teams across each of our product teams. For example, we have a central integrity and safety team, but we also have integrity teams in every product. Teams like mine are working both with that central integrity team and with the individual integrity teams. There are numerous ways to escalate a particular concern or draw attention to it at any moment in time, really.
Debbie Abrahams: Is that documented? Would the committee be able to see how you are managing that risk?
Antigone Davis: I am not sure exactly how we would necessarily provide that to you. In fact, in the context of the Online Safety Bill and the regulator, this is an area where it would be great to sort through how we would be able to provide you with the information that you need to do your work. What I can tell you is that it is spread across and through the company. I take a 360-degree view in the company.
Debbie Abrahams: Sophie Zhang, another former Facebook employee, also gave evidence to the committee last week. She reported that, when she alerted one of the vice-presidents responsible for integrity regarding bot accounts and posts promoting the President of Honduras, it took 11 and a half months to be investigated. That does not look like a very robust system where safety and security is a priority for the company, does it?
Antigone Davis: I cannot speak to that specific example, but I know that most of the things that are brought to our attention are managed within 48 hours, for example. I cannot speak to that particular issue, but what I can speak to is a very rigorous process. Every time a product is launched at Facebook, it goes through a rigorous review on privacy, safety and security.
Debbie Abrahams: If it is so rigorous, why has self-regulation failed so miserably?
Antigone Davis: I would disagree with that characterisation. What I do think, though, is that there are some decisions that we make that are very hard decisions. We are dealing with societal issues, particularly in relation to content, for example. It would be very useful to have the people who are elected and represent the public’s interest play a role in sorting those decisions and having industry standards in place that we adhere to. I think that would be very useful for the industry.
Debbie Abrahams: But, Ms Davis, you have just mentioned that you do not understand why it took 11 and a half months. You did not disagree that it took 11 and a half months for an investigation to be completed on quite a time-sensitive issue, which threatened the democracy of a particular country.
Antigone Davis: I cannot speak to that specific thing. What I can say is that we take a very proactive approach. We do our own investigations internally to identify situations and see where we may have gaps. In fact, some of the research that was leaked is actually research on us doing our own work internally to identify gaps, and then to take action to close those gaps. What has been unfortunate in the reporting is that we are missing the story about the actions that we took, which is, hopefully, some of what I can fill in today.
Q207 Debbie Abrahams: Moving on to 6 January this year and the use of Facebook Live to co-ordinate and mobilise people in the US during the insurrection there, what lessons have you learned from previous examples, such as the Honduras one in 2017? What lessons have you learned from the delays in taking action there that were applied on 6 January?
Antigone Davis: Ahead of the US election and ahead of 6 January—years before—we began working to make sure that we had the right systems in place. Numerous elections have taken place across the world where we have not had issues. In the US case, we put in place things like not recommending civic groups, for example. That is something that has continued forward.
When we have heightened situations, as we did then, we actually put in place some measures that make sense for that heightened situation and then we may remove them later. For example, you may be familiar with the use of profile frames. These are things that you put on your profile. People use them for charitable causes, for example, or for important social movements in the States, such as Black Lives Matter. We removed those and made them unavailable in the run-up to 6 January because we were concerned about safety issues on the ground. We have now put them back in place. These are the kinds of things that we are learning, and then distributing that learning across our platform for other areas as well.
Q208 Debbie Abrahams: My final question is in relation to Molly Russell. As you know, Molly took her own life. Her father, Ian, is absolutely convinced of the impact of Instagram in her decision to do that, based on the communications that she left. That is really tragic, but what I would like to understand is what research you are doing to understand where Facebook or Instagram may be implicated in teenagers taking their own lives.
Antigone Davis: It is an incredibly tragic case. Her father is incredibly courageous in his efforts. I cannot speak to that ongoing investigation, except that we are trying to be very helpful.
I think some of the research that you are aware of is research designed to understand how teens, for example, are experiencing our platform, and to put in place better product improvements. We work with a number of suicide and self-injury experts, whom we meet with on an ongoing basis to identify additional things that we can be doing. For example, we have made policy changes over the course of the last few years based on their feedback. We actually provide resources and have warning screens over certain content. We are looking at how we can develop those further. It is ongoing work as we learn more. We are extraordinarily invested and have been industry leading when it comes to suicide and self-injury.
Q209 Baroness Kidron: I noticed in that exchange that you said you have strict policies on human trafficking, and you took off 20 million pieces of material that violated your policy on Covid. I am just wondering whether one of the tools is making directors liable for consistent failure to uphold your own policies. I come back to the point about the amount of self-harm and pro-suicide material that can be found that does not meet your policies. I am interested in whether you think that is the kind of regulation that you would welcome in a way, so that you know what the rules of the road are, if you see what I mean.
Antigone Davis: I do, Baroness Kidron; thank you. We really do welcome a regulator with proportionate and effective enforcement powers. I think criminal liability for directors is a pretty serious step. I am not sure that we need it to take action. We have invested billions of dollars, as I said; $13 billion, and $5 billion is on track for this year. We have developed world-leading AI. It is a very serious step.
I do not know if Chris, my colleague, has anything else to say—
Baroness Kidron: Before we go to Chris, the other thing you have repeatedly said, Antigone—I can see why—is, “It’s not in our interests”, which is why I invited you to talk about trust. It was not an invitation you took up, but you keep on saying, “It’s not in our interests”. Actually, we have had an ex-Facebook employee, a Nobel Peace Prize winner and many NGOs explaining exactly why it is in Facebook’s interest to keep people online, keep people watching and keep people engaging. In that system, there is the spread of material that could actually be slowed down by having a whole lot more stickiness upstream. One of the things that would be sticky would be, perhaps, slightly stricter adherence to your own policies. I am interested in how you can say it is not in your interests when we keep on hearing that it is sort of a business interest to keep everyone there and clicking.
Antigone Davis: We are a business. It is in our interest for people to have a positive experience. I want to go back to your statement that I did not take up your invitation. I think that our interest in regulation is that we see it as an opportunity for us to work together with the regulator on issues that are very connected to societal issues. We welcome that sort of engagement, where you would look to see if we are meeting the standards that are set for us.
Q210 Lord Clement-Jones: Hello, Antigone. I want to come to the algorithms right at the heart of your platform. Frances Haugen gave really powerful evidence to this committee, as indeed she did to the Senate, that Facebook’s algorithms promote harm and incite violence. She talked about the way that extreme messages were amplified by the Facebook algorithms that promoted virality and so on.
It would have been very good to have asked Sir Nick Clegg in person about the quote that I have in front of me from a recent interview he gave to ABC News. He said, “If you were just to sort of across the board remove the algorithm, the first thing that would happen is that people would see more, not less, hate speech, more, not less, misinformation, more, not less, harmful content”.
To many of us, that statement comes as quite a surprise. Which algorithms was he talking about removing? Did he quite understand the nature of the algorithms being used? How many algorithms do you use to manage the promotion, moderation and downgrading of content on your platforms? How are they weighted? It seems to me that that is a fairly glib statement about the way these algorithms interact, in particular the way that your algorithms promote extreme content. We will come on to 6 January in a minute.
Antigone Davis: Our algorithm for our newsfeed, which is what you are talking about, takes in thousands and thousands of signals. Some of those signals are exactly designed to demote content. To give you an example of what he is talking about, let us take clickbait. Clickbait is something that sensationalises content through headlines. It is often divisive content. We actually have signals to reduce the distribution of clickbait.
Another example is misinformation, which is something that you mentioned. When a third-party fact-checker labels content as misinformation, that is actually demoted in our newsfeed. If we were to remove the algorithm, all those things could rise to the top. That was his point.
Lord Clement-Jones: Are you saying that it is a complex algorithm or one of several algorithms that are being used?
Antigone Davis: I am referring to a very complex algorithm. That is the algorithm that feeds you content in your newsfeed. We have actually done some interesting work in this area to provide more transparency and control. At the top of the newsfeed, there is actually a filter where you can see what we have identified as your favourites. You can change that by adding things. You can also turn it off and just see things in a chronological order.
Lord Clement-Jones: Would not a much more helpful reply have been to say, “Yes, we could tweak the algorithm so that extreme content was not promoted and, yes, there are many things that we can do to make sure that extreme content does not go viral in that way”, rather than saying, “Oh yes, get rid of the algorithm and then that will let all hell loose”?
Antigone Davis: I cannot speak to the full interview that occurred, but I think the point is that we can, and we do, tweak the algorithm in that way. What he was trying to explain was what the algorithm or your experience might look like if we did not do that.
Lord Clement-Jones: You are saying that it is perfectly possible to tweak and change the algorithm. What have you done since 6 January to do something about the way that algorithms promoted the hate speech involved then and, in a sense, the anti-democratic outpouring on Facebook that occurred, which effectively led to an insurrection on 6 January?
Antigone Davis: I disagree with that characterisation of what happened on 6 January. The individuals who are responsible for 6 January are the individuals who acted against democracy and climbed the walls of our Capitol. We put in place serious measures to try to address those issues. We did that well before 6 January, and we included measures like turning off profile frames and not recommending civic groups. We are working with officials to bring those individuals to justice.
Lord Clement-Jones: Have you done anything since 6 January to tweak the algorithm? Have you acknowledged any duty to change the algorithm?
Antigone Davis: We take our role in all of these societal issues very seriously. We took action before 6 January. We are constantly working on our algorithm to ensure that we are providing people with the best experience, a positive experience, taking in signals to ensure that that is happening. We will continue to do so and will continue to work with experts to ensure that we are doing a good job.
Lord Clement-Jones: Is not the truth of the matter that, even though you can identify what you might call bad content, you are still allowing your algorithms to promote that bad content? We have the report from the Center for Countering Digital Hate, for instance—the “Disinformation Dozen” report. Does that not illustrate that there is a huge amount of complacency on the part of Facebook?
Antigone Davis: Let me pull apart a couple of things. The study that you are referring to suggested that 12 individuals were responsible for all of the Covid misinformation. That is actually inaccurate. We have taken down millions of pieces of Covid misinformation and, more than that, connected millions of people with accurate information.
I agree that we always want to be doing better. We use AI to address things at scale. We also have human reviewers to try to catch things as well, and work with experts. We always want to do better. In the context of hate speech, we have actually been able to reduce the problems of hate speech on our platform to 0.05%. We want to be able to do that across all kinds of content that may potentially be harmful. That requires ever-improving AI. It requires working with experts to understand things that we may be missing and to build additional signals in our systems.
Lord Clement-Jones: You talk about taking it down, but is not the key battle the algorithmic amplification that is taking place? There is an awful lot of hate speech coming out. You may be taking down a great amount, but there is a huge amount being amplified. Perhaps you could explain the oversight issue. Who makes a decision within Facebook on the tweaking of an algorithm appropriately to address the kinds of issues that you and I are talking about?
Antigone Davis: These are decisions that involve complex teams looking at the signals and understanding numerous thousands of signals. I do not agree that we are amplifying hate. I think we try to take in signals to ensure that we demote content that is divisive, for example, or polarising. In many countries where Facebook is on the rise, we are actually seeing polarisation on the decline. These are societal issues, and on a social platform they are going to play out. We are trying to do our part to address those issues and to ensure that people are having a positive experience on our platform and that they want to come back and utilise our platform to build their business or grow their football club.
Lord Clement-Jones: Yes, but you have not answered my oversight question. Who is effectively making the decision on tweaking those algorithms appropriately, if you so decide?
Antigone Davis: It is a complex team of various people. Some of them work on safety and security. Some of them are working on understanding the kind of content that people connect with, whether it is their family and friends or the groups they are using. It is a complex group of decision-making.
Lord Clement-Jones: But this is business critical. There must be some sign-off involved.
Antigone Davis: We work across a number of different teams. You will see our community integrity team. You will see the people who are developing the product involved. You will see my team involved in weighing out these rather complex decisions.
Lord Clement-Jones: So it is very difficult to allocate responsibility.
Antigone Davis: I would disagree with that characterisation. I take responsibility and my team takes responsibility for trying to drive issues of child safety, women’s safety or LGBTQ communities. I do not think that people take their responsibilities here lightly.
Q211 The Chair: Antigone Davis, I want to check something to make sure I heard it correctly. Did you say that Facebook does not amplify hate?
Antigone Davis: Correct.
The Chair: So Facebook has never recommended hate speech or similar content to its users.
Antigone Davis: I cannot say that we have never recommended something that you might consider hate. What I can say is that we have AI that is designed to identify hate speech. It is ever-developing AI. It has reduced the prevalence of hate speech on our platform to 0.05%. We have zero interest in amplifying hate on our platform and creating a bad experience for people. They will not come back if they do not feel safe. Advertisers will not let it happen either.
The Chair: I am sorry; I just do not believe that. There was a report last year that was published in the Wall Street Journal. I think the study was done a couple of years before. I believe that the study was focused on Germany, but it could have been other countries too. It said that 60% of people who joined extremist groups on Facebook did so on the active recommendation of the platform. Is that not an example of the amplification of hate speech or extremism at some scale?
Antigone Davis: I cannot speak to that specific report, but we have put in place polices that actually apply across—
The Chair: I am sorry to stop you, but what do you mean that you cannot speak to that report? That report was reported around the world. It was based on internal studies and reported in the press last year. I would have thought, given your role, that it is the sort of report you would have been all across. Given what you have said today, I would expect you to have been shocked and appalled at that finding.
Antigone Davis: Let me recharacterise. I do not have that specific report in front of me, but what I can say is that, in relation to hate speech, we work to ensure that we are not promoting content that violates our policies. We have AI to remove hate speech from our platform. We have reduced the prevalence to 0.05% on our platform, and we are continuing to do additional work in that area.
The Chair: I find the 0.05% statistic slightly misleading. That is a global figure, I would imagine, across everything. What we are interested in, because this is where the harm is caused, is how that concentrates in particular areas. There will be areas where there is a huge amount of hate speech, and obviously some of that is removed from the platform. You will be aware of that yourself.
With regard to that study on Facebook recommending extremist groups, I find it disappointing that someone in your position—I appreciate you might not have the report in front of you—is not across those sorts of issues. We constantly hear that you have policies in place, but we want to know, in the context of the Online Safety Bill, and the regulator will want to know, whether those policies are effective enough. Are they good enough? It is not enough to have effective or clear policies. They have to be properly enforced.
You said earlier to John Nicolson on his question about harmful content being directed to younger users that one of the things you do is “try not to recommend” that content. Again, is that good enough?
Antigone Davis: I think this is one of the reasons why we are quite welcoming of both the Bill overall and having a regulator that looks at our systems and the processes that we put in place to make sure that we are doing the job that people think we should be doing. We really welcome that. I can tell you about the prevalence that we have reduced, but we welcome the opportunity to work more with the regulator on assessing our systems and processes and making sure that we are doing the job that you think we should be doing.
The Chair: I hope that the UK regulator, when it is set up, takes more of an interest in some of these reports, in particular the recommendation of extremist content, than some of the people in the company do.
Q212 Darren Jones: Before I start my questions, I should declare that Facebook is a paying member of the Parliamentary Internet, Communications and Technology Forum, which is an all-party parliamentary group that I co-chair. I have worked with Mr Yiu in the past when he worked for Tony Blair.
My first question, Ms Davis, is this: who on the Audit and Risk Oversight Committee do you report to?
Antigone Davis: Pardon me, which committee?
Darren Jones: The Audit and Risk Oversight Committee of the Facebook board. Who do you report to on that committee?
Antigone Davis: On the oversight board?
Darren Jones: No. As a company, Facebook has a committee of the board called the Audit and Risk Oversight Committee. My question is, who on that committee do you report to?
Antigone Davis: I do not personally report to somebody on that board, but I can find out for you the interplay there and come back to you so that you have that information in terms of the company.
Darren Jones: The purpose of my question, Ms Davis, is that the risk oversight committee ought to have a view about the risk of the issues that we have been talking about today, so I am a little surprised that you do not report to it. Does that, therefore, mean that either you do not know, or it does not happen, that papers have not been submitted to that committee’s quarterly meeting summarising the online harms that we have been talking about today in the context of risk to Facebook?
Antigone Davis: We look at risk across our company. As I mentioned earlier, we have specific launch processes in place. I know that, as a company, we make reports to that committee that assess those issues. Every product that we launch goes through a process where we identify privacy, safety and security risks and what we are doing to mitigate them. I know that we make reports on risk to our board in areas where we think we might face challenges.
Darren Jones: But you have never been called to speak to that committee.
Antigone Davis: I personally have not, but people I work with have, and do.
Darren Jones: The reason I ask the question is that I am keen to understand who will be submitting the risk assessment to Ofcom when this Bill becomes law. Would that be you or somebody else?
Antigone Davis: I do not know the details of the Bill, but I think it is something that has not been worked out. Chris, is there a specific answer that you have to that particular question?
Chris Yiu: I think at this stage it is too soon. We need to see the detail in the Bill and the provisions that it brings forward before we make that determination.
Darren Jones: It should be somebody on the Facebook board though, should it not, Ms Davis, being accountable for the risk assessment under this legislation?
Antigone Davis: Again, I do not know the details of how that would be worked out. I think we would want to have someone who feels a sense of responsibility, certainly, providing that.
Q213 Darren Jones: If there are going to be criminal sanctions, I am sure that somebody will want to be responsible.
Moving to my second question, it has been reported that you have different levels of safety teams in different countries, and that you categorise countries based on tiers, with tier 1 countries having so-called war rooms, tier 2 countries having so-called operational centres—I understand that that is about 25 countries altogether—and all the other countries are in tier 3. Which tier is the United Kingdom in?
Antigone Davis: Let me speak to that particular issue. We have global teams throughout the world working on safety and security. For example, we have teams in the UK who work on safety and security with people who speak UK English, which is different from American English, to ensure that we are providing the right kind of context for the work we do.
There are countries where they may be at greater risk of violence because of the particular context on the ground. We have special teams who work uniquely on those particular issues to make sure that we have additional partnerships in place and additional information to ensure that we are taking those societal issues into the decisions we are making. That is a dynamic risk, as the world is a dynamic place.
Darren Jones: Which tier is the United Kingdom in today?
Antigone Davis: I will see if I can get you that specific information.
Darren Jones: That would be useful to know. If we are looking at the resource that Facebook is putting into safety measures in our country, and we are relegated to a tier where we get less resource than other countries, that will be important for how the Bill operates in practice.
Ms Davis, you talked about working with external experts in the research that you undertake. How many of those experts are paid by Facebook, either directly or indirectly by donations to their organisations?
Antigone Davis: I do not know the exact number because there are so many different experts that we work with. Some are paid and some are not. Some are paid for their time. Some are experts we do programmes with; for example, we have a digital literacy programme that we offer around the world for free for educators. We have experts who inform the resources and the lesson plans, and we pay them to build out those lesson plans and bring their expertise. There are other experts we do not fund who provide their expertise. It really depends on the types of projects they are working on.
Darren Jones: You said earlier that you were happy to work with independent third-party researchers, presumably who are not funded by Facebook. Why did you stop working with Laura Edelson in her research and prevent her accessing the data she was using?
Antigone Davis: Is this the New York University research? In that case, the research was being done by data scraping, which is a violation of our terms of service. It violates our privacy terms. We are working with them to try to figure out a way for them to continue what is actually really important research. This is an example of where it would be very useful for us to work with an outside regulator like Ofcom to help set some of those parameters so that people could feel comfortable about that research going forward.
Darren Jones: Presumably, in being comfortable about how data is shared, you will be committing to collaborating with those researchers on the nuances about how that data works within your business, as opposed to just providing data in a form that you are happy with and not having another conversation with them. Is that right?
Antigone Davis: I would think so, yes.
Darren Jones: I think that is probably the right answer. We have talked today about research on horizon scanning for online harms. Do you undertake research that understands the impact of those harms on individuals?
Antigone Davis: We look to a lot of external research to understand the impact of content on individuals and how they may perceive it. Some of the studies that you saw are actually us asking teens, for example, how their interactions are impacting them.
I think that this is an area where there should be additional and deeper research. We invested, for example, in a digital wellness lab, to set that up so that they could do independent research on these issues. This is exactly the kind of research that that data access could provide. I think there is a deeper understanding that needs to happen.
Darren Jones: Thank you for that. Given the provisions in this Bill, how much more do you think Facebook is going to have to invest in its staff and research in order to comply with the requirements?
Antigone Davis: Until we see more detail, it would be very hard to answer that particular question.
Q214 Darren Jones: This is my last question. We have talked today about the complexity of your organisation and how you have safety teams in each product team. You have global teams and very complicated algorithms. There is no clarity yet as to who is actually going to be responsible for submitting the risk assessment, or how that will be reported to your board. You said that you welcome this regulation. Are you sure that you are comfortable with how Facebook is going to be able to do a proper, in-depth risk assessment in good faith and submit it to the regulator? Are you comfortable with that as it is set out in the Bill?
Antigone Davis: I am. There is one particular piece on the risk assessments that I personally would flag from the safety side. It is my understanding that the Bill has risk assessments for every system change or every product change. I think there is a danger that that will slow innovation, even, for example, in safety and security. I might recommend looking at something that is more periodic, but broadly speaking I think the risk assessments are a valuable tool.
Darren Jones: When you do systems changes or build new products, at the start of that process, do you and your team check in with the product teams and programmers for a safety-by-design check, or do you assess it after the fact?
Antigone Davis: We do it throughout. We are working with them throughout, and in fact there are safety mechanisms that we often test as we develop the product before it launches to test its efficacy.
Darren Jones: That is the type of data I am sure the regulator would be interested to see. It is useful to get that clarification. Those are my questions. Thank you.
Q215 The Chair: Antigone Davis, going back to the work of Laura Edelson and the team at New York University, Laura gave evidence to the committee a few weeks ago. Why are there privacy concerns when people have willingly agreed to take part in this research work and willingly shared information about data relating to why they have been targeted with a particular ad?
Antigone Davis: It is not the willingness issue. It is the scraping of data. As I think you would probably understand, the scraping of data from our platform violates our terms of service and has caused great angst. We would like to make sure that we provide data and access to data in a way that protects the privacy of our users.
The Chair: Indeed. I think we are still waiting to find out who first knew about Cambridge Analytica at Facebook. Maybe one day we will know the details on that. If scraping means that a third party can never analyse ad data about why people are targeted with particular ads, even if the people who were targeted want that research to be done, I think that is problematic. It suggests that really you are just protecting your data and not protecting the privacy of the user.
Antigone Davis: That is probably not an accurate description of scraping.
The Chair: I am not sure that we have learned much there.
Q216 Dean Russell: I would like to explore a little bit about child safety. We spoke earlier about algorithms. We have spoken about the fact that Facebook believes that it does not do harm. We have heard evidence that children are being harmed, not just in the awfulness of suicide content and others but actually in the reaction they have from the addictiveness of Facebook. Does Facebook create content that is designed to be addictive?
Antigone Davis: I would disagree with that assessment. We try to create the best experience. If someone is interested in a particular issue, whether they play a sport or their team is taking on a social campaign around a particular issue, we may try to connect them with other people who are doing that—
Dean Russell: I am sorry to interrupt, but surely that would be quite basic technology. You are talking about very complex algorithms. Are you saying that your algorithms are not optimised to increase people using the platform?
Antigone Davis: In fact, we have taken measures that, at times, reduce the amount of time spent on our platform. When we were building for more meaningful social interactions, we knew that that would reduce the amount of time spent on our platform. It actually reduced the amount of time by about 50 million hours a day. We did that because we had research that would suggest that people’s well-being is better supported by certain kinds of content to have a more positive experience, so we—
Dean Russell: I am sorry to interrupt. I am conscious of time. Have you done research into the impact on things like dopamine levels in children’s brains who are using Facebook extensively?
Antigone Davis: I do not believe that we have done research of that kind. I would suggest that dopamine is a complicated thing to attach to this issue. You get a rise in dopamine when you go running. You get a rise in dopamine when a family member gives you a hug. It is not necessarily the way—
Dean Russell: As an organisation that deals with billions of users, you have not done any research into the psychological or even chemical effect on people using your product?
Antigone Davis: We work with academics externally to look at those kinds of issues. We work with academics to understand how our platform may be affecting somebody’s experience. The surveys that you saw were hearing from teens how it is impacting their experience. We want to provide the best and most positive experience.
For those who may have issues and feel that they do not have control over the time they spend on the platform—I hear from parents all the time, “My child is on it too much”, and adults who feel that they are on their phones too much—what we try to do is take that information and build into our product ways for them to have more control, whether that is understanding the time they spend or providing them with the ability to turn off notifications or take a break. Exactly why we do that research is to provide a better experience where they feel like they have more control.
Dean Russell: Can you categorically say that Facebook has never created an algorithm to try to increase people’s use of Facebook?
Antigone Davis: We certainly want more business. We certainly want people to use our platform. We certainly take into consideration things that they may find more enjoyable. Again, if you are a big football fan and you want to see more football, or be connected to the pages of football teams, we want to be able to bring that to your experience. We try to do that in a way that is positive and not harmful.
Dean Russell: I understand the mantra that it is all about trying to create positive experiences, but we have heard evidence on this committee, and the whistleblower talked about the fact, that hate helps increase activity on the platform, and that the hateful content that does harm has been optimised through the algorithms. Are you saying that never happens, and it is somehow a phantom result, or could it be the fact that Facebook has AI and algorithms that are increasing access to and promoting hateful content to increase usage of the site?
Antigone Davis: What I would say is that that is a wholly simplified version that does not reflect how our algorithms work. Our algorithms are complex. They take into account signals that are designed to remove or demote things that violate our policies—for example, clickbait. Things that might actually be more divisive are reduced in distribution, and things such as misinformation that is marked by a third-party fact-checker are also reduced in our platform. Yes, I think that is a mischaracterisation of how our systems work.
Dean Russell: I appreciate that, but we have had evidence. Of the evidence you have heard and that we have referenced today on this committee, have you read all the evidence and watched the sessions that we had to date on this committee as part of your briefings?
Antigone Davis: I am familiar with the evidence. I have seen the testimony, certainly in the United States. I have looked at the reporting and some of the documents that have been leaked, and there is a missing component to the story. The missing component is the work that we do in response to the different things from the research that we have. It is the work that we do to provide resources on eating disorders. It is the work that we do to give people controls over the time they are spending on the platform. It is the work that we do to provide support for families who are dealing with someone who may have posted suicidal content. All that is missing from the story that has been told.
Dean Russell: What I have heard on this committee today in particular is that, whenever it comes to accountability or responsibility, it is a complex issue, and it is not an individual who takes that responsibility; it is AI’s fault, effectively, if things are not going right. Is it the case, which is what I am wondering as I hear this today, that Facebook has become too big and unwieldy for its own good, and the good of its users, and in fact this Bill is essential to ensure that accountability is held?
Antigone Davis: I just want to take a step back. We take our role here quite, quite seriously. We have invested quite deeply. We care about the safety and security of the people who are using our platform. We know that they will not come back if they are not safe, and if they do not feel safe. We have put in place numerous things to try to provide transparency around these issues, and to try to provide ways to hold us to account and take accountability. We welcome this kind of regulation and a regulator to play a role in that, and to hold us to account, and to look at our systems and processes and see whether we are meeting the bar that is necessary to ensure that the public are well served.
Q217 Dean Russell: I will finish with a couple of questions, if I may. My concern is that, when you have been presented with very strong evidence—evidence given to this committee—about harmful content, about the damage it has done and about issues that have been raised, we are not hearing you saying, “Absolutely, we are going to do more”, and that this regulation is essential to solve it, or that you actually even need the regulation. What we are hearing is that you are doing a great job and you have spent billions of dollars on improving systems. But what we are also hearing is where the massive gaps are and the harm that is doing to young people, in particular. I just wonder where we go from here. Do we need to strengthen this Bill even further to force you to close those loopholes?
Antigone Davis: What you are hearing is a company that takes its role quite seriously. I want to make clear that the documents that you are referring to are our own investigations to do better. We always want to be doing better. Those investigations are designed to identify potential gaps—to allow us to identify them and then build product changes to improve the experience. The research we have done actually points us towards deeper research that needs to be done.
We welcome this Bill. There are parts of the Bill that we are working through with you to make sure that it is workable and effective, such as the risk assessments and how they are done. We think many of these are societal issues and we would like a regulator to play a role.
Dean Russell: Very finally, on accountability, when was the last time that you briefed Mark Zuckerberg on harmful content?
Antigone Davis: I brief our leadership on an ongoing and daily basis. I updated Mark recently in particular on some of these very issues that we are talking about today.
Dean Russell: Thank you.
Q218 Suzanne Webb: Listening to all that I have heard in the last hour or so, I think what you think Facebook is doing is very, very different from what is actually happening. I urge you to listen to and read the reams of evidence and to listen to what your users are telling you, particularly around the amplification of hate, which our Chair mentioned. I am actually quite shocked and deeply concerned about the size of this business and its multilayers. I strongly question its accountability and ability to be accountable.
I add to Darren’s comment about the audit and risk board: it is so important to you as a company that you should have a board in place, and that you are reporting to it. You surely should be having those conversations internally about risk. Importantly for you as a company, you should be having conversations about the financial risk exposure, particularly when we are talking about this online harms Bill that potentially will affect your profit; there is no doubt about that. I would have hoped and thought those conversations would be held.
Would it be possible for you to send us through an organisation chart, so we can actually see how the organisation looks, and the many layers and the reporting lines, and how it actually looks on paper? Then, potentially, we can figure out if it actually works in practice. What I would like to know is how many layers of management there are between you and Mark Zuckerberg.
Antigone Davis: Chris can probably follow up with you on the broader request. I work directly with our leadership team on an ongoing basis. I am the global head of safety at Facebook and I work with all of our leadership quite regularly.
Suzanne Webb: Are you a direct report to Mark?
Antigone Davis: I work across our leadership and I directly report to an individual named Neil Potts.
Suzanne Webb: Does Neil report to Mark?
Antigone Davis: He is part of our leadership team.
Suzanne Webb: Who does Neil report to?
Antigone Davis: Neil reports to Joel Kaplan.
Suzanne Webb: Who does Joel report to?
Antigone Davis: I should—
Suzanne Webb: I could keep asking you these questions, Ms Davis, but you know where I am going with them.
Antigone Davis: It is either Sheryl or Mark. What is important to understand is that, when it comes to safety and security, I am working with our leadership. The VP who heads up our integrity team, the teams that are building the safety and security systems and I are talking on an ongoing regular basis weekly, for sure, if not more.
Suzanne Webb: So you are sending reports to the boss of the boss of the boss of the boss, who then reports to Mark Zuckerberg, ultimately. You can see where we are going with this one. It is so important, all this information and the safety of users, and I think we counted at least eight reporting lines then, even before it gets into the sight of Mark. Is that correct?
Antigone Davis: No, that is not how I would describe it. That is not my experience at all. In fact, as a team, across safety and security, we are talking on an ongoing basis. Whether it is me who is speaking to one individual at a particular time versus Neil, or another person, it is work on an ongoing basis. We take these issues very seriously. We have invested heavily in them and people come to our platform because, by and large, they feel safe to engage with their family and their friends. Do we want to be doing better? Absolutely. Is that because we do not take our responsibility seriously? Absolutely not. We take our role very seriously. We take our accountability very seriously. We care—
Suzanne Webb: I do not doubt any of that, Ms Davis. I am sorry to cut across you, but it is because of time. I do not doubt any of that at all. It would be remiss of a huge corporate company such as yourselves not to feel that way.
My concern is that Facebook is more powerful now than any single news agency or, arguably, any Government for that matter. You have a monopoly on people’s thoughts and beliefs, driven by algorithms and massive profits. We have heard compelling evidence that it is operated for the shareholders’ interests and not in the public interest. Comments have been that there has been unwillingness to sacrifice even a sliver of profit to boost users’ safety. My question to you, Ms Davis, is this: at what point did Facebook lose sight of its moral compass in pursuit of billions of pounds?
Antigone Davis: That is contrary to my experience at Facebook. I would not be here if I did not feel that Facebook takes safety and security seriously. I have dedicated the better part of my life to these issues. I am also a parent who has a child who uses these apps. I think these apps provide tremendous value. Here in the UK, there is a mum who has a group with 20,000 parents who are providing each other with support because their children have mental health issues. There is huge value that our app brings proactively on these issues, and we want to ensure that we are providing the right kinds of systems where people feel safe and secure. It is not my experience at all that you describe.
Suzanne Webb: This is my last question. For me, there was a seismic shift in the pursuit of making Facebook profitable, and that was about 10 years ago when the adverts came in. I get that business model and I understand why you needed to do it. At that point, did users’ online safety receive adequate attention and sufficient focus from the executives? My concern is that it did not have sufficient focus at that time. My deep concern now is that you have shut the stable door after the horse has bolted, or one could say after the algorithm has bolted, and it is now too complex to unpick, particularly due to the size of the company and the other associated tech companies that are part of the organisation.
Secondly, it has been said that there is no will at the top to make sure that these systems are run in an adequately safe way. Damian explored that in his earlier comments. Why is that the case when you consider the risks that content on platforms such as yours pose to people’s mental health and well-being, and the risk to human life? All this should have been spotted at least 10 years ago.
Antigone Davis: We have been working on these issues for a very long period. We have built world-leading AI to deal with this kind of content. More than that, we have not only invested in our own company; we have actually shared our technology with other companies for the broader safety of the internet. We have shared video-matching technology and photo-matching technology so that smaller companies, which may be in the position that we were in 10 years ago, can utilise that technology to ensure the safety of their own platforms.
Suzanne Webb: Do you think the company is so big now and so unwieldy, as Dean mentioned, that you are losing sight? Right at the beginning when I asked the question about layers of management, I would have thought, bearing in mind the absolute seriousness of what we are talking about, where in the UK we are having to consider a safety Bill to protect young children, to protect the vulnerable, and having to put all this in place, that surely a company the size of Facebook should have been on it straightaway with absolute immediacy.
What is even more shocking, Ms Davis, and I am sorry to labour the point, is that you do not seem to have been briefed on the Bill itself and are not able to refer to its contents and parts of it. I would have hoped, coming to such an important session as this, that you would have been. We have the world watching us at the moment. The world is watching to see what we are going to do with this Bill and its implementation. More important than that, we have parents who have lost children due to harmful content. There are people writing in their droves about user safety. We have listened to incredible testimonies. The one that we were going to mention was from the Epilepsy Society, and the need for a Zach’s law because these flashing images are so intense as to cause an epileptic fit. I am deeply, deeply shocked that you are not on top of the brief about what this Bill is all about, and what it means not just to us but to the whole world.
Antigone Davis: Actually, I am familiar with the Bill. Moreover, I think that the UK is in a particular position to be world leading in relation to its Bill. We are pleased to see a Bill that is based on looking at systems and processes, and has a risk assessment system built into it, and a regulator attached to it. There are specifics around the Bill, around publishers and other components of it, that we are working through with you. That is also why my colleague Chris is here, because he is working through those deeply with Members of Parliament and the UK teams in particular.
Suzanne Webb: Surely we do not need a Bill. Surely the companies should just get on and regulate and do the right thing in the first instance. I will leave it there, Chair, thank you.
The Chair: Thank you very much. Jim Knight.
Q219 Lord Knight of Weymouth: My excellent colleagues have covered most things, but I want to delve into advertising. As I understand it, 99% of Facebook’s revenue is generated by advertising, probably about £30 billion a quarter from Europe alone. My understanding is that the Facebook ads algorithm, particularly since it moved advertising into the main newsfeed, now seeks to integrate it into the user experience so that relevance is rewarded by the algorithm. Do you and your team have oversight over harmful content that would be in paid advertising as well as user-generated content?
Antigone Davis: My team works with the teams that build out our ads policies to inform that work and ensure that we do not have ads that promote harmful content. We have technology that we use to try to identify those ads. As in all things, it is never perfect, which is why we also have reviewers and look to outside sources to help us address those things.
Lord Knight of Weymouth: Sorry, are you saying that your team is not the team responsible for the safety of paid advertising?
Antigone Davis: We work with those teams. If you want to have someone who talks very specifically to ads policies, we can certainly provide somebody to provide additional information.
Lord Knight of Weymouth: Do you see a relationship between content that appears in paid advertising and the engagement of users? Clearly, we have a system, as I understand it, where algorithms are deciding how advertising is ranked. We have algorithms deciding what is on our newsfeed as users, on the basis of engagement. Advertisers want high-quality engagement because that gives them more impressions, and much of that advertising is itself placed algorithmically through programmatic advertising. At what point do humans get involved in the oversight of that newsfeed, covering both the advertising and the user-generated content, to make sure that people are not being harmed?
Antigone Davis: We have teams that build out our ads policies and teams that review ads, in addition to the technology. Our safety teams work with those ad policy teams. The ad policy teams also have safety people embedded into their teams to ensure that we are not putting out harmful content.
Lord Knight of Weymouth: Is there any matrix management within Facebook that means that the safety people working in your ads policy teams then have some kind of reporting relationship into you as the global head of safety for Facebook?
Antigone Davis: They work with our team and they report into people who are thinking about safety as well—members of those teams.
Lord Knight of Weymouth: Apologies for my hesitation, but I am trying to get my head around the governance of this. You have heard from various members of the committee our concern around the governance of safety. That is our job: this is the Online Safety Bill. It feels like you have your team looking after and responsible for the safety of users’ content. Suzanne talked about the reporting structure and whether that is direct enough and straight into the CEO, and Darren talked about whether it is significantly taken seriously by the board in terms of its audit function, and now we have, “Ah, yes, but there’s also some other content that users are seeing through paid advertising, and we also do not know in governance terms where that then gets fed into the overall safety of the platform”. Can you help me with that at all?
Antigone Davis: First, we can certainly bring back additional information so that you can understand how this works. We have ads policy teams. Those ads policy teams take in expertise from teams such as my team as well as having experts on safety built into their teams, and they actually work within a broader team that focuses on all the content on our platform. I would be happy to provide you with additional clarity if needed.
Q220 Lord Knight of Weymouth: That would be really helpful. Finally, Facebook, in your evidence and in your written evidence, welcomes this Bill. You welcome that there is a regulator to improve the safety of your platform and the safety of other platforms. Given the relationship and the integrated nature of advertising in the newsfeed of users, do you think that paid-for advertising should be included within the scope of the Bill?
Antigone Davis: The Bill is pretty complex and we would want to make sure that we understand the interplay between other regulation that is in play in the UK to ensure that it is able to accomplish what it sets out to do. I would want to look at that more deeply. Chris, I do not know if you have additional pieces you want to add to that. The Bill is quite complex, so I would want to think that through to make sure that it is effective.
Lord Knight of Weymouth: Chris, have you thought it through?
Chris Yiu: I have. All I would add to what Antigone has said is that, in the UK, as you know, we have established a set of regulatory regimes for advertising. We are aware that the Government are looking at an online advertising programme to ensure that they better understand the role of online platforms in this arena. As Antigone mentioned, we have seen debate from a number of different dimensions about the potential complexity of the Bill. I think it is important that the Bill is brought on to the statute book and that it does the job it is intended to do. We encourage the Government to make sure that it is focused and gets the job done.
Lord Knight of Weymouth: Chris, you may have seen that we took evidence from Martin Lewis, who had to sue Facebook for defamation to get some action in respect of financial scam ads. Does that persuade you at all that this should be in the scope of the Bill so that we can deal with that problem more effectively than people having to sue Facebook?
Chris Yiu: Clearly, fraud and scams are a serious issue. I think we have all experienced the way that has been rising, particularly through the course of the pandemic. It is something that we are determined to address. We have been detecting and taking down fraud on the platform when we see it. We have been acting on the material that has been reported to us. It is an important issue. We have more work to do.
As I said before, in the context of the Bill, I think the important thing is to retain focus. There are a number of areas where we would like to see more detail brought forward, but the important thing is that it is done correctly.
Lord Knight of Weymouth: Thanks. This is my very final question, I promise. Antigone, have you read the Bill?
Antigone Davis: I am familiar with parts of the Bill, but Chris and his team are much more in depth in understanding all of the Bill.
Lord Knight of Weymouth: You are familiar with parts of it but you have not read it and Chris has read it.
Chris Yiu: I have read the Bill. I have read the Explanatory Notes.
Lord Knight of Weymouth: Thank you very much.
Q221 The Chair: Just a few final questions from me before we finish the session. On advertising, Antigone Davis, from a platform policy point of view, if someone posted content on Facebook that might constitute hate speech and would require removal, would Facebook remove an ad that contained the self-same speech if it found that ad? Would the policy be to treat speech in the same way, regardless of whether it is an ad or an organic posting?
Antigone Davis: We actually have some stricter policies in relation to ads, not necessarily less strict policies. In fact, there have been a number of mischaracterisations around the ease of doing ads in certain areas, which are inaccurate. Our policies on ads in many places are stricter.
The Chair: From a platform policy point of view, the issue in the UK is ads that would be considered illegal because they promote things that are in breach of existing consumer protection law, or ads that might seek to promote or glorify hate speech or other harmful content. From what you are saying, you think your ad filters would be better at detecting that than they would organic posting, but from a policy point of view, do you agree that that sort of content should not be on Facebook, regardless of whether it is in an ad or in something someone has posted?
Antigone Davis: I did not speak to whether something is more easily detected in ads versus organic content. I was speaking to the policies that we have in place in relation to ads.
The Chair: Are your policies focused on content—what people are saying and what people are sharing—and do your policies vary depending on whether it is an ad or an organic posting? You said earlier that your standards for advertising are higher, so presumably if an advert was discovered that was in violation of your platform policies, you would remove that ad.
Antigone Davis: If there are ads that get through in violation of our policies, we would remove those ads, yes.
The Chair: In this context, given that one of the jobs of the regulator is to understand what the platform’s policies are, and to ensure that the platform has effective policies in place to execute those policies, you would have no objection to the regulator saying to you that you should apply your platform policies and remove an ad because it contains material that should have been picked up and should not be there.
Antigone Davis: It is my understanding that you have additional or other regulation that is being considered in relation to advertising, so I think it is just a matter of understanding where you would want to place those rules and regulations, and with whom, and it sounds like you are sorting through that.
The Chair: I understand that. I was asking what your policy is now. Would you remove those ads now? If there were ads with content that would be in breach of platform policy if it was posted organically, would you just remove the ads?
Antigone Davis: We have policies, and if it violates our ad policies we will remove that content.
The Chair: Would it be fair to extrapolate from that that, if we recommended you should do that and that the regulator should require the company to do that, you would have no objection because the company does it anyway?
Antigone Davis: We would comply with what regulations are put in place, certainly.
The Chair: What you are saying is that you do that anyway; therefore, we would not be asking you to do anything new. You would get rid of content that violated your platform policies whether it was an ad or whether it was an organic posting.
Antigone Davis: It really depends on the details of the regulation and how they are sorted out in terms of what you are talking about. We remove things that violate our policies, and we will continue to remove things that violate our policies. If stricter standards are put in place in the regulation, we will comply with the regulation.
The Chair: Sure. Just so that I am clear, if there was a piece of organically posted content that was removed for violating your platform policies, if the exact same information was shared in an advert, you would remove that ad as well, or block it.
Antigone Davis: We have policies for our ads and policies for content, but if something violates our policies we will remove it.
The Chair: I just want to work this out so I will ask the question one more time. You said earlier that your ad standards are higher—fair enough. If something was removed as an organic posting because it was a breach of the rules—for hate speech or whatever reason—that would be removed if it appeared as an ad as well, or if someone put money behind that post to boost it and therefore it became an ad.
Antigone Davis: I do not know that there is a one-to-one correlation. What I can say is that, if something violates our policies, whether it is our ads policies for ads, we will remove the ad, or if it is content policy, we will remove the content. In some cases we have stricter policies for ads, but if something violates our policies, we will remove it in relation to the policy.
Q222 The Chair: I hope the Online Safety Bill can add some clarity to that point.
There are one or two final things I want to ask you about. You have mentioned a lot during the course of the afternoon—our time—about Facebook’s use of AI to remove hate speech. Obviously, you will be aware that it was reported in the Wall Street Journal that some Facebook engineers had said that the AI removes only 3% to 5% of hate speech that people have seen on Facebook. Is that correct?
Antigone Davis: I am aware of the article. I am also aware that we have put out a transparency report that indicates that the prevalence of hate speech on our platform has been reduced to 0.05%. We have also submitted the methodology that we used for that transparency report to an independent audit to verify the methodology that we are using.
The Chair: Sorry, is it correct that the AI only finds 3% to 5% of the hate speech that is on Facebook?
Antigone Davis: I am not sure what that particular engineer was speaking to, but what I can tell you is that our transparency report indicates the prevalence of hate speech on our platform and our efforts to thwart it, and we have submitted that methodology to an independent audit.
The Chair: The figures you are quoting are about how much you think there is as part of the total universe of all the content on Facebook. This question is based on something slightly different. You have said all afternoon that you have AI to find hate speech and other harmful content. How much of it does it find? Of the hate speech that is on there, how much does Facebook’s AI find?
Antigone Davis: The numbers that we have are that we have reduced the prevalence on our platform to 0.05%.
The Chair: That is not the question I asked. Of the hate speech that is on Facebook, how much does the AI find? What percentage does it find?
Antigone Davis: I do not have the particular number that I think you are asking, but I can try to follow up on that. I am not sure I entirely understand, but I can follow up to try to get you the information that you are asking for.
The Chair: The question is a really simple one. You have been saying all afternoon that the AI finds hate speech. I am asking you how much of the hate speech that you think is on Facebook the AI finds. It was reported that it was only 3% to 5%. You do not have another figure, it would seem. Are you denying that is accurate? I just want to be clear on your position.
Antigone Davis: I want to be sure that whatever information I provide to you is accurate, so I do not want to surmise on a number that I do not have, but Chris and I will be sure to follow up. What I can tell you is that we have AI that we are using that has reduced the prevalence on our platform of hate speech to 0.05%. Why we have our transparency and accountability reports is to demonstrate our progress and to be held to account in this area. Our AI is not perfect and we are always trying to make it better. That is part of why we have developed these systems for accountability.
The Chair: I think these are exactly the sorts of questions the regulator would want to ask you, because if the AI is reducing prevalence but it is only reducing prevalence at 3% to 5% a year, say, it is not doing it very fast, and it is not very effective, and it would be a fair challenge to Facebook to ask whether you can rely on AI to remove the prevalence of hate speech. That is really what this is about. With respect, I think many people would have thought this is the sort of figure you would have at your fingertips. It is probably the sort of figure you would want to be briefed on every week by your team so that you understand what needs to be done to make the AI more effective.
Antigone Davis: I actually think that a systems-based regulatory process where we can bring this kind of information to you is welcomed. We have numbers that we are sharing publicly and, hopefully, are providing the answers that are needed. This is exactly the kind of detail that we hope will be worked out within the regulation.
The Chair: Perhaps the regulator ought to give the answer to you. There is one other thing I want to ask you about. You mentioned several times with regards to 6 January and the American elections that Facebook had, in the exact phrase you used, not recommended civic groups. Is that correct?
Antigone Davis: We have put in place a policy to no longer recommend civic groups, yes.
The Chair: When was that policy put in place?
Antigone Davis: Ahead of 6 January, and it is still in place now.
The Chair: I will be more specific. Was it put in place before the election—before election day?
Antigone Davis: I will get you the exact date. I do not have the exact date, but it was certainly ahead of 6 January. Numerous things were put in place ahead of the elections, broadly, and some things were removed after the elections. This is what we refer to as break-glass measures: tools that typically would be used in a way that would not pose a particular safety risk, like profile frames.
The Chair: What was removed after polling day, and did it remain removed through to 6 January as well?
Antigone Davis: The civic groups thing is something that we put in place ahead of time. We have actually kept it in place since. There are certain measures that are what we would refer to as break-glass measures. Probably the easiest and best example to explain here is profile frames. It is a frame that you put around your profile that would indicate your support of a particular cause or issue. There are plenty of places where that is used in a really positive way, and we would like to be able to preserve people’s ability to use it in a positive way. In the lead-up to 6 January, we had concerns about it being used in a way that would create further unrest, so we wanted to make sure that we did not offer it during that time period.
The Chair: Thank you very much, and thank you, Chris, for your answers. You have offered on a number of occasions this afternoon to get back to the committee in response to some specific points that we have asked. We will try to provide a summary note of the questions that you offered to answer. Would it be possible to agree now a timeframe for response? Obviously, we have a tight deadline to work to with this report. Would it be possible to get a response within seven days of receipt of that note?
Antigone Davis: Certainly I will do it with all due speed. Depending on what it is, it may take a little more time, but I can commit to trying to do it with all due speed, 100%.
The Chair: We would be grateful if, within seven days, you could either respond with any answers that are immediately available or respond with a timeframe by which you may be able to get us the other information.
Antigone Davis: Sure, we can do that. Definitely.
The Chair: Thank you very much. That concludes our questions to Facebook this afternoon. Thank you.