 

Draft Online Safety Bill

Corrected oral evidence: Consideration of the government's draft Online Safety Bill

Monday 25 October 2021

2.30 pm

 

Watch the meeting: https://parliamentlive.tv/event/index/cddf75b6-4279-43db-9ac9-9146ef5fe03f

Members present: Damian Collins MP (The Chair); Debbie Abrahams MP; Lord Black of Brentwood; Lord Clement-Jones; Lord Gilbert of Panteg; Baroness Kidron; Darren Jones MP; Lord Knight of Weymouth; John Nicolson MP; Lord Stevenson of Balmacara; Suzanne Webb MP.

Evidence Session No. 11              Heard in Public              Questions 154 - 192

 

Witness

I: Frances Haugen, former Facebook employee.

 

 

USE OF THE TRANSCRIPT

  1. This is an uncorrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
  2. Any public use of, or reference to, the contents should make clear that neither Members nor witnesses have had the opportunity to correct the record. If in doubt as to the propriety of using the transcript, please contact the Clerk of the Committee.
  3. Members and witnesses are asked to send corrections to the Clerk of the Committee within 14 days of receipt.


Examination of witness

Frances Haugen.

Q154       The Chair: Good afternoon. Welcome to this further evidence session of the Joint Committee on the draft Online Safety Bill. Today we are pleased to welcome Frances Haugen to give evidence to the committee. Frances, we are delighted that you have been able to make the trip to London and give evidence to us in person. We also respect the personal decision you have taken to speak out on these matters, with all the risks incumbent in speaking out against a multibillion-dollar corporation.

I would like to ask first about some of the statistics Facebook uses itself to describe its performance. In its transparency reporting it says that it removes 97% of hate speech that it finds on the platform, but the documents that you have published suggest that its AI finds only 2% to 3% of the hate speech that is there. Does that mean that Facebook's transparency reports, without any context to those numbers, are essentially meaningless?

Frances Haugen: There is a pattern of behaviour on Facebook, which is that it is very good at dancing with data. If you read that transparency report, the fraction that it is presenting is not total hate speech caught divided by total hate speech that exists, which is what you might expect given what it said. The fraction it is actually presenting is the content its robots caught, divided by the content its robots caught plus what humans reported and it took down. It is true that about 97% of what Facebook takes down happens because of its robots, but that is not the question we want answered. The question we want answered is: did you take the hate speech down? The number I have seen is 3% to 5%, but I would not be surprised if there was some variation within the documents.

The Chair: It is a really important point, because essentially what we are looking at is the creation of an independent regulator for the tech sector, and not only do we need to know answers but we need to know what the right questions are, because the official statistics are so misleading.

Frances Haugen: Part of why I came forward is that I have a specific set of expertise. I have worked at four social networks. I am an algorithmic specialist. I worked on search quality at Google and I ran ranking for the home feed for Pinterest. I have an understanding of how AI can unintentionally behave. Facebook never set out to prioritise polarising and divisive content. It just happened to be a side-effect of choices it made.

Part of why I came forward is that I am extremely worried about the condition of our societies, the condition of the global south and the interaction of the choices Facebook has made, and how that plays out more broadly. I am worried specifically about engagement-based ranking. Facebook has said before (Mark Zuckerberg put out a White Paper in 2018) that engagement-based ranking is dangerous unless the AI can take out the bad things. As you saw, however, it is getting 3% to 5% of things like hate speech and 0.8% of violence-inciting content. Engagement-based ranking prioritises that kind of extreme content.

I am deeply concerned about its underinvestment in non-English languages and how it misleads the public into thinking that it is supporting them. Facebook says things like, “We support 50 languages”, when, in reality, most of those languages get a tiny fraction of the safety systems that English gets. Also, and I do not think this is widely known, UK English is sufficiently different that I would be unsurprised if the safety systems that it developed primarily for American English were underenforcing in the UK. Facebook should have to disclose dialectical differences.

I am deeply concerned about the false choices that Facebook presents. It routinely tries to reduce the discussion to things like, “You can either have transparency or privacy. Which do you want to have?”, or, “If you want safety, you have to have censorship”, when in reality it has lots of non-content based choices that would sliver off a half percentage point of growth, or a percentage point of growth. Facebook is unwilling to give up those slivers for our safety.

I came forward now, because now is the most critical time to act. When we see an oil spill, that oil spill does not make it harder for society to regulate oil companies. Right now, the failures of Facebook are making it harder for us to regulate Facebook.

The Chair: On those failures, looking at the way the platform is moderated today, unless there is change, do you think it is more likely that we will see events like the insurrection in Washington on 6 January this year, and more violent acts that have been driven by Facebook systems? Is it more likely that we will see more of those events as things stand today?

Frances Haugen: I have no doubt that the events we are seeing around the world, things like Myanmar and Ethiopia, are the opening chapters, because engagement-based ranking does two things. One, it prioritises and amplifies divisive, polarising and extreme content and, two, it concentrates it. If Facebook comes back and says, “Only a tiny sliver of content on our platform is hate, only a tiny sliver is violence”, first, it cannot detect it very well so I do not know if I trust those numbers, but, secondly, if it is hyper-concentrated in 5% of the population, and you only need 3% of the population on the streets to have a revolution, that is dangerous.

Q155       The Chair: I want to ask a bit about hyper-concentration, because it is an area that you worked on in particular: Facebook groups. I remember being told several years ago by a Facebook executive that the only way you could drive content through the platform was advertising. I think we see that that is not true, and that groups are increasingly used to shape that experience. We talk a lot about the impact of algorithmic-based recommendation tools like News Feed. To what extent are groups shaping the experience for many people on Facebook?

Frances Haugen: Groups play a huge and critical role in driving the experience on Facebook. When I worked on civic misinformation (this is based on recollection; I do not have a document) I believe something like 60% of the content in News Feed was from groups. It is important for this group to know that Facebook has been trying to extend the length of sessions, and get you to consume longer sessions and more content.

The only way it can do that is by multiplying the content that already exists on the platform. The way it does that is with groups and reshares. If I put one post into a half-million person group, it can go out to half a million people, and when combined with engagement-based ranking that group might produce 500 or 1,000 pieces of content a day but only three get delivered. If your algorithm is biased towards extreme, polarising and divisive content, it is like viral variants: those giant groups are producing lots and lots of pieces of content and only the ones most likely to spread are the ones that go out.

The Chair: I think it was reported last year by the Wall Street Journal that 60% of people who joined Facebook groups that shared and promoted extremist content did so at Facebook's active recommendation. Clearly, Facebook is researching this. What action is Facebook taking about groups that share extremist content?

Frances Haugen: I do not know the exact actions that have been taken in the last six months to a year. The active recommendation and promotion of extremist groups to users is an area where Facebook should not be able to just say, “This is a hard problem. We are working on it”. It should have to articulate, “Here is our five-point plan and here is the data that will allow you to hold us accountable”, because Facebook acting in a non-transparent, unaccountable way will just lead to more tragedies.

The Chair: Do you think that five-point plan exists?

Frances Haugen: I do not know if it has a five-point plan.

The Chair: Or any plan.

Frances Haugen: I do not know. I did not work on that.

The Chair: To what extent should a UK regulator ask those questions about Facebook groups? From what you are saying they are a significant driver of engagement, and if engagement is part of the problem in the way Facebook designed it, groups must be a big part of that too.

Frances Haugen: Part of what is dangerous about groups is this. We talk sometimes about whether there is an individual problem or a societal problem. One of the things that happens in aggregate is that the algorithms take people who have very mainstream interests and push them towards extreme interests. You can be centre left and you will end up pushed to radical left. You can be centre right and pushed to radical right. You can be looking for healthy recipes and you will get pushed to anorexia content. There are examples in Facebook’s research of all that.

One of the things that happens with groups and with networks of groups is that people see echo chambers that create social norms. If I am in a group that has lots of Covid misinformation, and I see over and over again that people who encourage vaccination get completely pounced upon and torn apart, I learn that certain ideas are acceptable and unacceptable. When that context is about hate, you see a normalisation of hate, a normalisation of dehumanising others, and that is what leads to violent incidents.

The Chair: Many people would say that it should be much easier for the platform to moderate groups, particularly large groups (some of these groups have hundreds of thousands or millions of members in them), because people are gathering in a common place.

Frances Haugen: I strongly recommend that above a certain size of group it should be required to provide its own moderators and moderate every post. This would naturally, in a content-agnostic way, regulate the impact of large groups. If the group is valuable enough, it will have no trouble recruiting volunteers. A group could be just an amplification point (foreign information operations are using groups like that in virality hacking, the practice of borrowing viral content from other places to build a group), and if you were to launch an advertising campaign with misinformation in it, we at least have a credit card to track you back.

If you want to start a group and invite 1,000 people every day (I think the limit is 2,200 people you can invite every day), you can build out that group. Your content will land in their news feed for a month, and if they engage with any of it, it will be considered a follow. Things like that make them very dangerous and they drive outsized impact on the platform.

The Chair: From what you say, if a bad actor or agency wanted to influence what a group of people on Facebook see, they would probably set up Facebook groups more than they would Facebook pages and run advertising.

Frances Haugen: That is definitely a strategy that is currently used by information operations. Another one that is used, which I think is quite dangerous, is that you can create a new account and within five minutes post into a million-person group. There is no accountability and there is no trace. You can find a group to target any interest you want. It is very fine-grained. Even if you removed microtargeting from ads, people would microtarget via groups.

Q156       The Chair: What do you think the company's strategy is for dealing with this? Changes were made to Facebook groups in 2017 and 2018 to create more of a community experience, I think Mark Zuckerberg said. It was good for engagement, but it would seem similar to changes to the way News Feed works in terms of the content it prefers and favours. These are reforms which the company has put in place that have been good for engagement but terrible for harm.

Frances Haugen: We need to move away from binary choices. There is a huge continuum of options. Coming in and saying, “Hey, groups that are under 1,000 people are wonderful. They create community, they create solidarity, they help people connect”, is one thing, but when you get above a certain size, perhaps 10,000 people, you need to start moderating that group, because that alone naturally rate-limits it. The thing we need to think about is where we add selective friction to the systems so that they are safe in every language and you do not need the AIs to find the bad content.

The Chair: In your experience, is Facebook testing its systems all the time? Does Facebook experiment with the way its systems work around how it can increase engagement? We know that it experimented with the content on News Feed around the election time about the sort of news that should be favoured. How does Facebook work in experimenting with its tools?

Frances Haugen: Facebook is almost continuously running many experiments in parallel on little slices of the data that it has. I am a strong proponent that Facebook should have to publish a feed of all the experiments it is running. It does not need to tell us what each experiment is, just an idea of it. Even just seeing the results data would allow us to establish patterns of behaviour. The real thing we are seeing is Facebook accepting tiny additions of harm; it weighs up how much harm is worth how much growth for it. Right now, we cannot benchmark and say, “You are running all these experiments. Are you acting in the public good?” If we had that data, we could see patterns of behaviour and see whether or not trends are occurring.

The Chair: You worked in the civic integrity team at Facebook.

Frances Haugen: Yes.

The Chair: If you saw something that was concerning you, who would you report to?

Frances Haugen: This is a huge weak spot. If I drove a bus in the United States, there would be a phone number in my break room that I could call. It would say, “Did you see something that endangered public safety? Call this number”. Someone will take you seriously and listen to you in the department for transportation. When I worked on counterespionage, I saw things where I was concerned about national security, and I had no idea how to escalate them because I did not have faith in my chain of command at that point. It had dissolved civic integrity. I did not see that it would take that seriously, and we were told just to accept under-resourcing.

The Chair: In theory, you would report to your line manager. Would it be up to them whether they chose to escalate that?

Frances Haugen: I flagged repeatedly when I worked on civic integrity that I felt that critical teams were understaffed, and I was told, “At Facebook we accomplish unimaginable things with far fewer resources than anyone would think possible”. There is a culture that lionises a start-up ethic that is, in my opinion, irresponsible: the idea that the person who can figure out how to move the metric by cutting the most corners is good. The reality is that it does not matter if Facebook is spending $14 billion on safety a year. The real question is whether it should be spending $25 billion, or even $35 billion. Right now there are no incentives internally. If you make a noise saying, “We need more help”, people will not rally round with help because everyone is under water.

The Chair: I think that sort of culture exists in many organisations that ultimately fail: a culture where there is no external audit and people inside the organisation do not share problems with the people at the top. What do you think people like Mark Zuckerberg know about these things?

Frances Haugen: It is important that all facts are viewed through a lens of interpretation. There is a pattern across a lot of the people who run the company, the senior leaders, where this may be the only job they have ever had. Mark came in when he was 19 and he is still the CEO. For a lot of other VPs or directors, it is the only job they have ever had. The people who have been promoted were the people who could focus on the goals they were given, not necessarily the ones who asked questions about public safety. There is a real thing where people are exposed to data and they say, “Look at all the good we’re doing”. Yes, that is true. “We didn’t invent hate. We didn’t invent ethnic violence.” But that is not the question. The question is: what is Facebook doing to amplify or expand hate; what is it doing to amplify or expand ethnic violence?

The Chair: Facebook did not invent hate, but do you think it is making hate worse?

Frances Haugen: Unquestionably, it is making hate worse.

The Chair: Thank you. Joining us remotely, Jim Knight.

Lord Knight of Weymouth: Thank you, Frances, for coming to talk to us. First, in some of that last fascinating discussion you were having, you talked about calling out for help and not necessarily getting the resource. Would the same be true if you were working in PR or legal within Facebook?

Frances Haugen: I have never worked in PR or communications, so I am not sure. I was shocked to hear recently that Facebook wants to double down on the metaverse and is going to hire 10,000 engineers in Europe to work on that. I was like, “Wow, do you know what we could have done with safety if we had had 10,000 more engineers?” It would have been amazing.

There is a view inside the company that safety is a cost centre, not a growth centre, which I think is very short-term thinking. Facebook's own research has shown that when people have worse integrity experiences on a site they are less likely to remain. Regulation could actually be good for Facebook's long-term success, because it would force Facebook back to a place where it was more pleasant to be on Facebook, and that could be good for the long-term growth of the company.

Q157       Lord Knight of Weymouth: Thank you. We then go back to the discussion about Facebook groups, by which we are essentially talking about private groups, clearly. If you were asked to be the regulator of a platform like Facebook, how would you get transparency about what is going on in private groups, given that they are private?

Frances Haugen: There is a real bar. We need to have a conversation as a society around this: after a certain number of people have seen something, is it truly private? Is that number 10,000? Is it 25,000? Is it really private at that point? There is an argument that Facebook will make, which is that there might be a sensitive group that someone might post into, and we would not want to share that even if 25,000 people saw it. I think that is more dangerous. If people are lulled into a sense of safety that no one is going to see their hate speech, or no one is going to see a more sensitive thing (maybe they have not come out yet), that is dangerous, because those spaces are not safe. When 100,000 people see something, you do not know who saw it and what they might do.

Both Google and Twitter are radically more transparent than Facebook. Every day, people download the search results on Google and analyse them and people publish papers. Because Google knows that happens, it staffs software engineers who work on search quality to write blog posts. Twitter knows that 10% of all the public tweets end up going out on its firehose, and people analyse those and do things like finding information operation networks. Because Twitter knows someone is watching, it behaves better.

In the case of Facebook, even with private groups there should be some bar above which we say, “Enough people have seen it that it’s not private”. We should have a firehose just like Twitter, because if we want to catch national security threats like information operations, we do not just need the people on Facebook looking at it; we need 10,000 researchers looking at it. In addition, we should have accountability on things like algorithmic bias and understanding whether or not our children are safe.

Q158       Lord Knight of Weymouth: That is really helpful. On algorithmic bias, Twitter published a report on Friday suggesting that there was a political bias in its algorithm. Is that unique to Twitter or would that also be the case with Facebook? Is it implicit in the way these platforms and their algorithms are designed to optimise clicks? Therefore, is there something endemic in all the social media companies that makes certain types of political content more extreme?

Frances Haugen: I am not aware of any research that demonstrates a political bias on Facebook. I am familiar with lots of research that talks about the way engagement-based ranking was designed. Facebook calls it meaningful social interactions, although meaningful could have been hate speech or bullying, and until November 2020 it would still be considered meaningful. Let us call it social interaction ranking. I have seen lots of research that says that kind of engagement-based ranking prioritises polarising, extreme divisive content. It does not matter if you are on the left or on the right, it pushes you to the extremes, and it fans hate. Anger and hate is the easiest way to grow on Facebook.

There is something called virality hacking where you figure out all the tricks and how to optimise Facebook. Good actors and good publishers are already publishing all the content they can, but bad actors have an incentive to play the algorithm, and they figure out all the ways to optimise Facebook. The current system is biased towards bad actors and biased towards those who push people to the extremes.

Lord Knight of Weymouth: Currently, we have a draft Bill that is focusing on individual harm rather than societal harm. Given the work that you have done on democracy as part of your work at Facebook, do you think it is a mistake to omit societal harm?

Frances Haugen: I think it is a grave danger to democracy and societies around the world to omit societal harm. A core part of why I came forward was that I looked at the consequences of the choices Facebook was making and at things like the global south. I believe situations like Ethiopia are just part of the opening chapters of a novel that will be horrific to read. We have to care about societal harm, not just for the global south but for our own societies. As I said before, when an oil spill happens, it does not make it harder for us to regulate oil companies, but right now Facebook is closing the door on us being able to act. We have a slight window of time to regain people's control over AI. We have to take advantage of this moment.

Q159       Lord Knight of Weymouth: Thank you. My final question is this: Undoubtedly, just because you are a digital company, you will have looked at user journeys and analysed in a lot of detail the data about how different user journeys work. Is there any relationship between paid-for advertising and some of these dangerous private groups possibly being moved into encrypted messaging? Are there user journeys like that that we should also be concerned about, particularly given that paid-for advertising is currently excluded from the Bill?

Frances Haugen: I am extremely concerned about paid-for advertising being excluded, because engagement-based ranking impacts ads as much as it impacts organic content. I will give you an example. Ads are priced partially based on the likelihood that people like them, reshare them, do other things to interact with them and click through on a link. An ad that gets more engagement is a cheaper ad.

We have seen over and over again in Facebook's research that it is easier to provoke people to anger than to empathy and compassion, so we are literally subsidising hate on these platforms. It is substantially cheaper to run an angry, hateful, divisive ad than it is to run a compassionate, empathetic ad. I think there is a need for things like disclosure of the rates people are paying for ads, having full transparency on the ad stream and understanding what biases come into how ads are targeted.

On user journeys from ads to extreme groups, I do not have documents regarding that, but I can imagine it happening.

Q160       Baroness Kidron: Thank you very much, Frances, for being here and taking a personal risk to be here. We are grateful. I want to ask a number of questions that speak to the fact that this system is entirely engineered for a particular outcome. Perhaps you could start by telling us what Facebook is optimised for.

Frances Haugen: I think the thing that is not necessarily obvious to us as consumers of Facebook is that Facebook is a two-sided marketplace. It is about producers, in addition to being about consumers. You cannot consume content on Facebook without getting somebody to produce it. When Facebook switched over to engagement-based ranking, it said, “The reason we’re doing this is we believe it’s important for people to interact with each other. We don’t want people to mindlessly scroll”.

But a large part of what was disclosed in the documents was that a large factor that motivated the change was that people were producing less content. Facebook has run things called producer-side experiments where it artificially gives people more distribution to see what the impact is on your future behaviour of getting more likes and more reshares, because it knows that if you get those little hits of dopamine, you are more likely to produce more content.

Right now, Facebook has said repeatedly, “It’s not in our business interests to optimise for hate”, or, “It’s not in our business interests to give people bad experiences”, but it is in Facebook's interests to make sure that the production content wheel keeps turning, because you will not look at ads if your feed does not keep you on the site. Facebook has accepted the costs of engagement-based ranking because it allows that wheel to keep turning.

Q161       Baroness Kidron: That leads beautifully to my next question. I was struck not so much by the harms, because in a funny way they just gave evidence to what a lot of people have been saying for a long time and what a lot of people have been experiencing. What was super interesting was that again and again the documents showed that Facebook employees were saying, “Oh, you could do this. You could do that”. I think a lot of people do not understand what you could do.

I would really love you to unpack a little for the committee what Facebook employees were saying that we could do about body image issues on Instagram. What were they saying about ethnic violence? What were they saying about the democratic harms that you were just referring to?

Frances Haugen: I have been mischaracterised repeatedly in certain parts of the internet that I am here as a plant to get more censorship. One of the things I saw over and over again in the docs is that there are lots and lots of solutions that do not involve picking good and bad ideas. They are about designing the platform for safety and slowing the platform down. When you focus, when you give people more content from their family and friends, you get less hateful divisive content for free. You get less misinformation. The biggest part that is driving misinformation is the hyper-distribution nodes, the groups where it goes out to 500,000 people.

I will give you an example of non-content-based interventions. Let us imagine that Alice posts something, Bob reshares it, Carol reshares it, and it lands in Dan's news feed. If Dan had to copy and paste it to continue to share it, because the share button was greyed out after a two-hop reshare chain, that alone would have the same impact as the entire third-party fact-checking system, only it would work in the global south. It does not require us to have a language-by-language system. It just slows the platform down. Moving to systems that are human scale, instead of having AI tell us where to focus, is the safest way to design social media.

I want to remind people that we liked social media before we had an algorithmic feed. Facebook said, “If you move to a chronological feed, you won’t like it”. It is true that, with groups of 500,000 people that are just spraying content at people, you are not going to like it. Facebook has choices that it could make in different ways. It could have groups that were designed like Discord servers, where it is all chronological and people break out into different rooms as it gets too crowded. That is a human intervention and a human-scale solution, not an AI-driven solution.

Slowing the platform down, content-agnostic strategies and human-scale solutions are the direction we need to go.

Baroness Kidron: Why does it not do it?

Frances Haugen: In the case of reshares, there are some countries in the world where 35% of all the content in the news feed is a reshare. Facebook does not crack down on reshares, or put friction on them at least, because it does not want to lose that growth. It does not want 1% shorter sessions, because that is also 1% less revenue. Facebook has been unwilling to accept even little slivers of profit being sacrificed for safety, and that is not acceptable.

Baroness Kidron: I want to ask you in particular what a break-glass measure is, if you would tell us.

Frances Haugen: Facebook's current safety strategy is that it knows that engagement-based ranking is dangerous but the AI is going to pick out the bad things. But sometimes the heat in a country gets hotter and hotter. It might be a place like Myanmar, which had no misinformation classifiers, like labelling systems, or hate speech labelling classifier systems, because the language was not spoken by enough people.

They allow the temperature in these countries to get hotter and hotter and hotter, and when the pot starts boiling over it goes, “Oh no, we need to break the glass. We need to slow the platform down”. Facebook has a strategy that only when a crisis has begun does it slow the platform down, instead of watching as the temperature gets hotter and making the platform safer as that happens. That is what break-glass measures are.

Baroness Kidron: I guess I am asking these questions because, if you could slow it down, make the groups smaller and have break glass as a norm rather than in an emergency, these are all safety by design strategies. They are all just saying, “Make your product fit for purpose”. Could those be mandatory in the Bill?

Frances Haugen: Facebook right now has characterised the reason why it turned off the break-glass measures after the US 2020 election as being that it does not believe in censorship. These measures largely have nothing to do with content. They were questions about how much you amplify live video. Do you go to a 600x multiplier or a 60x multiplier? They are little questions where Facebook optimised its settings for growth over safety.

There is a real thing where we need to think about safety by design first. Facebook has to demonstrate that it has assessed the risks and it must be mandated to assess the risks, and we need to specify how good that risk assessment is, because Facebook will give you a bad one if it can. We need to mandate that Facebook has to articulate solutions that it is not articulating with a five-point plan to solve these things.

Q162       Baroness Kidron: I also want to raise the issue of whitelisting. A lot of the Bill talks about terms and conditions being very clear, and then upholding terms and conditions and having a regulatory relationship to upholding them. But what about whitelisting, where some people are exempt from terms and conditions? Can you give us your view on that?

Frances Haugen: For those who are not familiar with the reporting by the Wall Street Journal, there is a programme called XCheck. This is a system where about 5 million people around the world (maybe 5.7 million) were given special privileges that allowed them to skip the line, if you will, for safety systems. The majority of safety systems inside Facebook did not have enough staffing to manually review. Facebook claimed that it was just about a second check, about making sure that the rules were applied correctly, and because Facebook was unwilling to invest in enough people to do that second check, they just let people through.

There is a real thing about having more avenues to understand what is going on inside the company. Imagine if Facebook was required, for example, to publish its research on a one-year lag. If it has tens of billions of dollars of profit, it can afford to solve problems on a one-year lag. We should be able to know that systems like this exist. No one knew how bad the system was because Facebook lied to its own oversight board about it.

Q163       Baroness Kidron: The last area I want to think about is the fact that all the documents you bring come from Facebook, but we cannot really regulate for this company in this moment. We have to look at the sector as a whole and we have to look to the future. Do you have any advice on that? We are not trying to kill Facebook; we are trying to make the digital world better and safer for its users.

Frances Haugen: Engagement-based ranking is a problem across all sites. It is easier to provoke humans to anger. Engagement-based ranking figures out our vulnerabilities and panders to those things. Having mandatory risk assessments and mandatory remediation strategies (ways to hold these companies accountable) is critical. Companies will evolve and they will figure out how to side-step things, and we need to make sure that we have a process that is flexible and can evolve with the companies over time.

Q164       Baroness Kidron: Finally, do you think that the scope of the Bill, in covering user-to-user services and search, is a wise move, or should we be looking for some systemic solutions for the sector more broadly?

Frances Haugen: User to user and search? It is a great question. I think that for any platform that has a reach of more than a couple of million people, the public have a right to understand how that is impacting society. We are entering an age when technology is accelerating faster and faster. Democratic processes take time, if they are done well. We need to be able to think about how we will know when the next danger is looming. For example, in my case, because Facebook is a public company, I could file with the SEC under whistleblower protections. If I worked at TikTok, which is growing very fast but is a private company, I would not have had any avenue to be a whistleblower.

There is a real thing that, for any tech company with a large societal impact, we need to be thinking about how we get data out of that company. For example, you cannot take a college class today to understand the integrity systems inside Facebook. The only people who understand them are people inside Facebook. Thinking systematically, for large tech companies, about how we get the information we need to make decisions is vital.

Baroness Kidron: Thank you so much.

The Chair: You mentioned the oversight board, and I know you are going to meet the oversight board. They themselves do not have access to the sort of information you have been publishing or the information you have been discussing. Do you think the oversight board should insist on that transparency, or disband itself?

Frances Haugen: I always reject binary choices. I am not an A or B person; I love C and D. There is a great opportunity for the oversight board to experiment with its bounds. This is a defining moment for the oversight board: what relationship does it want to have with Facebook? I hope the oversight board takes this moment to step up and demand a relationship that has more transparency, because it should ask the question: why was Facebook able to lie to it in this way, and what enabled that? If Facebook can come in and actively mislead the oversight board, which is what it did, I do not know what the purpose of the oversight board is.

The Chair: It is more of a hindsight board than an oversight board, is it not?

Q165       Lord Clement-Jones: Frances, hello. You have been very eloquent about the impact of the algorithm. You have talked about ranking, pushing extreme content and the amplification of that sort of content. I think you used the phrase “addiction driver”.

This question follows on from talking about the oversight board, or a regulator here, or indeed trying to construct a safety by design regime. What do we need to know about the algorithm, and how do we get that, basically? Should it be about the output of an algorithm, or should we be inspecting the entrails of the code? When we talk about transparency, it is very easy just to say that we need to be much more transparent about the operation of these algorithms, but what does that really mean?

Frances Haugen: It is always important to think about Facebook as a concert of algorithms. There are many different algorithmic systems and they work in different ways. Some are amplification systems, some are down-regulation systems. Understanding how all those parts work and how they work together is important.

I will give you an example. Facebook has said that engagement-based ranking is dangerous unless you have AI that can pick out the extreme content. Facebook has never published which languages are supported and which integrity systems are supported in those languages. Because of that, it is actively misleading the speakers of most large languages in the world by saying, “We support 50 languages”, when most of those countries have a fraction of the safety systems that English has. When we ask how the algorithm works, we need to be thinking about what the experience of the algorithm is for lots of individual populations. The experience of Facebook’s News Feed algorithm in a place that does not have integrity systems is very different from, say, the experience in Menlo Park.

Some of the things that need to happen are ways of doing privacy-sensitive disclosures of what we call segmentation. Imagine that you divided the United States into 600 communities based on what pages and groups people interact with and their interests. You do not need to say, “This group is 35 to 40 year-old white women who live in the south”. You can have a number on that cluster, but understand that some groups are disproportionately getting Covid misinfo. Right now, 4% of those segments are getting 80% of all the misinfo. We did not know that until my disclosure.

For hate speech and violence incitement it is the same way. When we say, “Do we understand the algorithm?”, we should really be asking whether we understand the experiences of the algorithm. If Facebook gives you aggregate data, it will likely hide how dangerous the systems are, because the experience of the 95th percentile for every single integrity harm is radically different, and the 99th percentile is even more radically different, from the median experience.

I want to be really clear. The people who commit acts of violence are people who get hyper-exposed to dangerous content. We need to be able to break out by those extreme experiences.

Lord Clement-Jones: That is really interesting. Do you think that is practical for Facebook to produce? Would it need to have further research, or does it have ready access to that kind of information?

Frances Haugen: You could produce that information today. The segmentation systems exist. That was one of the projects that I founded when I was at Facebook. That segmentation has been used since for different problem areas, such as Covid misinformation, and it already produces many integrity statistics.

It is extremely important that Facebook should have to publish which integrity systems exist and in which languages. Right now, let us imagine that we are looking at self-harm content for teenagers. Let us imagine that we came in and said we want to understand how self-harm is concentrated across those segments. Facebook's most recent position, according to a governmental source we talked to, was that it said, “We don’t track self-harm content. We don’t know who is overexposed”. If it was forced to publish what integrity systems exist, we could say, “Wait, why don’t you have a self-harm classifier? You need to have one so we can answer this question of whether the self-harm content is focused on 5% of the population”. We can answer that question if we have the data.

Lord Clement-Jones: And we should wrap that into a risk assessment that we require to be delivered to us, basically?

Frances Haugen: Yes. If I were writing standards on risk assessments, a mandatory provision I would put in is that you need to do segmented analysis. The median experience on Facebook is a pretty good experience. The real danger is that 20% of the population has a horrible experience, or an experience that is dangerous.

Lord Clement-Jones: Is that the core of what we would need by way of information from Facebook, or other platforms, or is there other information about data? What else do we need to be really effective in risk assessment?

Frances Haugen: I think there is an opportunity. Imagine that for each of those integrity systems Facebook had to show you a sampling of content at different scores. A problem that I am really concerned about is that in many languages Facebook has trouble differentiating extreme terrorism content and counterterrorism content. Think about the role of counterterrorism content in society. It is how people help to make society safer. Because Facebook's AI does not work very well in the language in question, which I believe was an Arabic dialect, 76% of the counterterrorism content was getting labelled as terrorism.

If Facebook had to disclose content at different scores, we could check and say, “Interesting. This is where your systems are weak”, and for which languages, because each language performs differently. This is really important: if there were a firehose for Facebook, and Facebook had to disclose what the scoring parameters were, I guarantee that researchers would develop techniques for understanding what role those scores play in amplifying which kinds of content.

John Nicolson: Thank you so much for joining us. You might be interested to know that you are trending on Twitter right now, so people are listening.

Frances Haugen: Fantastic.

John Nicolson: I thought the most chilling sentence you have come out with so far this afternoon, and I wrote it down, was: “Anger and hate is the easiest way to grow on Facebook”. That is shocking. What a horrendous insight into contemporary society on social media that that should be the case.

Frances Haugen: One report from Facebook demonstrates how there are different kinds of feedback cycles that are all playing in concert. It said that if you look at a single publisher at a time, take all of its content on Facebook and look at the average hostility of its comment threads, you will see that the more hostile the comment thread, the more likely a click will go out to that publisher.

Anger incites traffic and hours spent on the platform, which means profit. We also see that people who want to grow really fast have a technique of harvesting viral content from other groups and spreading it into their own pages and to their groups. They are biased towards stuff that gets an emotional reaction, and the easiest emotional reaction to get is anger. Psychological research has shown this for decades.

Q166       John Nicolson: Those of us who are adults, or aspiring adults like members of the committee, will find that hard enough to deal with, but for children it is particularly challenging, is it not? I would like to follow up on some of Baroness Kidron’s very good questions specifically on harm to children. For people who do not know, what percentage of British teenagers, of those who feel like this, can trace their desire to kill themselves (I cannot even believe I am saying that sentence) back to Instagram?

Frances Haugen: I do not remember the exact statistic. I think it was around 12% or 13%.

John Nicolson: It is exactly that. Body image is also made much worse, is it not?

Frances Haugen: Yes.

John Nicolson:  For people who do not understand that, why should it be that being on Instagram makes you feel bad about the way your body looks?

Frances Haugen: Facebook's own reports say that it is not just that Instagram is dangerous for teenagers; it is actually more dangerous than other forms of social media.

John Nicolson: Why?

Frances Haugen: TikTok is about doing fun activities with your friends; it is not for comments. Snapchat is about faces and augmented reality. Reddit is at least vaguely about ideas. Instagram is about social comparison and about bodies. It is about people's lifestyles. That is what ends up being worse for kids.

There is also an effect where a number of things are different about a life mediated by Instagram compared with what high school used to be like. When I was in high school, it did not matter if your experience at high school was horrible; most kids had good homes to go home to, and at the end of the day they could disconnect. They would get a break for 16 hours. Facebook's own research says that now the bullying follows children home. It goes into their bedrooms. The last thing they see at night is someone being cruel to them. The first thing they see in the morning is a hateful statement. And that is just so much worse.

John Nicolson: So they do not get a moments peace.

Frances Haugen: They do not get a moments peace.

John Nicolson: If you are being bullied, you are being bullied all the time.

Frances Haugen: Yes.

John Nicolson: You have already told the Senate, and you have told us, what Facebook could do to address some of these issues, but some of your answers were quite complicated.

Frances Haugen: Sorry.

John Nicolson: Perhaps you could tell us, in a really simple way that anybody can grasp, what Facebook could do to address those issues: children who want to kill themselves, children who are being bullied, children who are obsessed with their body image in an unhealthy way, and all the other issues you have addressed. What can Facebook do now without difficulty to solve those issues?

Frances Haugen: There are a number of factors that interplay and drive those issues. On a most basic level, children do not have as good self-regulation as adults do. That is why they are not allowed to buy cigarettes. When kids describe their usage of Instagram, Facebook's own research describes it as an addict's narrative. The kids say, “This makes me unhappy. I feel like I don’t have the ability to control my usage of it, and I feel that if I left I’d be ostracised”. I am deeply worried that it may not be possible to make Instagram safe for a 14 year-old, and I sincerely doubt that it is possible to make it safe for a 10 year-old.

John Nicolson: So they should not be on it.

Frances Haugen: I would love to see a proposal from an established independent agency that had a picture of what a safe version of Instagram for a 14 year-old looks like.

John Nicolson: You do not think such a thing exists.

Frances Haugen: I am not aware of something like that.

John Nicolson: Does Facebook care whether or not Instagram is safe for a 10 year-old?

Frances Haugen: What I find very deeply misleading about Facebook's statements regarding children is that it says things like, “We need Instagram Kids, because kids are going to lie about their age, so we might as well have a safe thing for them”. Facebook should have to publish what it does to detect 13 year-olds on the platform, because I guarantee that what it is doing today is not enough. Facebook's own research shows that Facebook can guess how old you are with a great deal of precision, because it can look at who your friends are and who you hang out with.

John Nicolson: You are at school. That is a bit of a giveaway. If you are wearing a school uniform, the chances are that you are under 20.

Frances Haugen: I want to disclose a very specific thing. The Senate found this when we disclosed the documents to it. It found that Facebook had estimated the ages of teenagers and worked backwards to figure out how many kids lied about their ages and how many were on the platform. It found that, for some cohorts, 10% to 15% of 10 year-olds were on the platform. Facebook should have to publish those stats every year so that we can grade how good it is at keeping kids off the platform.

John Nicolson: So Facebook can resolve this if it wants to do so.

Frances Haugen: Facebook could make a huge dent in this if it wanted to. It does not, because it knows that young users are the future of the platform and that the earlier it gets them, the more likely it will get them hooked.

John Nicolson: Obviously, young users are no different from the rest of us. They are also getting to see all the disinformation about Covid and everything else that the rest of us are getting to see. Just remind us what percentage of disinformation is being taken down by Facebook.

Frances Haugen: I do not know that stat off the top of my head.

John Nicolson: From what I understand, I believe it is 3% to 5%.

Frances Haugen: That is for hate speech, but I am sure it is approximately the same.

John Nicolson: I would guess so.

Frances Haugen: It is probably even less, given that the only information counted as false on Facebook is information that has been verified as false by a third-party fact-checking system. That can only catch viral misinformation, that is, misinformation that goes to half a million or a million people. I do not believe that there is anywhere near as much third-party fact-checking coverage for the UK compared with the United States.

John Nicolson: The wonders of texting tell me that the figure is 10% to 20% for disinformation, so I stand corrected. It is 10% to 20% for disinformation and 3% to 5% for hate speech. A vast amount of disinformation and hate speech is getting through to children, which must present children with a very peculiar and jaundiced sense of the world. We have absolutely no idea, do we, how those children are going to grow up, change and develop and mature, having lived in this very poisonous society at a very delicate stage in their development?

Frances Haugen: I am extremely worried about the developmental impacts of Instagram on children. Beyond the fact that if you get an eating disorder when you are 16 you may have osteoporosis for the rest of your life (there will be women walking on this earth in 60 years’ time with brittle bones because of choices Facebook made now), the secondary thing I am super-scared about is that kids are learning that people they care about treat them cruelly. Kids on Instagram, when they are removed from the feedback of watching someone cry or watching someone wince, are much more hateful and much meaner to people, even their friends. Imagine what domestic relationships will be like for those kids when they are 30 if they learn that people who care about them are mean to them.

Q167       John Nicolson: It is a very disturbing thought. The other very disturbing thing that you have told us about, which I think most people have not focused on, is the idea that language matters. We think Facebook is bad now, but what we tend not to realise in our very Anglocentric culture is that all the other languages around the world are getting no moderation of any kind at all.

Frances Haugen: I think the thing that should scare you even more, living in the UK, is that the UK is a diverse society, and those languages are not happening abstractly in Africa but in a raw, dangerous version of Facebook, a version of Facebook that Mark has said himself is dangerous. Engagement-based ranking is dangerous without AI. That is what Mark Zuckerberg said. Those people are also living in the UK and being fed misinformation that is dangerous and that radicalises people. Language-based coverage is not just a good-for-individuals thing; it is a national security issue.

John Nicolson: That is interesting. On the social front, you pointed out that there might be differences between the United Kingdom and the United States, which it is not picking up. I have said this to the committee before. I have personal experience of this in Twitter where as a gay man I was called a greasy bender on Twitter. I reported it to Twitter, and Twitter wrote back and told me there was nothing wrong with being called a greasy bender. I wrote back giving exact chapter and verse from their community standards that showed it was unacceptable, and somebody wrote back to me, presumably from California, telling me that it was absolutely acceptable. To be generous, it may just be that they did not know what a bender was because the word is not in use in the United States, but, honestly, I think I would have googled if I had been them just to find out why this MP was being so persistent about this particular word.

In a nutshell, what do you want us to do in this committee? What is the most useful thing in addressing the concerns that you have raised here?

Frances Haugen: I want to be clear: bad actors have already tested Facebook. They have tried to hit the rate limits. They have tried experiments with content. They know Facebooks limitations. The only ones who do not know Facebooks limitations are good actors. Facebook needs to disclose what its integrity systems are and which languages it works in, and the performance per language or per dialect, because I guarantee you that safety systems designed for English probably do not work as well on UK English versus American English.

John Nicolson: All this makes Facebook sound relatively benign, as if it is just not doing quite what it should be doing, but what your evidence has shown us is that Facebook is failing to prevent harm to children, it is failing to prevent the spread of disinformation, and it is failing to prevent hate speech. It has the power to deal with those issues and it is just choosing not to. It makes me wonder whether Facebook is fundamentally evil. Is Facebook evil?

Frances Haugen: I cannot see into the hearts of men. I think there is a real thing about good people, and Facebook is overwhelmingly full of conscientious, kind, empathetic people

John Nicolson: Who have to leave.

Frances Haugen: They are good people who are embedded in systems where bad incentives have led to bad actions. There is a real pattern of people who are willing to look the other way being promoted more than people who raise alarms.

John Nicolson: We know where that leads in history, do we not?

Frances Haugen: Yes.

John Nicolson: Could we compromise and say that it is not evil (maybe that is an overly moralistic word) but that some of the outcomes of Facebook's behaviour are evil?

Frances Haugen: I think it is negligent.

John Nicolson: Malevolent.

Frances Haugen: Malevolent implies intent, and I cannot see into the hearts of men. I believe there is a pattern of inadequacy. Facebook is unwilling to acknowledge its own power. It believes in a world of flatness, which hides the difference, for example, that children are not adults. It believes in flatness and it will not accept the consequences of its actions. I think that is negligence and ignorance, but I cannot see into their hearts so I do not want to consider ill of them.

John Nicolson: I respect your desire, obviously, to answer the question in your own way, but from the evidence that you have given us, a reasonable person running Facebook, seeing the consequences of the company's behaviour, would, I imagine, have to conclude that what it was doing (the way the company was performing and the outcomes) was malevolent, and would want to do something about it.

Frances Haugen: I sincerely hope so.

John Nicolson: Back to you, Chair.

Frances Haugen: Do you mind if I rest my voice for five minutes? Could we take a break for a second? Sorry, I do not know how long we are going to go and whether we are going for two and a half hours. Never mind. Ask your question.

The Chair: On the point about intent, someone may not have intended to do a bad thing, but if their actions are causing that and they are told about it and they do not change their strategy, what do you say about them then?

Frances Haugen: I am a big proponent of looking at systems and how they perform. Actually, this is a huge problem inside Facebook. Facebook has the philosophy that if it establishes the right metrics, it can allow people free rein. As I said, it is intoxicated by flatness. Facebook has the largest open floorplan office in the world; it is a quarter of a mile long in one room. It believes in flatness. It believes that if you pick a metric, you can let people do whatever they want to move that metric, and that is all you have to do, and if you had better metrics, you could do better actions.

That ignores the fact that if you learn from the data that that metric is leading to harm, which is what meaningful social interactions did, the metric can get embedded, because now there are thousands of people who are all trying to move it, and people get scared to change a metric and cause colleagues to miss their bonuses.

I think it is a real thing that there is no will at the top. Mark Zuckerberg has unilateral control over 3 billion people. There is no will at the top to make sure that these systems are run in an adequately safe way. Until we bring in a counterweight, I think things will be operated for the shareholders’ interests and not for the public interest.

The Chair: Thank you. Joining us remotely, Dean Russell.

Q168       Dean Russell: Thank you again, Ms Haugen, for joining us today. It is incredibly important and your testimony has been heard loud and clear. I want to pick up a point about addiction. If Facebook was optimising its algorithms in the same way that a drug company might try to increase the addictiveness of its product, or was viewed to be doing so, it would probably be viewed very differently.

I wonder if you could explore a bit further this role of addiction, and whether Facebook is doing something we have perhaps never seen in history before, which is creating an addictive product that is not consumed through taking a drug, as it were, but via a screen.

Frances Haugen: Inside Facebook, there are many euphemisms that are meant to hide the emotional impact of things. For example, the ethnic violence team is called the “social cohesion team”, because ethnic violence is what happens when social cohesion breaks down. For addiction, the metaphor is “problematic use”. People are not addicted; they have problematic use.

The reality is that using large-scale studies—100,000 people—Facebook has found that problematic use is much worse in young people than people who are older. The bar for problematic use is that you have to be self-aware enough and honest enough with yourself to admit that you do not have control of your usage and it is harming your physical health, your schooling or your employment.

Problematic use peaks for 14-year-olds. In their first year on the platform, they have not quite developed problematic use yet, but, by the time they get to be 14, between 5.8% and 8% of kids say they have problematic use, and that is a huge problem. If that many 14-year-olds are that self-aware and that honest, the real number is probably 15% or 20%.

I am deeply concerned about Facebook’s role in hurting the most vulnerable among us. Facebook has studied who has been most exposed to misinformation, and it is people who have been recently widowed, people who were recently divorced and people who moved to a new city—people who are socially isolated. I am deeply concerned that it has made a product that can lead people away from their real communities and isolate them in rabbit holes and filter bubbles.

What you find is that when targeted misinformation is sent to a community, it can become hard for people to reintegrate into larger society, because they do not have shared facts. That is the real harm. I like to talk about the idea of the misinformation burden, because it is a burden when we encounter this kind of information. Facebook, right now, has no incentive to aim for high-quality, shorter sessions. Imagine if there was a sin tax of a penny an hour. That is only dollars a year per user for Facebook. Imagine if that sin tax pushed Facebook to have shorter sessions that were higher quality. Nothing today is incentivising it to do that. All the incentives say that if you can get people to stay on the platform longer you will get more ad revenue; you will make more money.

Q169       Dean Russell: Thank you. That is very helpful. The discussion about the Bill that we are looking at—the Online Safety Bill—is often about the comparison with being a publisher or publishing platform, but should we be looking at it much more as a product, one which, as you say, is in essence causing addiction in young people and, as you mentioned earlier, pushing them to chase a greater high through the dopamine in their brains?

We have heard previous testimony from experts highlighting that children’s brains seem to be changing because they are using Facebook and other platforms to a large extent over many hours. If they were being given a white powder and they were having the same symptoms and the same outcomes, we would be very quick to clamp down on that, but because it is via a screen and we call it Facebook and we think everyone is using it nicely, that does not happen.

I should be interested in your view on the impact on children. Has Facebook been looking at that? Should we be approaching this with regard to Facebook being a product rather than a platform?

Frances Haugen: I find it really telling that if you go to Silicon Valley and you look at the most elite private schools, they often have zero social media policies. They try to establish cultures where you do not use phones and you do not connect with each other on social media. The fact that that is a trend in elite private schools in Silicon Valley should be a warning to us all.

It is super-scary to me that we are not taking a safety-first perspective with regard to children. Safety by design is so essential with kids, because the burden that we have set up to now is the idea that the public has to prove to Facebook that Facebook is dangerous. Facebook has never had to prove that its product is safe for children. We need to flip that script. With pharmaceuticals, we said a long time ago that it is not the obligation of the public to say that a medicine is dangerous; it is the obligation of the producer to say that the medicine is safe. We have done that over and over again. This is the right moment to act. This is the moment to change that relationship with the public.

Dean Russell: With regard to that point about addiction, are you aware of any studies, in the documents you have seen, where Facebook has looked at how it can increase addiction through its algorithms?

Frances Haugen: I have not seen any documents that are as explicit as saying that Facebook is trying to make addiction worse, but I have seen documents where, on one side, someone is saying that the number of sessions per day that someone has—the number of times they visit Facebook—is indicative of their risk of exhibiting problematic use.

On the other side, they are clearly not talking to each other. Someone says, “Interesting. An indicator that people will still be on the platform in three months is if they have more sessions every day. We should figure out how to drive more sessions”. That is an example where, because the management style is flat, there is not enough cross-promotion and cross-pollination, and the side that is responsible for growing the company is often kept away from the side that highlights harms. That kind of unintegrated world creates dangers, and it makes the problem worse.

The Chair: I am sorry to interrupt, but we will pause there. It is 25 to four, and we have been going for over an hour, so we will take a 10-minute break at this point. Thank you.

Frances Haugen: That would be lovely. Thank you so much.

The committee suspended for 10 minutes.

The Chair: Thank you. The evidence session will now resume. I would like to ask Dean Russell to continue with his questions.

Q170       Dean Russell: Thank you, Chair. Thank you again, Ms Haugen, for your responses earlier. I have a few more questions, but hopefully it will not take too long. One of them continues on the addictivity—if there is such a word—of Facebook and similar platforms. You mentioned before that you had not seen any specific research in that area. Is there any awareness within Facebook of the actual effect of long use of Facebook and similar platforms on children’s brains as they are developing?

Frances Haugen: There is an important question to be asked: what is the incremental value added to a child after some number of hours of usage per day? I am not a child psychologist. I am not a neurologist. I cannot advise on what that time limit should be, but we should weigh a trade-off. It is possible to say that there is value that is given from Instagram, but there is a real question of how valuable the second hour is after the first hour and how valuable the third hour is after the second hour, because the impacts are probably more than cumulative. They probably expand substantially more over time. Those are great questions to ask, but I do not have a good answer for you.

Dean Russell: Thank you. Finally on that point, before I move on to a small extra point, do you think, from your experience, that the senior leadership at Facebook, including Mark Zuckerberg, actually care if they are doing harm to the next generation of society, especially children?

Frances Haugen: I cannot see into the hearts of men, so I do not know what their position is. I know that there is a philosophy inside the company that I have seen repeated over and over again, which is that people focus on the good and there is a culture of positivity. That is not always a bad thing. The problem is that when it is so intense that it discourages people from looking at hard questions, it becomes dangerous. It has not adequately invested in security and safety. When it sees a conflict of interest between profits and people, it keeps choosing profits.

Dean Russell: You would agree that the fact that it has not investigated or even done research into this area is a sign that it perhaps does not care.

Frances Haugen: It needs to do more research. It needs to take more action. It needs to accept that it is not free, that safety is important and is a common good, and it needs to invest more in it.

Q171       Dean Russell: Thank you. On a slightly different point, if I may, you are obviously now a globally known whistleblower. One of the aspects that we have looked at over the past few weeks is anonymity. One of the regular points that is made is that if we pushed on anonymity in this Bill, it would do harm to people who want to be whistleblowers in the future. I want to get your sense of whether you agree with that and whether you have any particular view on anonymity.

Frances Haugen: I worked on Google+ in the early days. I was the person in charge of profiles on Google+ when Google, internally, had a small crisis over whether or not real names should be mandated. There was a movement inside the company called Real Names Considered Harmful, and it detailed at great length all the different populations that are harmed by excluding anonymity, groups like domestic abuse survivors whose personal safety may be at risk if they are forced to engage with their real name.

It is important to weigh the incremental value of requiring real names. Real names are difficult to implement. Most countries in the world do not have digital services where we could verify someone’s ID versus their picture on a database. In a world where someone can use a VPN and claim that they are in one of those countries and register a profile, that means that they could still do whatever action you are afraid of them doing today.

Secondly, Facebook knows so much about you. If it is not giving you information to facilitate investigations, that is a different question. Facebook knows a huge amount about you today. The idea that you are anonymous on Facebook is not accurate for what is happening, and we still see the harms.

Thirdly, the real problem is the systems of amplification. This is not a problem about individuals; it is about having a system that prioritises and mass distributes divisive, polarising, extreme content. In situations where you just show more content from your family and friends, you get for free safer, less dangerous content. That is the greater solution.

Dean Russell: Very finally, on anonymity for this report, are you saying that we should be focusing more on the proliferation of content to large numbers than on the anonymity of the source of the content?

Frances Haugen: Yes. The much more scalable effective solution is thinking about how content is distributed on these platforms, what the biases of the algorithms are, what they are distributing more of, and concentration. Are certain people being pounded with bad content? It happens on both sides; there are both people being hyper-exposed to toxicity and hyper-exposed to abuse.

Dean Russell: Thank you.

Q172       The Chair: Thank you. I want to ask about the point you just made on anonymity. From what you are saying, it sounds like anonymity currently hides the identity of the abuser from their victim, but not from the platform.

Frances Haugen: Platforms have far more information about accounts than I think people are aware of. Platforms could be more helpful in identifying those connections in cases of crimes. It is a question of Facebook’s willingness to act to protect people more than a question of whether those people are anonymous on Facebook.

The Chair: It is a particularly pertinent point that we are having this debate on anonymity. One of the concerns is that, if you say the platform should always know who the account user is so that if there was a request from law enforcement it could comply with it, some people would say that if we do that there is a danger of its systems being hacked or that information being got at in another way.

From what you are saying, practically the company already has that data and information anyway. It knows so much about each one of its users regardless of the settings for an account. Obviously, on Facebook, you have to use your own name for the account in theory. In practical terms, anonymity does not really exist because the companies know so much about you.

Frances Haugen: You could imagine designing Facebook in a way where, as you use the platform more, you have more reach—the idea being that reach is earned, not a right. In that world, as you interact with the platform more, the platform will learn more and more about you. The fact that, today, you can make a throwaway account and take an action opens up all sorts of doors. I want to be clear: in a world where you require people’s ID, you will still have that problem, because Facebook will never be able to mandate that for the whole world. Lots of countries do not have those systems, and as long as you can pretend to be in such a country and register an account you are still going to see those harms.

The Chair: Thank you.

Q173       Lord Black of Brentwood: I join other colleagues in thanking you so much for being here today, Ms Haugen. This is so important to us.

The Bill, as it stands, exempts legitimate news publishers, and the content that comes from legitimate news publishers, from its scope, but there is no obligation on Facebook, and indeed the other platforms, to carry that journalism. Instead, it will be up to them to apply the codes that are laid down by the regulator, directed by the Government in the form of the Secretary of State, ostensibly to make their own judgments about whether or not to carry it. It will be AI that is doing that, it will be the black box that is doing that, which leads to the possibility, in effect, of censorship by algorithm.

In your experience, do you trust AI to make those sorts of judgments, or will we get to the sort of situation where all legitimate news about terrorism is, in effect, censored out because the black box cannot differentiate between news about terrorism and content that is promoting terrorism?

Frances Haugen: There are a couple of different issues to unpack. The first question is about excluding journalism. Right now, my understanding of how the Bill is written is that a blogger could be treated the same as an established outlet that has editorial standards. People have shown over and over again that they want high-quality news. People are willing to pay for high-quality news. It is interesting that one of the highest rates of news subscription is among 18-year-olds. Young people understand the value of high-quality news. When we treat a random blogger and an established, high-quality news source the same, we dilute people’s access to high-quality news. That is the first issue. I am very concerned that if you exempt it across the board, you are going to make the regulations ineffective.

The second question is whether AI can identify safe versus dangerous content. Part of why we need to force Facebook to publish which integrity systems exist in which languages, along with performance data, is that, right now, those systems do not work. Facebook’s own documents say it confuses content promoting terrorism with counterterrorism speech at a huge rate. The number I saw was that 76% of counterterrorism speech in an at-risk country was getting flagged as terrorism and taken down.

Any system where the solution is AI is a system that is going to fail. Instead, we need to focus on slowing the platform down, making it human scale and letting humans choose what we focus on, and not letting an AI, which is going to be misleading us, make that decision.

Lord Black of Brentwood: What, practically, could we do in this Bill to deal with that problem?

Frances Haugen: Great question. Mandatory risk assessments with standards like how good a risk assessment needs to be and analysis of things like segmentation—understanding whether some people are hyper-exposed—are critical. The most important part is having a process where it is not just Facebook articulating harms; it is also the regulator going out and collecting harms from other populations and turning back to Facebook and saying, “You need to articulate how you are going to solve these problems”, because, right now, the incentives for Facebook are aligned with its shareholders. The point of regulation is to pull that centre of mass back towards the public good.

Right now, Facebook does not have to solve those problems. It does not have to disclose that they exist, and it does not have to come up with solutions. But a world where it was regulated and mandated—“You have to tell us what the five-point plan is on each of those things, and if it’s not good enough we’re going to come back to you and ask you again”—is a world where, instead of investing in 10,000 engineers to build the metaverse, Facebook would have an incentive to use 10,000 engineers to make us safer, and that is the world we need.

Lord Black of Brentwood: So we need to give the power to the regulator to be able to do that, because at the moment the Bill, as I understand it, does not.

Frances Haugen: I believe that, if Facebook does not have standards for those risk assessments, it will give you a bad risk assessment, because Facebook has established over and over again that when asked for information it misleads the public. I do not have any expectation that it will give you a good risk assessment unless you articulate what a good one looks like. You have to be able to mandate that it gives solutions because, on a lot of these problems, Facebook has not thought very hard about how to solve them or because there is no incentive forcing it away from shareholder interests, and when it has to make sacrifices like 1% growth here or 1% growth there, it chooses growth over safety.

Q174       Lord Black of Brentwood: Leading on from that, I have a very quick general question. As things stand at the moment, do you think that this Bill is keeping Mark Zuckerberg awake at night?

Frances Haugen: I am incredibly excited and proud of the UK for taking such a world-leading stance with regard to thinking about regulating social platforms. The global south countries currently do not have the resources to stand up and save their own lives; they are excluded from these discussions. The UK has a tradition of leading policy in ways that are followed around the world. I cannot imagine that Mark is not paying attention to what you are doing, because this is a critical moment for the UK to stand up and make sure that the platforms are in the public good and are designed for safety.

Lord Black of Brentwood: We probably need to do a little more in the Bill to make sure that that is the case. That is what you are saying.

Frances Haugen: I have faith in you guys.

Lord Black of Brentwood: Thank you very much.

Q175       The Chair: You present a very compelling argument on the way regulation should work. Do you not think it is disingenuous of companies like Facebook to say, “We welcome regulation. We actively want parliaments around the world to regulate”? Nick Clegg was critical of Congress for not regulating, yet the company does none of the things that you said it should and does not share any of that information with the oversight board that it created, in theory, to have oversight of what it does.

Frances Haugen: It is important to understand that companies work within the incentives and the context they are given. Today, Facebook is scared that if it freely disclosed information—if it was not requested by a regulator—it might get a shareholder lawsuit. It is really scared about doing the right thing, because in the United States it is a private company and has a fiduciary duty to maximise shareholder value. When it is given small choices between 5% or 10% more misinformation and 1% of sessions, it chooses sessions and growth over and over again.

There is an opportunity to make the lives of Facebook’s rank-and-file employees better by giving appropriate goalposts for what a safe place is. Right now, I think there are a lot of people inside the company who are uncomfortable about the decisions that they are being forced to make within the incentives that exist. Creating different incentives through regulation gives them more freedom to do things that they might be aligned with in their hearts.

The Chair: Going back to the oversight board, it does not have access to data.

Frances Haugen: No.

The Chair: So much of your argument is that data drives engagement, which drives up revenue, and that is what the business is all about. The oversight board cannot see that. Even when it is asked for it, it is told that it cannot have it. To me, that does not look like a company that wants to be regulated.

Frances Haugen: Like I said before, I cannot see into men’s hearts and I cannot see into motivations. I am an odd nerd in that I have an MBA. Given the laws in the United States, it has to act in the shareholders’ interests, or it has to be able to justify doing something else. A lot of the long-term benefits are harder to prove. If you make Facebook safer and more pleasant, it will be a more profitable company 10 years from now, because the toxic version of Facebook is slowly losing users. At the same time, the actions in the short term are easy to prove, and I think it worries that if it does the right thing it might get a shareholder lawsuit. I just do not know.

Q176       Suzanne Webb: Thank you so much, Frances, for being here. It is truly appreciated, as is everything you have done to get yourself here over the last year.

What will it take for Mark Zuckerberg and the Facebook executives to be accountable? Do you think they are aware of the human cost that has been exacted? I feel that they are not accountable enough, and there has been a human price.

Frances Haugen: It is very easy for humans to focus on the positive over the negative. It is important to remember that Facebook is a product that was built by Harvard students for other Harvard students. When a Facebook employee looks at their news feed, it is likely that they see a safe, pleasant place where pleasant people discuss things together. Their immediate visceral perception of what the product is and what is happening in a place like Ethiopia are completely foreign worlds.

There is a real challenge of incentives and I do not know if all the information that is really necessary gets very high up in the company—where the good news trickles up, but not necessarily the bad news. It is a thing where executives see all the good they are generating, and then they can write off the bad as the cost of all that good.

Suzanne Webb: I am guessing that now, probably having watched from afar what has been going on here in Westminster, they are probably very much aware of what has been going on. I truly hope that they are, bearing in mind all the evidence sessions that we have had, the people coming here with stories that are quite unbelievable, and the loss of life. To your knowledge, has it ever been admitted, internally or privately, that they got it wrong?

Frances Haugen: There are many employees internally. The key thing that you will see over and over again in their reporting on these issues is that countless employees said, “We have lots of solutions. We have lots of solutions that do not involve picking good and bad ideas. It is not about censorship. It is about the design of the platform. It is about how fast it is and how growth optimised it is. We could have a safer platform, and it could work for everyone in the world, but it will cost little bits of growth”. There is a real problem that those voices do not get amplified internally because they are making the company grow a little slower, and it is a company that lionises growth.

Q177       Suzanne Webb: What is your view on criminal sanctions for online harm content? Do you believe there is a route for criminal sanctions?

Frances Haugen: My philosophy on criminal sanctions for executives is that they act like gasoline on the law. Whatever the terms and conditions of a law are, if you have really strong confidence that you have picked the right thing, they will amplify those consequences. But the same is true if there are flaws in the law. It is hard for me to articulate, given where the law stands today, whether or not I would support criminal sanctions. It is true that they make executives take consequences more seriously. It depends on where the law ends up, in the end.

Suzanne Webb: Thank you. You mentioned earlier that it is easier to promote hate and anger. I know you touched on that earlier and had that conversation. Quick question: is the promotion of hate and anger by accident or by design?

Frances Haugen: Facebook has repeatedly said, “We have not set out to design a system that promotes anger and divisive, hateful content”. It said, “We never did that. We never set out to do that”. But there is a huge difference between what it set out to do, which was to prioritise content based on its likelihood to elicit engagement, and the consequences of that. I do not think it set out to accomplish those things, but it has been negligent in not responding to the data as it is produced. There is a large number of data scientists internally who have been raising these issues for years.

The solution that Facebook has implemented, in countries where it has civic classifiers, which is not many countries and languages in the world, is that it is removing some of the most dangerous terms from engagement-based ranking. That ignores the fact that the most vulnerable, fragile places in the world are linguistically diverse. Ethiopia has 100 million people, and they speak six languages. Facebook only supports two, and it has only a few integrity systems. If we believe in linguistic diversity, the current design of the platform is dangerous.

Suzanne Webb: Online harm has been out there for some time. We are all aware of it. It is very much in the public domain, as I touched on briefly before. Why are the tech companies not doing anything about it? Why are they having to wait for this Bill to come through to make the most obvious changes to what is basically proliferating online harm? As I said, there is human loss to this. Why are they not doing something now about it?

Frances Haugen: As we look at the harms of Facebook, we need to think about these things as system problems. It is the idea that these systems are designed products—these are intentional choices—and it is often difficult to see the forest for the trees. Facebook is a system of incentives. It is full of good, kind, conscientious people who are working with bad incentives. There is a lack of incentives inside the company to raise issues about flaws in the system, and there are lots of rewards for amplifying and making things grow more.

The big challenge of Facebook’s management philosophy, which is that it can just pick good metrics and let people run free, is that the company has found itself in a trap. In a world like that, how do you propose changing a metric? It is very hard, because 1,000 people might have directed their labour for six months towards moving that metric, and changing it will disrupt all that work. I do not think any of it was intentional. I do not think they set out to go down this path, but they are kind of trapped in it. That is why we need regulation—mandatory actions—to help pull them away from the spiral they are caught in.

Suzanne Webb: Thank you.

Q178       Debbie Abrahams: I reiterate all my colleagues’ thanks to you for coming over and giving evidence to us.

In an interview you gave recently, you said, “Facebook consistently resolved conflicts in favour of its own profits”. You have sprinkled examples of those conflicts throughout the testimony you have given so far. Can you pick two or three that you think really highlight the point?

Frances Haugen: Overall, there is its claim that engagement-based ranking is safe once you have AI. That is the flagship example, because Facebook has non-content-based tools that it could use to keep the platform safe instead. For example, limiting reshare chains—as I said, to two hops—would carve off 1% of growth. Requiring someone to click on a link before they reshare it is something Twitter has done. Twitter accepted the cost of that change, but Facebook was not willing to.

There are lots of things around language coverage. Facebook could be doing much more rigorous safety systems for the languages that it supports, and it could be doing a better job of saying, “We have already identified what we believe are the most at-risk countries in the world, but we’re not giving them equal treatment. We’re not even out of the risk zone with them”. That pattern of behaviour of being unwilling to invest in safety is the problem.

Debbie Abrahams: Looking specifically at the events in Washington on 6 January, there has been a lot of talk about Facebook’s involvement in that. At the moment, that evidence is being looked at in terms of its positions. Would that be an example? Would somebody have highlighted it as a particular concern and taken it to the executives? I am absolutely horrified by what you say about the lack of risk assessment and risk management in the organisation. It is a gross dereliction of responsibility. Would that have been one example of where Facebook was aware of the potential harm that it could create—that was created—but chose not to do anything about it?

Frances Haugen: What is particularly problematic to me is that Facebook looked at its own product before the US 2020 election and identified a large number of settings—things as subtle as whether to amplify live videos 600 times or 60 times, because it wants live video at the top of your feed. It said that that setting is great for promoting live video, for making that product grow and having impact with that product, but it is dangerous, because on 6 January it was used for co-ordinating the rioters. Facebook looked at those risks across maybe 20 interventions and said, “We need to have these in place for the election”.

Facebook said that it turned them off because censorship is a delicate issue. That is so misleading. Most of those interventions had nothing to do with content. For example, with promoting live video 600 times versus 60 times, you have to ask, “Have you censored someone?” I do not think so. Facebook has characterised turning those interventions off as a matter of not believing in censorship. On the day of 6 January, most of those interventions were still off at 5 pm eastern time. That is shocking, because it could have turned them on seven days before. Either it is not paying enough attention for the amount of power it has or it is not responsive when it sees those things. I do not know what the root cause is. All I know is that it is an unacceptable way to treat a system that is as powerful and as delicate as that.

Debbie Abrahams: Your former colleague, Sophie Zhang, gave evidence to the committee last week, and she made the point that we have freedom of expression and freedom of information, but we do not have freedom of amplification. Would you agree with that in relation to censorship?

Frances Haugen: The current philosophy inside the company is almost that it refuses to acknowledge the power it has. It justifies the choices it is making based on growth. If it came in and said, “We need to do safety first. We need safety by design”, it would choose different parameters in optimising how amplification works. I want to remind people that we liked the version of Facebook that did not have algorithmic amplification. We saw our friends. We saw our families. It was more human scale. A lot of value and joy could come from returning to a Facebook like that.

Debbie Abrahams: You made the very important point that it is a private company and has a fiduciary responsibility to its shareholders and so on. Do you think there are breaches in its terms and conditions? Is there a conflict?

Frances Haugen: There are two issues. First, letting private companies define their own terms and conditions is like letting them mark their own homework. They are defining what is bad, and we know now that they do not even find most of the things that they say are bad. They do not have any transparency or accountability.

The second question is whether companies have duties beyond those to their shareholders. We have had a principle for a long time that companies cannot subsidise their profits at public expense. If you pollute the water and people get cancer, the public have to pay for those people. Similarly, if Facebook is sacrificing our safety because it does not want to invest enough, do not listen to it when it says, “We spend $4 billion on safety”. That is not the question. The question is: how much do you need to spend to make it safe?

Q179       Debbie Abrahams: One of the things that the committee has been looking at is a duty of care. Is that something that we should be considering very carefully to mandate?

Frances Haugen: A duty of care is really important. We have let Facebook act freely for too long. I like to say that there are multiple criteria necessary for Facebook to act completely independently. The first is that, when it sees conflicts of interest between itself and the public good, it resolves them aligned with the public good. The second is that it cannot lie to the public. Facebook, in both cases, has violated those criteria and demonstrated that it needs oversight.

Debbie Abrahams: Thank you so much. Finally, do you think the regulator will be up to the job?

Frances Haugen: I am not a lawmaker, so I do not know a ton about the design of regulatory bodies. Something like having mandatory risk assessments with a certain level of quality is a flexible enough technique that, as long as Facebook is required to articulate solutions, it might create a good enough dynamic. As long as there is also community input into the risk assessment, it might be a system that could be sustained over time. The reality is that Facebook will keep trying to run around the edges, so we need something that can continue over time, not just play whack-a-mole on specific pieces of content or specific instances.

Debbie Abrahams: Thank you so much, Frances.

The Chair: Thank you. Joining us remotely, Darren Jones.

Q180       Darren Jones: Thank you, Chair. Following on from the discussion, I have a few questions about how some of the provisions in this Bill might be operationalised in the day to day of Facebook.

First, there is a distinction in the Bill between illegal content such as terrorism and legal but harmful. The question about how you define what is harmful is based on the idea that a company like Facebook would reasonably foresee that something was causing harm. We have seen through some of the leaks over the last few weeks that Facebook undertakes research internally but maybe does not publish that or share the information with external researchers. If Facebook just stopped researching potential harms and claimed it had no reasonable foresight of new harms, would it be able to get around the risk assessment, in your view?

Frances Haugen: I am extremely worried about Facebook ceasing to do important research. It is a great illustration of how dangerous it is to have a company as powerful as Facebook where the only one that gets to ask questions of Facebook is Facebook. We probably need something like a postdoc programme where public interest people are embedded in the company for a couple of years and they can ask questions. They can work on real problems, they can learn about the systems, and they can go out and seed academia and train the next generation of integrity workers.

Legal but harmful content is dangerous. For example, Covid misinformation leads to people losing their lives. There are large, harmful societal consequences of that. I am also concerned that if you do not cover legal but harmful content, the Bill will have a much smaller impact. Especially on impacts to children, for example, a lot of the content we are talking about would be legal but harmful content.

Darren Jones: Thank you. I note that in your answers today you have said that there is not enough internal work to understand harms, and that data should be shared with external academics and regulators. I agree with you on those points.

Say the company found some new type of harmful content—a new trend or some type of content that was leading to physical or mental harm to individuals. We have talked today about how complex the Facebook world is, whether it is about content promotion or groups or messaging. How would you go about auditing and assessing how that harm is being shared within the Facebook environment? It sounds like a very big job to me.

Frances Haugen: One of the reasons why I am such a big advocate of having a firehose—picking some standard where if more than X thousand people see content it is not really private—is that you can include metadata about each piece of content. For example, did it come via a group? What group? Where on average in someone’s feed did this content show up?  Which groups are most exposed to it?

Imagine if we could tell that Facebook is actually distributing a lot of self-harm content to children. There is various metadata that we could be releasing. There is a really interesting opportunity that, once more data is accessible outside the company, a cottage industry will spring up among academics and independent researchers. If I had access to that data, I would start a YouTube channel and teach people about it—I can tell jokes for 15 minutes. There are opportunities where we will develop the muscle of oversight, but we will only develop it if we have at least a peephole to look into Facebook.

Darren Jones: If Facebook were to say to us, for example, “There is a unique and new type of harm that has been created, a new trend that has been appearing in private groups, and may be being shared a little bit, but because of the amount of content on the platform it is really difficult for us to find it and assess it properly”, you are saying that that is not true, and it has the capabilities to do that.

Frances Haugen: As I said earlier, it is really important for Facebook to have to publish which integrity systems exist. What content can they find? The public should be able to surface a concern and say, “Hey, we believe there is a harm here”, and then discover, “Oh, interesting, you don’t actually look for that harm”. The example I heard was of self-inflicted harm content and whether some kids are being overexposed to it. Facebook said, “We don’t track that”. We do not have a mechanism today to force Facebook to answer those questions. We need something mandatory so that we could say, “You need to be tracking this harm”.

I am sorry, I forgot your question. My apologies.

Q181       Darren Jones: You have broadly answered it. My question was whether it had the capacity to do that.

My last two questions are more about corporate governance of the business. I am interested to know, from your experience, how the different teams within the business operate. A question I have asked previously in this committee is this. The programmers who might be down the end of the corridor coding all these algorithms will know a certain amount of information. The product teams that build products will know a bit about what the programmers have done but probably not the whole thing and how it works properly. The compliance and PR teams will then just have to receive answers, probably from the product team, in order to produce the risk assessment that they then present to our regulator.

My concern is that the real truth about what is happening is with the programmers, and it may not get through unless we force it to in that audit, that submission, to our regulator. Am I wrong in those assumptions? Do those teams work together well in understanding what they all do and how it links together?

Frances Haugen: I think it is really important to know that there are conflicts of interests between those teams. One of the things that has been raised is the fact that, at Twitter, the team responsible for writing policy on what is harmful reports separately to the CEO from the team that is responsible for external relations with governmental officials. At Facebook, those two teams report to the same person. The person who is responsible for keeping politicians happy is the same person who gets to define what is harmful or not harmful content.

I think there is a real problem with the left hand not speaking to the right hand at Facebook. I gave the example earlier of, on the one hand, someone in integrity saying that one of the signs of problematic use or addiction is that you come back a lot of times a day. Then there is someone on the growth team saying, “Did you notice that, if we can get you to come back many times a day, you’ll still use the product in three months?”

It is as if there is a world that is too flat, where no one is really responsible. Antigone Davis in her Senate testimony, when she was pressed on Instagram, kids and various decisions, could not articulate who was responsible. That is a real challenge at Facebook. There is no system of responsibility or governance, so you end up in situations where you have one team, probably unknowingly, pushing behaviours that cause more addiction.

Darren Jones: We may need to look at forcing a type of risk committee, like an audit committee, in the business, where the relevant people are coming together at least to have a conversation.

Frances Haugen: It might be good to include in the risk assessment, “What are your organisational risk assessments, not just your product risk assessments?” The organisational choices of Facebook are introducing systemic risk.

Q182       Darren Jones: My very last question is this. This is a piece of law in the UK. It has some international reach, but it is obviously UK law. We have heard evidence that employees of different technology companies here in London will want to be helpful and give certain answers, but actually the decisions are really made in California. Do you think there is a risk that the way power is structured in California means that the UK-based team trying to comply with the law here may not be able to do what they need to do?

Frances Haugen: Facebook today is full of kind, conscientious people who work on a system of incentives that unfortunately leads to bad results and results that are harmful to society. There is definitely a centre of mass in Menlo Park. There is definitely a greater priority on moving growth metrics rather than safety metrics. You may have safety teams looking in London, or elsewhere in the world, whose actions will be greatly hindered, or even rolled back, on behalf of growth.

We were talking about the right hand and left hand. A real problem is that there are reports inside Facebook that an integrity team might spend months pushing out a fix that lowers misinformation by 10%, but because the AI is so poorly understood, people will add little factors that basically recreate whatever the term was that was fixed and reintroduce those harms. Over and over, if the incentives are bad, you will get bad behaviour.

Darren Jones: Understood, thank you. Those are my questions, Chair.

Q183       Lord Stevenson of Balmacara: I add my thanks to those of everybody else for the incredible picture that you have painted. We have mentioned that we recognise the problem you face by doing what you have done, but it is fantastic to get a real sense of what is happening inside the company. I want to pick up on what was being said just a few minutes ago. Some of your descriptions were almost like Nineteen Eighty-Four, when you talked about the names given to parts of the organisation that are supposed to be doing the opposite of what the names seem to imply.

That raises an issue. I want to ask about the culture. You ended up by saying that there were lots of people in Facebook who got what you were saying, but you also said that the promotion structure possibly pointed in a different direction and, therefore, these people did not necessarily get to the positions of power that you might expect. Organisations have a culture of their own.

In a sense, my question is about culture. Do you think there is the possibility that, with a regulatory structure of the type we are talking about being seen as a way forward in the way the world deals with these huge companies that we have never had to deal with before, there are sufficient people of good heart and good sense in the company to rescue it, or do you feel that somehow the duality that you are talking about—the left hand and the right hand—is so bad that it will never, ever recover itself and it has to be done by an external agency?

It is a very complicated question, but it really comes back to culture. Do you think there is a way in which it might happen?

Frances Haugen: Until the incentives that Facebook operates under change, we will not see changes from Facebook. Facebook is full of kind, conscientious and good people, but the systems reward growth. The Wall Street Journal has reported on how people have advanced inside the company. Disproportionately, the people who are the managers and the leaders of the integrity teams—the safety teams—come originally from the growth groups. The path to management in integrity and safety is via growth, and that seems to be very problematic.

Lord Stevenson of Balmacara: It is a bit doomed.

Frances Haugen: There is a need to provide external weight and a pull to move it away from just being optimised on short-termism and immediate shareholder profitability, and more towards the public good. I think that will lead to a more profitable and successful company 10 years down the road.

Lord Stevenson of Balmacara: You may not be able to answer this, in the sense that it may not be an easy question to answer. Inside the company, the things that you have been saying are being said at the water fountain and in the corridors. I think you said that people talk about these things. What stops it getting picked up as official policy? Is there actually a gatekeeper on behalf of the growth group who simply says, “You’ve had enough time talking about that. Move on”?

Frances Haugen: I do not think there is an explicit gatekeeper. It is not that there are things we cannot say, but I think there is a real bias. Experiments are taken to review, and the costs and the benefits are assessed. Facebook has characterised some of the things that I have talked about as being about censorship, saying, “We are against censorship”. The things that I am talking about are not content based. I am not here to advocate more censorship. I am asking: how do we make the platform more human scale? How do we move back from things like engagement-based ranking? It is about finding ways to move towards solutions that work for all languages.

In order to do that, we have to accept losing little bits of growth. I love the radical idea: Facebook has a shiny pile of cash, so what if it was not profitable for one year? What if, for one year, it focused on making the platform safe? What would happen? What kind of infrastructure would get built?

The bottom line is that, until the incentives change, Facebook will not change.

Q184       The Chair: I want to ask you about that. If I was in a charity that worked, say, with teenage girls who had self-harmed, and I said I have in this organisation the Facebook and Instagram profiles of lots of people who have interacted with the charity, and I want to reach out on Facebook and Instagram to other people who are like those people and see if we could help them before they do themselves too much harm—

Frances Haugen: That would be wonderful.

The Chair: I could go to Facebook and say, “Could I use a lookalike audience ad tool in order to reach those people?” It would happily sell that to me. It has the data to find the closest possible matches to young people who are self-harming, for the purpose of selling advertising. The way the platform is designed could not make that simpler. You could ask the same question and say, “Why don’t you do more to reach out and help people who are in danger of self-harming? Why don’t you stop it? Why don’t you practically reach out?”

Yet not only do they not do that, they will sell you ads to do it but they will not do it themselves, and worse than that, they are continually feeding those self-same vulnerable people with content that is likely to make them even more vulnerable. I do not see how that is a company that is good, kind and conscientious.

Frances Haugen: There is a difference between systems. I always come back to the question of what the incentives are and what system those incentives create. I can only tell you what I saw at Facebook. I saw kind, conscientious people, but they were limited by the actions of the system they worked under. That is part of why regulation is so important.

You give an example of one amplification of interests. Facebook has run the experiment where an account gets exposed to very centrist interests like healthy recipes. Just by following the recommendations on Instagram, that account is led to anorexia content very fast, within a week. That is just by following the recommendations, because extreme, polarising content gets rewarded by engagement-based ranking.

I have never heard described what you have just described—using the lookalike tools that exist today. If you want to target ads today, you can take an audience, maybe people who have bought your product previously, and find a lookalike audience. It is very profitable. Advertisers love that tool.

I have never thought about using it to reach out with critical content to people who might be in danger. Facebook loves to brag about the tools it has built to protect kids or to protect people who might have eating disorders. Those tools trigger in the order of hundreds of times a day—single hundreds, not hundreds of thousands—hundreds globally. Unquestionably, Facebook should have to do things like that and have partnerships with people who can help connect it to vulnerable populations. You are right that it has the tools, but it just has not done it.

The Chair: When you were working in the civic integrity team, when it existed, could you have made a request like that to Facebook? Could you have said, “Look, we have identified some accounts here and these are people we think are very problematic. We would like to use some of the ad tools to try to identify more people like that”? Would that have been a conversation you could have had?

Frances Haugen: I have had that conversation. There is a concept known as “defensibility” inside the company, whereby it is very careful about any action that it believes is not defensible. It is very hesitant to act on things that are statistically likely but not proven.

Let us imagine that you found some terrorists, and you were looking for other people who were at risk of being recruited for terrorism, or cartels. This happens in Mexico; the platforms are used to recruit young people into cartels. You can imagine using a technique like that to help people who are at risk of being radicalised. Facebook would come back and say, “There is no guarantee that those people are at risk, and we should not label them in a negative way, because that would not be defensible”.

There are things where coming in and changing the incentives, making them articulate risks and how to fix those risks, would mean that it would shift its philosophies pretty rapidly on how to approach problems. It would be more willing to act.

The Chair: But it would be defensible to sell guns to a terrorist using an ad tool. Their known interest in that subject would be defensible to use for selling advertising, but not defensible to use for civic integrity outreach.

Frances Haugen: I am not sure if Facebook allows gun sales.

The Chair: It was a hypothetical argument. You could feed someone’s addiction by advertising to them. You cannot use the same technology to reach out and help them.

Frances Haugen: It is actually worse than that. In my Senate testimony, one of the senators showed an example of an ad that was targeted at children. In the background of the ad was an image of a bunch of pills and tablets, very clearly a pile of drugs. It said something like, “If you want to have your best skittles party this weekend, reach out”. A skittles party, apparently, is youth code for a drug party. That ad was approved by Facebook. There is a real issue where Facebook says it has policies for things. It may have a policy that says, “We don’t sell guns”, but I bet there are tons of ads on the platform selling guns.

The Chair: It has a policy that it does not allow hate speech, but there is quite a lot of hate speech on Facebook.

Frances Haugen: Yes.

The Chair: You have one part of the business that, with razor-like focus—probably sharper than anything ever created in human existence—can target people’s addictions through advertising. Yet the other part of the business, which is meant to keep people safe, is largely feeling around in the dark.

Frances Haugen: There is great asymmetry in the resources that are invested to grow the company versus those to keep it safe.

The Chair: From what you said, there is not just asymmetry; you cannot even go to the people with all the data and all the information and say, “Can we use some of your tools to help us do our job?” They would say that it was not defensible to do that.

Frances Haugen: I never saw the usage of a strategy like that, but it seems like a logical thing to do. Facebook should be using all the tools in its power to fight these problems, but it is not doing that today.

The Chair: I think a lot of people would extrapolate from that that, if the senior management team really wanted to do it, it would have been done, but for some reason it appears that it does not.

Frances Haugen: There are cultural issues. There are definitely issues at the highest levels of leadership, where they are not prioritising safety sufficiently. It is not enough to say, “Look, we invested $14 billion”. We need to come back and say, “No, you might have to spend twice that to have a safe platform”. The important part is that it should be designed to be safe. We should not have to plead with them for it to be safe.

Q185       Baroness Kidron: Frances, I was really struck earlier when you said that it knows more about abuse but there is a failure of willingness to act. I think that was the phrase. It brought to my mind the particular case of Judy and Andy Thomas, whose daughter committed suicide, and who have struggled for the last year to get anything beyond an automated response to their request for access to her account. I made a small intervention. They eventually got an answer, but it basically said, “No, you can’t”. I have to be clear, for legal reasons, that it did not say, “No, you can’t”. It was a very complicated legal answer, but it was saying that it had to protect the privacy of third parties on Frankie’s account.

Do you think that privacy defence is okay in this setting, or is the report and complaint piece another thing that we really need to look at? It seems pretty horrendous not to give grieving parents some sort of completion.

Frances Haugen: From what you have described, there is a really interesting distinction between private and public content. It could have come in and said that, at least for the public content that she viewed—the worldwide available content—it could show you that content. I think it probably should have come in and done that.

I would not be surprised if it no longer had the data. It deletes the data after 90 days. Unless she was a terrorist and it was tracking her, which I assume she was not, it would have lost all that history within 90 days of her passing. That is a recurrent thing that Facebook does. It knows that, whatever its sins, they will recede into the fog of time within 90 days.

Baroness Kidron: I am interested in the idea of user privacy being a reason for not giving a parent of a deceased child access to what they were seeing; that more thematic piece.

Frances Haugen: I think there is an unwillingness at Facebook to acknowledge that it is responsible to anyone. It does not disclose data. There are lots of ways to disclose data in a privacy-conscious way. You just have to want to do it. Facebook has shown over and over again not just that it does not want to release that data but that, even when it does, it often misleads people and lies in its construction. It did this with researchers a couple of months ago. It literally released misleading data, built on assumptions that it did not disclose to the researchers.

In the case of those grieving parents, I am sure that Facebook has not thought holistically about the experience of parents who have had traumas on the platform. I am sure that her parents are not the only ones who have suffered that way. I think it is cruel of Facebook not to think about taking even minor responsibility after an event like that.

Q186       Baroness Kidron: A lot of colleagues have talked to you about children, and that is a particular interest of mine. The one thing that has not come up this afternoon is age assurance, specifically privacy-preserving age assurance. This is something that I am very worried about: age assurance, used badly, could drive more surveillance, and could drive more resistance to regulation. We need rules of the road, and I am interested to know your perspective on that.

Frances Haugen: It is a twofold situation. On one side, there are many algorithmic techniques that Facebook could be using to keep children off the platform, which do not involve asking for IDs or other forms of information disclosure.

Facebook does not currently disclose what it does, so as a society we cannot step in and say, “You actually have a much larger tool chest that you could be drawing on”. It also means that we do not understand what privacy violations are happening today. We have no idea what it is doing.

Secondly, we could be grading Facebook’s homework instead of relying on it to grade its own homework. Facebook has systems for estimating the age of any user. Within a year or two of someone turning 13, enough of their actual age mates have joined that it can accurately estimate that person’s real age. You would have Facebook publish the protocols—how it does that—and the results going back a couple of years, and say how many 10 year-olds and how many 12 year-olds were on the platform one, two, three, four years ago. It knows this data today and it is not disclosing it to the public. That would be a forcing function to make it do better detection of young people on the platform.
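
[Editorial illustration: a minimal sketch of the kind of age estimation described above, in which a user’s likely real age is inferred from the ages their friends declare. This is not Facebook’s published method; the numbers are invented.]

```python
# Once enough of a user's real-life age mates have joined, the ages their
# friends declare cluster around the user's true age.
from statistics import median

declared_age = 14                                              # what the account claims
friend_declared_ages = [11, 12, 12, 13, 12, 11, 12, 25, 12]    # hypothetical friend ages

estimated_age = median(friend_declared_ages)   # robust to a few outliers

if estimated_age < 13 and declared_age >= 13:
    print(f"Estimated age {estimated_age} contradicts declared age {declared_age}: flag for review")
```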

Baroness Kidron: I have been ticking off a list while you have been speaking this afternoon: mandatory risk assessments and mitigation measures; mandatory routes of transparency; mandatory safety by design; moderation with humans in the loop, I think you said; algorithmic design; and application of its own policies. Will that set of things in the regulator’s hand, in the Bill we are looking at—all of that—keep children safe? Would it save lives? Would it stop abuse? Would it be enough?

Frances Haugen: It would unquestionably be a much, much safer platform if Facebook had to take the time to articulate risks. The second part is that it cannot just articulate its risks. It has to articulate its path to solve those risks, and it has to be mandated. It cannot do a half solution. It needs to give you a high-quality answer.

Remember, we would never accept a car company with five times the accident rate coming out and saying, “You know, we’re really sorry, but brakes are so hard, right? We’re going to get better”. We would never accept that answer, but we hear it from Facebook over and over again. Between having more mandatory transparency, being privacy conscious, and having a process for the conversation about what the problems and the solutions are, there is a path that should be resilient in moving us forward to a safer Facebook, or safer social media in general.

Baroness Kidron: Thank you.

Q187       Lord Knight of Weymouth: We have been informed that your comments to the media on end-to-end encryption have been misrepresented. I am interested in whether we on this committee should be concerned about whether there is a regulatory risk with end-to-end encryption. Certainly there are security risks to end-to-end encryption that some parts of government are concerned about. I would like to give you the opportunity to clarify your position, and if you have any comment for us on whether we should be concerned about that area, I would be grateful for that, too.

Frances Haugen: I want to be very, very clear: I was mischaracterised in the Telegraph yesterday on my opinions about end-to-end encryption. End-to-end encryption is where you encrypt information on a device. You send it over the internet and it is decrypted on another device. I am a strong supporter of access to open-source end-to-end encryption software. I am probably such an advocate for open-source software in this case because, if you are an activist, if you are someone who has a sensitive need—a journalist or a whistleblower—your primary form of social software is an open-source end-to-end encryption chat platform. Part of why the open-source part is so important is that you can see the code. Anyone can look at it. The top open-source end-to-end encryption platforms are some of the only ways you are allowed to do chat in, say, the Defense Department in the United States.

Facebook’s plan for end-to-end encryption is concerning, because we have no idea what it is going to do. We do not know what it means. We do not know if people’s privacy is actually protected. It is super-nuanced, and it is a different context. On the open-source end-to-end encryption product that I like to use, there is no directory where you can find 14 year-olds. There is no directory where you can find the Uighur community in Bangkok. On Facebook, it is trivially easy to access vulnerable populations, and there are nation state actors that are doing that.

I want to be clear: I am not against end-to-end encryption in Messenger, but I believe the public have a right to know what that even means. Is it really going to produce end-to-end encryption? If it says that it is doing end-to-end encryption and it does not really do that, people’s lives are in danger. I personally do not trust Facebook currently to tell the truth, and I am scared that it is waving its hands at a situation where it is concerned about various issues and it does not want to see the dangers any more. I am concerned about it misconstruing the product that it builds, and it needs regulatory oversight for that. That is my position on end-to-end encryption.
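
[Editorial illustration: a minimal sketch of end-to-end encryption as defined above, using the open-source PyNaCl library (libsodium bindings): a message is encrypted on one device and can only be decrypted on the other. It illustrates the general technique, not Facebook’s planned design.]

```python
# End-to-end encryption: encrypt on one device, decrypt only on the other.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")   # opaque to any server in between

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext)  # b'meet at 6pm'
```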

Lord Knight of Weymouth: To be really clear, there is a really important use case for end-to-end encryption in messaging, but if you ended up integrating some of the other things that you can do on Facebook with end-to-end encryption, you could create quite a dangerous place for certain vulnerable groups.

Frances Haugen: I think there are two sides. I want to be super-clear: I support access to end-to-end encryption, and I use open-source end-to-end encryption every day. My social support network is currently on an open-source end-to-end encryption service. But I am concerned, on one side, that the constellation of factors around Facebook, such as access to the directory and the amplification settings, makes public oversight of how it does end-to-end encryption there even more necessary.

The second thing is security. People might think that they are using an end-to-end encryption product, but Facebook’s interpretation of that is different from what an open-source product would do. We can all look at an open-source product and make sure that what it says on the label is in the can. But if Facebook claims that it has built an end-to-end encryption thing and there are vulnerabilities, people’s lives are on the line. That is what I am concerned about. We need public oversight of anything that Facebook does on end-to-end encryption, because it is making people feel safe when they might be in danger.

Lord Knight of Weymouth: Thank you very much.

Q188       Debbie Abrahams: I have a quick follow-up. Are you aware of any Facebook analysis of the human cost of misinformation—for example, claims that Covid is a hoax, or anti-vax misinformation? Has it done anything? Has it actually tried to quantify the human cost in terms of illness and deaths?

Frances Haugen: Facebook has done many studies. The misinformation burden is not shared evenly. The people most exposed to misinformation are recently widowed, are recently divorced or have moved to a new city. When you put people into those rabbit holes, when you pull people from mainstream beliefs into extreme beliefs, it cuts them off from their communities. If I began to believe in flat earth things and I have friends that are flat earthers, it makes it hard for me to reintegrate into my family. In the United States, the metaphor we often use is: is Thanksgiving dinner ruined? Did your relative consume too much misinformation on Facebook? That is what Thanksgiving dinner has now become.

You can look at the social cost and the health cost. I will give you an example. Facebook underenforces on comments. Because comments are so much shorter, they are really hard for AI to figure out. Right now, groups like UNICEF have really struggled even with the free ad credits that Facebook has given them, because they promote positive information about the vaccine or about ways to cure yourself of Covid, and they get piled on in the comments. Facebook’s own documents talk about UNICEF saying, “How much more impact would those ad dollars have had if they had not been buried in toxic content?”

Debbie Abrahams: Thank you so much.

Q189       Dean Russell: I want to build on a comment you made earlier, if that is okay. You mentioned the idea of skills parties, if I heard it right. How do the platforms evolve when it comes to language? Obviously, we have talked about different types of languages, but within English, for example, there are huge differences between American English and English English, if you can call it that—British English. There is also the slang that is used.

When you mentioned that, it occurred to me that a few weeks ago someone on Facebook said they had met me in a pub previously and wished they had given me a Glasgow hug, which it turns out is worse than a Glasgow kiss. It actually means to stab me. It was reported initially to Facebook—I only found out about it afterwards—and Facebook said that it did not break any of its rules. I believe someone else then reported it, and a few others did too, and it eventually got taken down, either by the page it was on or by Facebook. In that time, someone else had posted that they had met me during an election campaign and wished they had given me a Glasgow hug as well; in other words, two people who publicly said they wanted to stab me were on the platform.

When it was reported and clarified to Facebook that a Glasgow hug meant to stab me, separate from a Glasgow kiss, which I think is a headbutt and is just as awful, do you know whether Facebook would learn from that? Would it know the next time somebody says they want to give someone a Glasgow hug that they mean to stab them, or would that just be lost in the ether?

Frances Haugen: I think it is likely that it would get lost in the ether. Facebook is very cautious about how it evolves lists for things like hate speech or threats. I did not see it investing in the very high level of regionalisation that is necessary to do content-based interventions effectively.

I think there are interesting design questions if we as a community—government, academics, independent researchers—came together and said, “Let’s think about how Facebook could actually gather enough structured data to be able to get that case right”, or the case of the insult to the other Member. If it took a strategy that was closer to what Google has done historically, it would likely have a substantially safer product.

Google has committed to being available in, I think, 5,000 languages. How do you make Google’s interfaces and the help content available in basically all the major languages in the world? It did that by investing in a community programme where it said, “We need the help of the community to make this accessible”. If Facebook invested the time and effort and collaborations with academia or with other researchers and the Government to figure out collaborative strategies for producing the structured data, we would have a way safer Facebook.

I do not support its current integrity strategies. Content-based solutions are not great, but they could be so much better than they are today. Let us make the platform safer, and if you want to continue using AI, let us figure out how to do it in a way that protects you in Scottish English, not just in American English.
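
[Editorial illustration: a minimal sketch of the regionalised term lists Haugen says are missing, showing how the same phrase can be benign in one locale and a threat in another. The lexicon below is invented, not Facebook’s.]

```python
# Regionalised threat lexicon: a per-locale dictionary of slang phrases.
REGIONAL_THREAT_TERMS = {
    "en-scotland": {"glasgow kiss": "headbutt", "glasgow hug": "stabbing"},
    "en-us": {},   # neither phrase is flagged by a US-English lexicon
}

def flags_threat(text: str, locale: str) -> list[str]:
    terms = REGIONAL_THREAT_TERMS.get(locale, {})
    lowered = text.lower()
    return [term for term in terms if term in lowered]

post = "Wish I'd given him a Glasgow hug in that pub"
print(flags_threat(post, "en-us"))        # [] - missed without regionalisation
print(flags_threat(post, "en-scotland"))  # ['glasgow hug']
```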

Q190       Dean Russell: I have a further question on that, because for me that builds into the question of the evolution of these platforms and the learning they have, but also future-proofing the Bill against future changes.

At the moment, we talk a lot about Facebook, Google, Twitter and all those things being looked at on a screen, but increasingly they are working in the realms of virtual reality—there is also Oculus Rift—which will enable more user-generated content and engagement. Do you know whether any of the principles of safety and reducing harm that we are talking about here are being discussed for those future innovations? My concern, to be honest, is that we will get this Bill right and then the world will shift to a different type of use of platforms and we will not have covered the bases properly.

Frances Haugen: I am actually a little excited about augmented reality, because often augmented reality attempts to recreate interactions that exist in physical reality. In this room, we have maybe 40 people in total, and the interactions that we have socially are on a human scale. Most augmented reality experiences that I have seen have been more about trying to recreate the dynamics of an individual: either games they play or communications with one person or maybe a handful of people. Those systems have a very different consequence from the hyper-amplification systems that Facebook has built today.

The danger with Facebook is not individuals saying bad things. It is systems of amplification that disproportionately give people saying extreme polarising things the largest megaphone in the room. I agree with you that we have to be careful about thinking about future-proofing, but there are the mechanisms that we talked about earlier such as the idea of having risk assessments, and risk assessments that are not just produced by the company but are the regulator gathering from the community and saying, “Are there other things we should be concerned about?”

A tandem approach like that, which requires companies to articulate their solutions, is flexible. That might work for quite a long time, but it has to be mandatory and there have to be certain quality bars, because if Facebook can phone it in, I guarantee it will.

Dean Russell: Thank you.

Q191       The Chair: I have a couple of final questions. In the evidence session last week, based on his experience at YouTube, Guillaume Chaslot said that the way algorithmic recommendation works is that it is not just there to give you more of what you want; it is there to discover which rabbit hole you should be pushed into. Do you think that is a fair characterisation?

Frances Haugen: There is a difference between the intended goals of a system—Facebook has said, “We never intended to make a system that amplifies extreme polarising content”—and the consequences of a system. All recommender systems intend to give you content that you will enjoy because, as Facebook has said, that will keep you on the platform longer. The reality is that algorithmic systems, AI systems, are very complicated and we are bad at assessing or foreseeing their consequences, but they are very attractive to use because they keep you on the site longer. If we went back to chronological ranking, I bet you would view 20% less content every day, but you might enjoy it more; it would be more from your friends and family.
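
[Editorial illustration: a minimal sketch contrasting engagement-based ranking with chronological ranking, as described above. The posts and predicted engagement scores are invented.]

```python
# Two ways to order the same feed: by predicted engagement or by recency.
posts = [
    {"id": 1, "ts": 100, "predicted_engagement": 0.20, "from_friend": True},
    {"id": 2, "ts": 105, "predicted_engagement": 0.90, "from_friend": False},  # outrage bait
    {"id": 3, "ts": 110, "predicted_engagement": 0.35, "from_friend": True},
]

engagement_feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
chronological_feed = sorted(posts, key=lambda p: p["ts"], reverse=True)

print([p["id"] for p in engagement_feed])     # [2, 3, 1] - high-engagement content rises
print([p["id"] for p in chronological_feed])  # [3, 2, 1] - most recent first
```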

As to the question on rabbit holes, I do not think it intended to have you go down rabbit holes. I do not think it intended to force people into these bubbles, although it has made choices that have unintended side-effects. I will give you an example. Autoplay on YouTube is super-dangerous. Instead of you choosing what you want to engage with, Autoplay chooses for you, and it keeps you in a stream, a flow, where it just keeps you going. There is no conscious action of continuing, of picking things, or of deciding whether or not to stop. That is where the rabbit holes come from.

The Chair: The equivalent on Facebook, from what you said earlier, is that someone signs you up to a group without your consent that is focused on anti-vax conspiracy theories and Covid.

Frances Haugen: Yes.

The Chair: You engage with one of the postings that you see in your news feed, although you did not ask for it. You are now automatically a member of the group. Probably it is not just that you get more content from that group; because it is quite an interesting deviation from anything you have done before, the whole system will probably give you more of that kind of content. That is what I meant about the system recognising an interesting new line of inquiry from a user and then piling in on that with more stuff.

Frances Haugen: I think that is what is so scary. There has been some reporting on a story about a test user. Facebook has said that it takes two to tango. Nick Clegg wrote a post back in March saying, “Don’t blame us for the extreme content you see on Facebook. You chose your friends. You chose your interests. It takes two to tango”. When you make a brand-new account and you follow some extreme interests—for example, Fox News and Trump and Melania—it will lead you very rapidly to QAnon. It will lead you very rapidly to white genocide content. But this is not just true on the right; it is true on the left as well. These systems will lead to amplification and division. I think you are right: the system wants to find the content that will make you engage more, and that is extreme, polarising content.

The Chair: To claim as Nick Clegg did in that situation that it takes two to tango makes it seem like it is your fault that you are seeing all this stuff. It is, again, a massive misrepresentation of the way the company actually works.

Frances Haugen: Facebook is very good at dancing with data. It has very good communicators, and the reality is that the business model is leading them to dangerous actions.

The Chair: If it takes two to tango, the other party is Facebook, not another user, actually.

Frances Haugen: Yes.

Q192       The Chair: On fake accounts, we heard from Sophie Zhang last week about her work identifying networks of inauthentic activity. Based on your work at Facebook, how big a problem do you think that is in things like civic integrity around elections? Sophie’s evidence was that we are talking networks of hundreds of thousands of accounts that have been taken down. How much of a problem is it in your area of work?

Frances Haugen: I am extremely worried about fake accounts. I want to give you some context on a taxonomy of fake accounts. There are bots. These are automated. Facebook is reasonably good at detecting bots. It is not perfect, but it is reasonably good. Then there are things called manually driven fake accounts. For example, there are cottage industries in certain pockets of the world. There are some parts of Pakistan and Africa where people have realised that you can pay a child a dollar to play with an account like a 12 year-old would, or to pretend to be a 35 year-old, for a month. During that time, the account will pass Facebook’s window of scrutiny and look like a real human, because it is being run by a real human. That account can then be resold to someone else because it now looks like a real human account.

Back when I left in May, there were approximately 800,000 Facebook connectivity accounts. These are accounts where Facebook is subsidising your internet. Among those, 100,000 manually driven fake accounts were discovered by a colleague of mine. They were being used for some of the worst offences on the platform. I think there is a huge problem with Facebook’s level of investment in detecting these accounts and preventing them from spreading harm on the platform.

The Chair: How confident are you that the number of active users on Facebook is accurate, that those people are real people?

Frances Haugen: There are interesting things about the general numbers. As we talked about before, this is about the distribution of things. On social networks, things are not necessarily evenly allocated. Facebook has published a number: it believes that 11% of its accounts are not people but duplicates. Among new accounts, it believes that the number is closer to 60%, but that has never been disclosed in a public statement, to my awareness. There is this question: if investors are valuing the company based on a certain number of new accounts every month, and 60% of those are not actually new people, the value of the company is being overinflated.
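
[Editorial illustration: back-of-envelope arithmetic for the new-account point above, using the roughly 60% duplicate rate quoted in the testimony; the monthly new-account total is invented.]

```python
# How a high duplicate rate among new accounts overstates real user growth.
reported_new_accounts = 10_000_000      # hypothetical monthly figure
duplicate_rate_new = 0.60               # share of new accounts that are not new people

genuinely_new_people = reported_new_accounts * (1 - duplicate_rate_new)
print(f"Reported: {reported_new_accounts:,}  Actually new people: {genuinely_new_people:,.0f}")
# If growth is valued on the reported number, that growth is overstated.
```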

The Chair: If those audiences are being sold to advertisers as real people, when you have reason to believe that many of them are probably fake, that is fraudulent.

Frances Haugen: There is a problem known as SUMAs, which are same-user multiple accounts. I found documentation about reach and frequency advertising. Let us say you are targeting a very specific population. Maybe they are highly affluent and slightly quirky individuals, and you are going to sell them some very specific product. Facebook is amazing for those niches. Maybe there are only 100,000 people in the United States you want to reach, but you can get all of them.

Facebook has put in controls called reach and frequency advertising. You say, “I don’t want to reach someone more than seven times”, or maybe 10 times, because the 30th impression is not very effective. Facebook’s internal research says that the reach and frequency advertising systems were not accurate, because they did not take into consideration those same-user multiple account effects. That is definitely Facebook overcharging people for its product.
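
[Editorial illustration: a minimal sketch of the SUMA (same-user multiple accounts) problem with frequency capping described above: a cap applied per account undercounts impressions per person. All data is invented.]

```python
# Frequency capping per account vs the real number of impressions per person.
from collections import Counter

FREQUENCY_CAP = 7  # advertiser asks not to reach anyone more than 7 times

# Which real person is behind each account (unknown to the capping system).
account_owner = {"acct_1": "alice", "acct_2": "alice", "acct_3": "bob"}

# Impressions delivered, capped per *account*.
impressions_per_account = {"acct_1": 7, "acct_2": 7, "acct_3": 5}

per_person = Counter()
for acct, shown in impressions_per_account.items():
    per_person[account_owner[acct]] += shown

for person, shown in per_person.items():
    if shown > FREQUENCY_CAP:
        print(f"{person} was reached {shown} times, above the cap of {FREQUENCY_CAP}")
# alice was reached 14 times - the advertiser paid for impressions past the cap they asked for
```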

The Chair: That presumably works in the same way in Instagram as well.

Frances Haugen: I am sure it does.

The Chair: Concerns are being raised in particular about young people encouraged to have duplicate accounts on Instagram. Do you share that concern from a safety point of view?

Frances Haugen: I was present for multiple conversations during my time in civic integrity where they discussed this. On Facebook, the real-names policy and the authenticity policies are security features. On Instagram, because it does not have that same contract, many accounts that would have been taken down on Facebook for co-ordinated behaviour and other things were harder to take down, because they did not have to be authentic. In the case of teenagers, encouraging them to make private accounts so that their parents cannot understand what is happening in their lives is really dangerous, and there should be more family-centric integrity interventions that think about the family as an ecosystem.

The Chair: Yes, because, as you say, a young person engaging with harmful content—problematic content—would probably do it using a different account, while their parents see the one that they think they should have. Do you think that policy needs to change? Do you think the system can be made to work on Instagram as it does today?

Frances Haugen: I do not think I know enough about Instagram’s behaviour in that way to give a good opinion.

The Chair: But as a kind of concerned citizen who has worked in technology?

Frances Haugen: I strongly believe that Facebook is not transparent enough today and that it is difficult for us to figure out the right thing to do, because we are not told accurate information about how the system itself works, and that is unacceptable.

The Chair: I think we would agree with that. That is a good summation of a lot of what we have been talking about this afternoon. That concludes the questions from the committee, so we would just like to thank you for your evidence and for taking the trouble to visit us here in Westminster.

Frances Haugen: Thank you so much for that.