HoC 85mm(Green).tif

Digital, Culture, Media and Sport International Grand Committee

Oral evidence: Disinformation and fake news, HC 363

Tuesday 27 November 2018

Ordered by the House of Commons to be published on 27 November 2018.

Watch the meeting

 

Members present: Damian Collins (Chair); Clive Efford; Julie Elliott; Paul Farrelly; Simon Hart; Julian Knight; Ian C. Lucas; Brendan O'Hara; Rebecca Pow; Jo Stevens; Giles Watling.

Members from overseas Parliaments present: Leopoldo Moreau (Argentina), Nele Lijnen (Belgium), Alessandro Molon (Brazil), Bob Zimmer (Canada), Nathaniel Erskine-Smith (Canada), Charlie Angus (Canada), Catherine Morin-Desailly (France), Hildegarde Naughton (Republic of Ireland), Eamon Ryan (Republic of Ireland), Dr Inese Lībiņa-Egnere (Latvia), Pritam Singh (Singapore), Edwin Tong (Singapore), and Sun Xueling (Singapore).

 

Questions 4131 - 4273

Witness

I: Richard Allan, Vice President of Policy Solutions, Facebook.

 

 

 


Examination of witness

Witness: Richard Allan.

Q4131  Chair: Good morning. I call this meeting of the Select Committee on Digital, Culture, Media and Sport to order. We are meeting with our international guests in this Grand Committee. We welcome Members of the Parliaments of Argentina, Canada, France, Singapore, Ireland, Belgium, Brazil and Latvia.

This is the first time that multiple Committees have joined together for a hearing in the House of Commons since 1933, when the Joint Committee on Indian Constitutional Reform invited parliamentarians from India to join the Committee. We can therefore say that this is the first time that parliamentarians have gathered from such a number of international jurisdictions in one place. The fact that this meeting is taking place with representatives from right across the globe shows how seriously we and our colleagues in other Parliaments take these issues.

I welcome Lord Allan from Facebook, who is giving evidence to us today. As you can see from the name plate beside you, we were still rather hoping that your boss might make a surprise appearance. Nevertheless, we are grateful to have you with us.

I call Charlie Angus to ask the first questions.

Charlie Angus: Thank you, Chair, and thank you, Mr Allan. I would say at the outset how deeply disappointed we are about Mark Zuckerberg’s decision to ignore the summons from so many different nations. This is an unprecedented situation that we are dealing with—this many parliamentarians and legislators.

As my first question, I would like to ask you about the corporate decision of Facebook to blow off this meeting with Mark Zuckerberg. How was that arrived at? Who gave Mr Zuckerberg the advice to ignore this Committee?

Richard Allan: I wouldn’t characterise the decision as a blowing off. It was a decision that we took to try to understand—

Charlie Angus: You say it is not blowing off and you are trying to understand. We are trying to understand who advised Mr Zuckerberg. Was that his decision or did Facebook say, to protect Mr Zuckerberg, “Stay away from this meeting?”

Richard Allan: I will take responsibility for decision making around appearances in front of committees. For the record, this year we have appeared in front of numerous committees of different institutions, including Mr Zuckerberg himself appearing before two congressional committees and the European Parliament, and—there are other representatives from different Parliaments here—for example, in Ireland, and we have appeared in front of committees in Canada. Our intent is to be there, to answer the questions that you have for us.

Charlie Angus: But not Mr Zuckerberg?

Richard Allan: For some appearances, Mr Zuckerberg has made himself available, but not for every appearance, and apologies for that.

Charlie Angus: I guess for me, being from Canada, we see ourselves as in the Westminster tradition, which is the heart of the democratic system to us in the west. The Westminster tradition has seen many threats, bumps and bruises over the centuries but we have never seen anything quite like Facebook where, while we have been playing on our phones and apps, our democratic institutions and our civil conversation seem to have been upended by frat boy billionaires from California. So Mr Zuckerberg’s decision not to appear here at Westminster to me speaks volumes, particularly since this has been the subject of an investigation into whether Facebook apps upended one of the most important votes in British history. When Mr Zuckerberg says that the plan was to move fast and break things, and that breaking may have involved our democratic institutions, does he not believe or not think that parliamentarians around the world are willing to push back?

Richard Allan: Just to be clear, Mr Zuckerberg has gone on the record to say that addressing some of the valid issues you have raised around the impact of connectivity in general, and of Facebook in particular, on elections of the democratic process is a top priority for the company. He personally has been on the record discussing the kind of solutions we need and he is directing the work. Indeed, as we sit here today there are reviews of a Facebook engineering product going on that he is leading and which will help here.

Charlie Angus: Let’s just go into what he is on the record on. We understand that in response to this unprecedented bad corporate publicity—I think you would admit that we have never seen a corporation under the spotlight like this—Mr Zuckerberg hired the Republican party-linked Definers. We learned from that that one of the solutions in response to this bad press was to try to link George Soros—there were a lot of issues around Soros and antisemitic conspiracy theory—and then to suggest that attacking Facebook was somehow in itself antisemitic. Don’t you see that by using the same tactics of misinformation, rather than being accountable, you have lost public trust and cannot be trusted to police yourselves? Has not Mr Zuckerberg put your company in that position?

Richard Allan: I’m not going to disagree with you that we have damaged public trust with some of the actions we have taken. Again, just for the record, on the issue of the hiring of this outside firm, Mr Zuckerberg himself said that that was not what he expected of us. He was not personally responsible for the hiring. Elliot Schrage, who runs policy and comms at Facebook, has publicly stated that he was responsible and takes responsibility for it. Mr Zuckerberg has given instructions for us now, as a team, to look very carefully at all the relationships we have with different external agencies to make sure that we behave—

Charlie Angus: That’s wonderful. I’m sorry to cut you off but I have to be respectful of my colleagues. It’s this thing that Facebook is always sorry, they are always on a journey. We asked the head of Facebook Canada about the allegations of genocide in Myanmar, where the international community spoke out again and again and again about the mass killings that were happening. The Facebook rep for Canada said, “We admit we’re not perfect. We’re on a journey.” We are not asking you to be perfect; we are asking your company to be accountable when issues come up, such as genocide, such as misinformation. Once again, “Mr Zuckerberg is looking into this.” We don’t know that Mr Zuckerberg is looking into this because he has refused to show up to speak to parliamentarians from around the world. I put it to you that you have lost the trust of the international community to self-police and that we have to start looking at a method of holding you and your company to account, because Mr Zuckerberg, who is not here, does not appear willing to do the job himself.

Richard Allan: Again, I am going to agree with you. One of the areas that I am working on right now is precisely to understand the kind of regulatory framework that is in everyone’s interest. We have accepted, and Mr Zuckerberg has said himself that we accept, that this requires a regulatory framework and action by responsible companies like ours. It is the two in tandem, and as we go on to discuss false news and elections, I think the regulatory piece is going to be a really important part of that.

Chair: I don’t think it is up to Facebook to determine what regulatory structure it should be under. It should be up to Parliaments to determine that and that is why we here. I call Ian Lucas.

Q4132  Ian C. Lucas: Can you confirm that Facebook first learned of the GSR/Cambridge Analytica data incident from the press in December 2015?

Richard Allan: I can confirm that is when I first learned about that incident, yes, and I think that is where people generally in the company who were following these issues would learn about it—from the press.

Q4133  Ian C. Lucas: When did Mark Zuckerberg first know of the GSR/Cambridge Analytica incident?

Richard Allan: Later, when it became—

Q4134  Ian C. Lucas: Do you know the precise date?

Richard Allan: I don’t know the precise date, but what I do know is—

Q4135  Ian C. Lucas: Have you asked Mr Zuckerberg when that precise date was?

Richard Allan: My understanding, and we have given responses to your Committee, was that this was the more recent round—

Q4136  Ian C. Lucas: No, I’m sorry, Lord Allan. You haven't given responses to this Committee. I asked Mike Schroepfer in April this precise question and he said he didn’t know. He told me the buck stopped with Mark Zuckerberg. I wanted Mr Zuckerberg to come to answer this question. You have had six months’ notice of this question and I would have expected you to have been able to answer it today, so I am going to ask it again. When did Mark Zuckerberg first know, precisely, of the GSR/Cambridge Analytica incident?

Richard Allan: We have, with respect, provided written answers to your Committee following Mike Schroepfer’s session. My understanding from those written answers—I will double check—is that it was March 2018, when the second round of stories occurred, that he was made aware of this situation. There were others of us who were closer to what was happening in the United Kingdom who had read the original Guardian stories by Harry Davies in December 2015.

Q4137  Ian C. Lucas: Can you give me an example of a data incident or breach other than the GSR/Cambridge Analytica case where Facebook has taken action?

Richard Allan: We have taken action against a number of applications that failed to meet our policies. Those policies cover a range of issues, both the behaviour of the application and their use of data.

Q4138  Ian C. Lucas: Can you name one case?

Richard Allan: I will come back to you on that, if I may. There have been other applications that we have disabled.

Q4139  Ian C. Lucas: Can I quote to you Mark Zuckerberg’s evidence to the US Senate earlier this year? He was asked this question then and he said, “I don’t have all the examples of apps that we have banned here, but if you would like, I can have my team follow up after this.” He gave the Senate the same answer you are giving me now.

Richard Allan: And again, for the record, we have answered thousands of questions that came from different parts of the United States Congress.

Q4140  Ian C. Lucas: Lord Allan, that is a pretty important question. Did he supply a list? Presumably, Mark Zuckerberg respects the US Senate.

Richard Allan: Absolutely.

Q4141  Ian C. Lucas: Did he supply a list?

Richard Allan: I don’t have in front of me today all of the answers to all of the questions.

Q4142  Ian C. Lucas: Did he supply a list? Let me answer that. He didn’t supply a list and we still do not have the details of any company that was banned by Facebook on that basis. Do you know who Joseph Chancellor is?

Richard Allan: Yes.

Q4143  Ian C. Lucas: You know that he was employed by Facebook.

Richard Allan: Yes.

Q4144  Ian C. Lucas: Why was he employed by Facebook?

Richard Allan: Mr Chancellor, as I understand it, is somebody who had a track record as an academic working on relevant areas.

Q4145  Ian C. Lucas: You know that he was a co-founder of GSR.

Richard Allan: Yes.

Q4146  Ian C. Lucas: So GSR, in December 2015, was the source of the breach to Cambridge Analytica, and Joseph Chancellor was an employee of the company at that time—of Facebook.

Richard Allan: Yes.

Q4147  Ian C. Lucas: So what action did you take against Joseph Chancellor at that time?

Richard Allan: We have not taken action against Joseph Chancellor.

Q4148  Ian C. Lucas: But this was an extremely serious incident, wasn’t it?

Richard Allan: Again, the incident related to GSR occurred before Mr Chancellor’s employment with Facebook.

Q4149  Ian C. Lucas: Yes, but you have said some pretty negative things about Aleksandr Kogan, who was the partner of Joseph Chancellor, but you didn’t take any steps at all against Joseph Chancellor.

Richard Allan: I am not aware of us taking any steps against Mr Chancellor.

Q4150  Ian C. Lucas: Well, you didn’t because he was employed until earlier this year by Facebook. Isn't it the case that you don’t actually take steps against app developers when they pass on information?

Richard Allan: That is not true. Let me be very clear about how the system works. We have a social network. The social network has the data of individuals on it. The individual chooses when they wish to install a third-party application. Our expectation is that all third-party applications that access Facebook data have their own privacy policies and comply with privacy law just as we are required to do, and that they behave in a reputable way. If any information leads us to believe that that is not the case, we will prevent access to our platform by those applications.

Q4151  Ian C. Lucas: You still haven’t given me an example of a single case where you have done that.

Richard Allan: I will check the records, Mr Lucas, and come back to you.

Q4152  Ian C. Lucas: But the fact is that there aren’t any of these cases. We have asked repeatedly for them. May I just quote you a section of evidence that Aleksandr Kogan gave to us? The Chair asked him a question about Joseph Chancellor; he asked whether it wasn’t a bit odd that Joseph Chancellor was being employed by Facebook. Aleksandr Kogan did not think it was odd. He said, “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things, because I think they realise that the platform has been mined left and right by thousands of others.” Isn’t that the truth?

Richard Allan: Again, I don’t accept that characterisation. Our terms are quite clear for our expectations of third-party developers. If, as was the case with GSR, they breach those terms, they potentially find themselves in trouble not just with us—

Q4153  Ian C. Lucas: But you still have not given me an example.

Richard Allan: I will come back to you, Mr Lucas, with an example. I do not want to defame—

Q4154  Ian C. Lucas: It is incredible. You haven’t given me an example.

Richard Allan: I will come back to you.

Q4155  Ian C. Lucas: We have had repeated hearings. You are the third Facebook person to come to our Committee, and you still cannot give me one example of Facebook banning someone for sharing data.

Richard Allan: I will come back to you with an example.

Q4156  Ian C. Lucas: There aren’t any examples.

Richard Allan: We have banned very many apps over the years for a variety of forms of abuse.

Q4157  Ian C. Lucas: Why when Congress asked you for a list did you not provide one?

Richard Allan: I have said I will check whether or not we did provide that. I know that we provided answers to thousands of questions to Congress.

Q4158  Ian C. Lucas: Isn’t the truth in all this that you knew exactly what was going on, because the whole model is about sharing information? You knew that app developers were sharing information and the only time you ever took action is when you were found out.

Richard Allan: Again, to be very clear about the app model, it is something that is intended to add value for Facebook, for the developer, and for the user. The user gets a feature that we are not going to provide to them. If the feature is useful they will install the application and choose to share their data with the application—so, for example, they share their photos so that they can adjust them and make them more interesting and fun, and they will share them back to Facebook. That is the normal behaviour of a Facebook application.

Q4159  Ian C. Lucas: You want access to info from the developers, and you get access to that information from them. That is why Facebook has developed as fast as it has.

Richard Allan: No, the intention and the business model—it is a business—is a win-win-win. The developer gets to build a business much more easily than they could otherwise do, and lots of people have built great businesses on Facebook, including from the countries represented here. Instead of having to build their own social network of hundreds of millions of people, they can build their application and get it out there. That is great for the developer. It is great for the Facebook user, who gets additional features that they otherwise wouldn’t have. And yes, it is good for us because, as in my example, if someone has created a new fun photo and they share it back to Facebook, more people will engage with that photo, and we get more activity on the platform.

Q4160  Ian C. Lucas: Lord Allan, have you seen “Casablanca”?

Richard Allan: I have seen “Casablanca”.

Q4161  Ian C. Lucas: You know what this reminds me of? There is a wonderful scene involving Claude Rains, when the police are called to Rick’s café because gambling is taking place; he closes the café because gambling is taking place, and he is then delivered his winnings. I think Facebook has known throughout this exactly what was going on in terms of passing information. I think Facebook has told us again and again a tissue of lies about the way your company operates.

Richard Allan: I would contest that.

Q4162  Ian C. Lucas: Give me one example of a business that you have banned because they have shared information.

Richard Allan: Beyond GSR?             

Q4163  Ian C. Lucas: Beyond GSR.

Richard Allan: I will come back to you. I don’t want to mention the name of a business that then turns out to be inaccurate. I will come back to you.

Q4164  Chair: This Committee and colleagues have asked repeated questions about Russian activity on Facebook, and in particular about knowledge of the Russian ads that ran during the presidential election in America in 2016. When we asked those questions, why didn’t Facebook disclose that as a company it knew those ads were being placed and run before it was reported to the US Senate?

Richard Allan: I am trying to understand—

Q4165  Chair: When we took evidence in February from Simon Milner, we asked him why they didn’t look for activity of Russian ads and why they didn’t report it, and he said it was something the company didn’t know it had to look for. But we know from a recent investigation by The New York Times that Facebook as a company was aware much, much earlier that these ads were being placed and run. They didn’t report it to anyone at the time, and during the period when we were asking questions about it, no one disclosed that the company was aware of this information earlier. The impression given was that it wasn’t until the request was made by the US Senate Intelligence Committee, and Facebook was then asked to look for this activity, that they knew it existed. The company had known a long time before that it existed, but never disclosed that fact. Why was that?

Richard Allan: Again, the sequence of events, as I understand it, is that we were aware there was a group, commonly known as APT28, that had been targeting people on the Facebook platform—actually, some time ago. They have also targeted people in other countries—in France, there was an incident. This is a group based in Russia—

Q4166  Chair: I don’t want to get too much into the history of it. This is a simple question. Can you confirm that Facebook as a company knew about this Russian activity in terms of ad buying before it was reported by the US Senate?

Richard Allan: Again, there are two different pieces here. One is this group trying to target political accounts: classic hacking mode. The second part of it is information that we received to say that people thought that some operatives from this group were involved in some American political pages, and that those pages were buying ads. That is what subsequently came out. So there were two different activities carried out by the same group. The one that we were aware of earliest was attempts to hack accounts, so we became aware of this group; we understood something about their operations. We subsequently received information to suggest that the same group had created some of these pages and were using the pages and ads to sow civil discord.

Q4167  Chair: We see a consistent pattern of Facebook failing to disclose relevant information of considerable public interest when you are asked questions about things like Russian activity. This information about the company’s knowledge came into the public domain because of a separate investigation, not in the multiple hearings that the company has attended where it has been asked these questions. Can you not see that this has caused a massive breach of trust not just between Parliaments and the company, but with the many people who watch these deliberations as well? I’m not sure there is really an answer to that question. I think that that information should have been disclosed. Are there other incidents of Russian activity on Facebook in terms of accessing data that we should be aware of?

Richard Allan: Our policy at the moment is that once we have confirmed information about any attempts at interference, whoever that should be from, and once we have investigated and understand accurately what has occurred, we will publish it. You will know, if you have been following the activities of our security team, that over the last few months we have published several quite extensive reports describing attempts to create false properties on Facebook from both Russians and, more recently, from Iranian sources. In the United Kingdom, for example, there was an attempt by a group of Iranian operatives, we believe, to create false information on Facebook. We researched that, we found the evidence, we published it. So our position right now is that where we have information, we will make sure we verify it, but we will put it into the full public domain, obviously withholding any details that we need in order to be able to prevent further attacks.

Q4168  Chair: You will be aware that the DCMS Committee has received substantial documents from Six4Three, an app developer company in California. We don’t intend to publish those documents today—we are not in a position to do that—but there is one item within those documents that I think is of considerable public interest and I think relates to the point you have made. I want to put this to you. An engineer at Facebook notified the company in October 2014 that entities with Russian IP addresses had been using a Pinterest API key to pull over 3 billion data points a day through the ordered friends API. Was that reported to any external body at the time?

Richard Allan: Again, to set expectations around the emails, which I understand you have and we have exchanged correspondence over, there is, as I understand it, a partial set of information that was obtained by a hostile litigant who is repeatedly seeking to overturn the very changes to restrict access to data that I think you as a Committee and others would want to see.

Q4169  Chair: I don’t want you to use this opportunity just to attack the litigant, who is not in a position to speak for themselves because they have been told they cannot speak publicly about these matters. I want you to address the question, and if you do not have the answer to it, I would like Facebook to report back to the Committee to say what internal process it ran when this was reported to the company by an engineer, and whether it notified external agencies of this activity. If Russian IP addresses were pulling down a huge amount of data from the platform, was that reported or was that, as so often seems to be the case, just kept within the family and not talked about?

Richard Allan: The context I am giving is really to say that any information you have seen that is contained within that cache of emails is at best partial and at worst potentially misleading. On the specific question whether we believe, based on our subsequent investigations, there was activity by Russians at that time, I will come back to you, but I do want to set that context. Those emails are unverified, partial accounts from a source who has a particular angle.

Q4170  Chair: Okay, we will move on. The document I referred to is an email from a Facebook engineer. It is an internal company document, not just someone’s interpretation of those events, but we will move on. When Facebook changed its criteria for working with app developers in 2014-15, did it create a white list of developers that had privileged access to data and a list of other developers that did not?

Richard Allan: We continued, through those changes, to ensure that developers who had a particular requirement could have a soft landing. The intention of the change was to restrict access to data through the API. From a technical point of view, there was one version a developer could use to access data—one version of the code that allowed this broader access, and that is the problem that we found with the GSR application. There was a new version of the code, which narrowed the forms of access you could have, and we moved people from version 1 to version 2. There were some developers who needed additional time to move from version 1 to version 2, and we gave them that time where we thought that was justified. They are all now on version 2.

Q4171  Chair: So am I right in saying that there were certain developers that continued to have full access to Facebook data after the platform policy changes came into place?

Richard Allan: Again, I want to be very precise about “full access to Facebook data”. What that meant was that when somebody chose to install the application under version 1, they could choose to give the developer access to a broader range of information, including information from their friends. They continued to have—

Q4172  Chair: So are you contesting that the user knew? What you are saying is there were some categories of apps that had full access to data and others that did not. Yes or no?

Richard Allan: Well, I just want to qualify “full access to data”. That sounds like there is a fire hose of data you can access. The API was never like that. [Interruption.] No, no, developers never had that. What developers had under the first version was the ability to ask you to install their application, and if you agreed to it and agreed to certain permissions, they could also access some of the information that your friends shared with you. Version 2 stopped that. In neither version was it full access to data. In version 1 it included some access to friends’ data where they had given permission. In version 2 that access was removed.

Q4173  Chair: This is the last question from me. Did Facebook have a policy of reciprocity with developers—“You give us all your data and we’ll give you all ours”—for the actions on the platform?

Richard Allan: No, again, to be clear—

Q4174  Chair: No?

Richard Allan: That’s not the policy. Again, to understand—

Q4175  Chair: At that time—before 2014-15—did you have a policy of full reciprocity with developers for data?

Richard Allan: Again, part of our developer policy said that we had an expectation that the developer would allow people to share their content back to Facebook. To try to use an example to make it concrete, I could develop an application that said, “Christmas photo application. Take your Facebook photos; I will put Christmas decorations on them. That’s what my application does.” What reciprocity meant for us was that that developer should offer the user the ability to share the photo back to Facebook; otherwise, it is simply taking data out—extractive—and there is no value back to the Facebook community. In practice, we found that developers generally do that anyway, so there isn’t an issue. There was a period of time when our developer policy made it clear to developers that our expectation was that they would allow people to share the data back to the platform.

Q4176  Chair: Fine. I think we will bring in some other people at this point.

Catherine Morin-Desailly: Facebook is accountable for what has happened, which is a huge scandal. I wanted to say that to start with. Fortunately, world public opinion is increasingly aware of what is happening with platforms in general and social media platforms in particular—especially Facebook. I want you to repeat what Facebook’s decision on data and privacy controls, which led to the Cambridge Analytica scandal, was.

Richard Allan: Again, we have a Facebook platform that dates back to 2007. It allowed a developer to ask a user to install their application, and then they would get access to certain data. In the first version, that could include access to data that friends had shared with the installing user. The Cambridge Analytica scandal comes from the fact that a developer built an application called “This Is Your Digital Life”. The developer was Aleksandr Kogan with the firm GSR in Cambridge. They used that install method to collect the data of a large number of people. They did that, according to evidence heard by that committee, at the request of Cambridge Analytica, and they shared that data.

Catherine Morin-Desailly: Had Facebook users been warned that their data would be harvested like that?

Richard Allan: Yes. This is a matter—I think you are hearing from the Information Commissioner’s Office later—of legal discussion around whether or not the guidance we gave was sufficient. We believe that it was. When you sign up for Facebook, it said, “You may install applications.” There were controls where users could say—

Catherine Morin-Desailly: Do you think it is quite normal to target political accounts or users like that?

Richard Allan: To separate the issues, the activity that took place was abusive according to our terms. It shouldn’t have happened. I want to be very clear on that. The specific use of the data by Cambridge Analytica was abusive. We don’t believe that the fact that an application may get your data was either illegal or improper. We believe people were aware of it at the time.

Catherine Morin-Desailly: How can you account for the fact that shadow profiles were created? That meant that the data of people who were not Facebook users were harvested as well. Do you think it is quite normal to do that?

Richard Allan: We don’t do it. We don’t create shadow profiles.

Catherine Morin-Desailly: But it appears that there were shadow profiles, including medical data.

Richard Allan: That’s not true. From our point of view, we don’t create shadow profiles. People talk about it. If it is helpful, I want to be clear about what data we do have from non-users.

Catherine Morin-Desailly: But Mark Zuckerberg confirmed in front of Congress that there were shadow profiles.

Richard Allan: No, there are not shadow profiles. There is a certain amount of non-user data that sits on Facebook servers. It can come from two main sources. One is that if you upload your contacts, they sit on Facebook servers. We also collect data based on the IP addresses of people who visit sites with Facebook plug-ins. We don’t use that data to create shadow profiles or to target advertising but, yes, non-user data is sitting on Facebook servers because of the way our technology works.

Q4177  Chair: Eamon Ryan.

Eamon Ryan: I apologise; my computer went off. We are all online all the time, and it is important that we get the rules right. I was just going back to some of our joint Oireachtas hearings, when Mr Joel Kaplan from Facebook came over to answer some similar questions. I want to go back to this question. In 2011 and 2012, the Irish data protection commissioner looked, in an audit of Facebook, for an end to the possibility for a developer to access other people’s data. They said in their presentation that they were persisting in that and that they had considered taking a legal challenge, but had decided to continue with the iterative process for fear that a legal challenge would take too long to close the loophole. We didn’t get a really clear answer to why Facebook ignored and fought back against the data commissioner’s recommendation. I want to know—if you cannot tell us now, provide details afterwards—where was that policy decision made, at what level in the company and by whom? The same would apply for when you found out in 2015 about this huge and not insignificant problem—again, at what level in the company was it decided not to inform the Irish Data Protection Commission, and by whom?

Richard Allan: I was personally involved in a lot of those conversations, having been at the company back when the original complaints were made. The view at the time was that the Irish Data Protection Commission was giving us strong advice, but was not declaring the activity illegal. That allowed us to make a determination about whether we wanted to contest that, or whether we wanted to accept its strong advice—again, the advice is repeated and on the public record. There was a view that the platform was working well at the time, that people were getting benefit from it. Remember that at the time we had not had some of the scandals that we are talking about now, with abuse of platform data. The decision was taken, with the data protection team in Facebook Ireland and the broader company, to say, “Look, if we’re not compelled to make this change, we will choose not to make it at this stage.”

Eamon Ryan: I do not know what possible benefit can accrue to anyone having that access to other people’s information or data. No matter who the individual user is, there is a fundamental injustice in allowing such access to data. I simplify it down to this question, which you could say is how it should be looked at: was the decision made in Europe or in California?

Richard Allan: The decision was made ultimately by Facebook Ireland. They are the team responsible for our compliance with Irish data protection law and the Irish Data Protection Commission, so they ultimately have the decision-making power over anything that relates to the processing of data within that legal framework. Obviously, a wide range of people in the company are consulted about any of those decisions.

Eamon Ryan: You have now started to roll out in America and the UK, as I understand it, that there will be transparency around political online advertising. You have also agreed that you will provide something similar for the next European elections, although the conditions and timelines are not yet clear. I am interested in what happens if we have an election before then in any country outside the US or the UK, where these provisions are already in place. Why can we not have such transparency immediately? This has been the biggest burning political issue in the regulation of the internet in the last two years. Why is it that you have only started to provide such transparency in a limited number of jurisdictions? I would like to see it introduced in Ireland before Christmas.

Richard Allan: We have a team working on the deployment now. Again, the faster we can do it, the happier we will be, but I want to share some of the challenges around it. To those who are not as aware of the system, there are two key elements—three elements—to it. First, you can now go to any Facebook page and see the ads that are being run. That helps watchdogs. If you think a political party is running ads inappropriately, you can go to look and flag that. The second element is authorisation, which is where we check that, if you are an advertiser, you live in the country, and we ask you for some identity documents to check that you are who you say you are—know your customer. The third element is an ads archive. Now, if you run ads as an authorised advertiser, they go into the archive.

That is the system we are building. It is actually quite challenging, and doing the authorisations is difficult—understanding what constitutes the right kind of legal documentation in every country in the world, and doing that reliably. We have a roll-out plan and we are going as fast as we can. On the archives piece, for example, we found that one of the things that you need is to allow for who people are and who they are campaigning for, but there are no central databases. There is nothing for us to consult, so we built a team who are sitting there now—we are working on the UK implementation. We found that people try to game the system and put false information in, so we built up a team who are sitting there checking when someone puts information in whether they are really that political entity. This is hard, but we are doing it—we are committed to doing it but it is taking us some time.

Eamon Ryan: Mr Zuckerberg, in his presentations to I think the European Parliament and to Congress, indicated that the company Facebook would be applying the GDPR standards in its global business on the platform across the world. Is that the case?

Richard Allan: Yes. The system that we built we believe is GDPR compliant, and it is the system available everywhere.

Q4178  Jo Stevens: Lord Allan, can I take you back to November 2009, which I think is the date when Facebook users had what is called a central privacy page, and the Facebook text on that page says, “Control who can see your profile and personal information”. In November 2011, the US Federal Trade Commission, or FTC, made a complaint against Facebook on the basis that you allowed external app developers to access information about Facebook users’ personal profile and related information. Can you tell us when Facebook made changes to its own architecture to prevent developers from receiving that information, which actually circumvented Facebook users’ own privacy settings?

Richard Allan: The change in the OPI, as we have discussed, was 2014. I just want to be clear that my understanding of the FTC settlement—I was with the company through that period—is that the FTC objected to the idea that data may have been accessed from Facebook without consent and without permission. We were confident that the controls we implemented constituted consent and permission—others would contest that, but we believed we had controls in place that did that and that covered us for that period up to 2014.

Again, Mr Ryan mentioned this—why would anyone want this information? I just want to explain, and again you can disagree. The notion at the time was that something like a calendar with your friends’ birthdays on would be useful to you as a third party application. That required accessing the birthdays of your friends. The notion was that being able to have photo montages and collages and things with photos that your friends had shared with you and agreed to share with you would be helpful. Customs have changed over time, but I want to be clear that the idea behind this was not malicious; it was intended to add value.

Q4179  Jo Stevens: Am I right that if I had gone on Facebook and set my own bespoke and customised privacy settings, it didn’t really matter, because Facebook just overrode them?

Richard Allan: No, you had a setting that said, “I do not want my data to be accessed by applications that have been installed by my friends”. That setting was there and you had the choice to say no if you didn’t want your data to be accessed—

Q4180  Jo Stevens: How did that work? How was I told, or how was somebody using Facebook actually told that?

Richard Allan: It was within the privacy settings and there were a series of check-boxes you could check.

Q4181  Jo Stevens: Okay. You said just a few minutes ago that there is non-Facebook user data sitting on Facebook’s servers.

Richard Allan: Yes.

Q4182  Jo Stevens: What do you use it for?

Richard Allan: There are two main purposes. One is that if an individual has uploaded their contact list, it sits in their accounts; I have a list of the emails of my friends. When one of my friends joins the platform, the fact that we know that allows us to make the connection. And again, that is not unique to Facebook and is quite common practice across all services.

The second element is that our logs of IP addresses—internet addresses—are used mainly for security purposes. Mr Collins referred earlier to the idea of people pinging us from different IP addresses. Having a log, and understanding where people are coming to our service and trying to access data, helps us from a security point of view. That is the primary use of it.

Q4183  Jo Stevens: Do you make any money out of the non-Facebook user data that you hold?

Richard Allan: No. There’s no advertising; Facebook advertising is served to Facebook users.

Q4184  Jo Stevens: Okay. Can I go back to my starting questions again? After the FTC consent decree, which I think was in 2011, did Mark Zuckerberg know that Facebook continued to allow developers access to that information—after the agreement of the decree?

Richard Allan: Yes. He knew and all of us knew that the platform continued but, as I say, we did not see that decree as a prohibiting platform—platform is the technical word for this ability of developers to connect. As long as we had the correct controls in place, that was not seen as being anything that was inconsistent with the FTC consent order.

Q4185  Jo Stevens: And did you have correct controls in place?

Richard Allan: We believed we did, and again I think the FTC is investigating these issues and I am sure they will form their own view on whether or not that was correct. But we believed that our platform was entirely legal and proper, and consistent both with European data protection law, as exercised by the Irish Data Protection Commission, and with the FTC consent order.

Q4186  Jo Stevens: So you believed that it was subject to those laws? Was there any other regulation, or was that self-regulation you were applying in those circumstances?

Richard Allan: Those are the primary legal structures that govern us as a US company and as a company that is established in Europe—in Ireland. Unlike many other services, we accepted—again, I do not expect huge credit for this—that we had a duty to comply with European data protection law through that establishment in Ireland.

Jo Stevens: That is very good of you.

Richard Allan: I do not expect credit; it is just a technical point.

Q4187  Jo Stevens: I go back to my original point about people not understanding—or not knowing—that when they set their privacy settings, they were being overridden. How many of the 2.2 billion global users of Facebook have been affected by those privacy violations? I feel that it is a privacy violation—I did not know that my customised settings could be overridden.

Richard Allan: I do not believe that that is the case. The settings and the controls that you had would have governed the way that your data was used. You had controls on a whole range of things, from sharing with just third-party apps to a control for, “Hey, I don’t want any apps to be installed or to have access to any of my data”. So those controls were there. There are very valid questions about how well people understand the controls and whether they are too complex. One of the things that we have focused on recently is trying to simplify the controls.

Q4188  Jo Stevens: We are talking about 2009. We are nearly 10 years on. Now you are starting to simplify?

Richard Allan: There have been repeated attempts. The way that this tends to happen is, as the system becomes more complex, we add more and more controls. People feel it has got too complex, we simplify it, and then repeat. Over the years we have had various attempts to simplify the controls and make them really straightforward but, either because a regulator asked us to add an extra control or because the system needs an extra control, they become more complex over time, and we go through that cycle again. Our intention is that you should not be surprised by the way your data is used. Our intention is that it is clear and that you are not surprised. It is not a good outcome for us if you are.

Q4189  Clive Efford: Richard, you are a Member of the House of Lords and you said earlier on that you apologise for Mark Zuckerberg’s decision not to appear here. You took responsibility for that. How do you think that looks as a Member of Parliament?

Richard Allan: Not great. In my job, I have two roles. One role is as somebody who is very involved in the UK political scene and I respect Parliament and the people who have travelled here today for this hearing—I think it is important that we have this kind of engagement. I also have a role supporting my company as it tries to grapple with the issues we are talking about today and I understand, as we try to work out where senior officers of the company should be, that we need to balance this up. I am proud of the fact that we have answered thousands of questions—if not always to your satisfaction—and appeared as a team in front of many Committees around the world.

Q4190  Clive Efford: Let us move on. Can you give us an overview of your understanding of the founding philosophy of the Facebook platform?

Richard Allan: The core philosophy of the Facebook platform, and I hope I touched on this earlier, is that we have a very useful tool in this connectivity between over 2.2 billion people around the world—historically that was several hundred million—and that there are people who wish to build applications that add value for the users of that network with things that we are not going to do that they can. If the developers do those things, everybody wins out of that equation. That is really the core philosophy. We get more engagement on the platform—we are a business and we want engagement. We are not going to shy away from that. We provide a useful service but one where engagement matters. The developer does not have to build their own social graph, so they can get up and running much more quickly, and the user gets additional features.

Q4191  Clive Efford: What was the purpose of platform simplification PS12N?

Richard Allan: I do not recognise the term “PS12N”.

Q4192  Clive Efford: We were told that you are here to answer questions for Mark Zuckerberg, so how come you cannot answer that one?

Richard Allan: I do not recognise the term.

Q4193  Clive Efford: Platform simplification seems to have been a way of selecting which apps can access APIs on the Facebook platform. Can you tell me what a whitelisting agreement is?

Richard Allan: The way that the platform has evolved is that generally speaking, it is an open platform. People can build applications. As long as they meet the terms, they can access it. Some services are particularly valuable, and you can think of them—the other major name brands in internet services that might want to integrate with ours that provide music or video. I think it is right that we be more open to access from services that are significant name brands, adding a lot of value to our platform. Whitelisting, as I understand it, means that there are services that add particular value and have particular status.

Q4194  Clive Efford: Do you have any in-depth understanding of the criteria by which apps are given access to APIs under a whitelisting agreement?

Richard Allan: Again, I want to be clear: this is normal commercial activity that would take place on any internet service. People show up, meet your terms and conditions, sign up and access your platform. Then there are large entities, which are typically other large, reputable companies—the kinds of companies you might use yourselves. When they come to you, you have a conversation, and there may be particular terms for your arrangement with those companies. This is not unique to Facebook and not unusual, and the intent is to add value for the users. These companies provide services that are more valuable to our users than the run-of-the-mill services, and also have more infrastructure. For example, on issues such as data protection, they are companies that have significant data protection operations themselves and can be trusted to be more reliable with the data they hold.

Q4195  Clive Efford: Is one of the criteria for that sort of whitelisting agreement the ability to buy large quantities of mobile advertising through Facebook?

Richard Allan: No.

Q4196  Clive Efford: We have seen evidence that suggests that is the case, but as far as you are concerned it is not the case?

Richard Allan: No.

Q4197  Clive Efford: So when PS12N came into operation on 30 April 2015, no apps were shut down on the basis that they could not pay a large sum of money for mobile advertising?

Richard Allan: No. I will make sure we come back to you on that, but my understanding is that the platform continues exactly as it always has. Developers should be adding value to the platform—that is the primary criterion—and meeting all of our terms, including those around data. They may choose to buy advertising; they may choose not to buy advertising. That is their decision.

Q4198  Clive Efford: Has Facebook ever targeted a tech developer on its platform to close down its operations so that Facebook itself can move into that area and make money?

Richard Allan: No, not that I am aware of.

Q4199  Clive Efford: These decisions are made at a level within Facebook that it appears you do not operate at, which is why we need to speak to Mr Zuckerberg to get the answers to these questions. He seems to be the source of these decisions within Facebook. Do you know what the effect of privatising APIs in 2015 was?

Richard Allan: I want to be clear about what was happening at that stage. A transition from desktop to mobile was going on. I believe you are referring to documents that were obtained, as Mr Collins described earlier, which are partial documents selected deliberately to be hostile to us.

Q4200  Clive Efford: So you know what is in those documents?

Richard Allan: I am aware of the documents, yes. Again, I want to hopefully help the Committee by setting some context. In 2014 and 2015, there was a transition from desktop to mobile. That meant that every business—ours and everyone else’s—had to think about what the new business models were for this mobile environment. In the old desktop environment, if people remember it, there were games like “FarmVille” that could be installed in your web browser on Facebook, and we had a business model for working with those kinds of developers—Zynga was the name of the “FarmVille” developer.

In 2014-15, people started using mobile, and now the control of a lot of the monetisation was in the hands of the app stores, which decided how you could or could not interact with them. All companies, including Facebook, discussed at that time what the new business models would look like. I suspect you may have some partial records of some discussions of business models, but I do not think it is fair to draw definitive conclusions about what we did. What we did was carry on a model within which developers could access Facebook if they met all of our terms and conditions, and they were free to choose whether or not to buy our ads.

Q4201  Paul Farrelly: Before we move on and I follow up with a few questions along Clive’s lines, regarding Damian’s earlier question about the email from the engineer and the Russians and the 3 billion data points, did you undertake to provide us with an answer?

Richard Allan: I will provide an answer.

Q4202  Paul Farrelly: As to whether that was reported to any external agency or authority?

Richard Allan: I will look into the claim that was made and provide you with information about what we believe around that claim.

Q4203  Paul Farrelly: It has been a bit busy here with Brexit, so I have not been through all these documents either, by this so-called hostile litigant. Just so that I can understand, this firm—Six4Three—what is their beef with you? What were they doing and why did you shut them out?

Richard Allan: The irony in the context of the current discussion is that their beef originated from us precisely making the change that I think you all want us to make. Their application, as I understand it, depended on access to friends’ data. When we changed the API—the method of access—they lost access to the friends’ data so therefore their application no longer worked as they had intended and they began a succession of law suits against us. In trying to restore the access, they want us to reverse this, so that people can access friends’ data again.

Q4204  Paul Farrelly: What was their app doing?

Richard Allan: I was not a user. I understood it was installed 4,500 times and it promised to help you find photos of your friends wearing bikinis.

Q4205  Paul Farrelly: I saw that someone from Facebook called that “sleazy”.

Richard Allan: No, that was descriptions that were in the press at the time.

Q4206  Paul Farrelly: I find the descriptions ironic because of the way Facebook started at university.

Richard Allan: I don’t think it involved bikini photos, but I get your point.

Q4207  Paul Farrelly: I understand why you might want to pursue anti-SLAPP orders and protective orders in court so I am not going into the whys and wherefores, but it begs the question of what Facebook has to hide.

Richard Allan: If you want to have a discussion with us, as I have tried to do today, about the way in which decisions were made around platform, I think we can have that discussion without leaking all the internal conversations we have going backwards and forwards. In the same way in your businesses you will come up with a new political policy, there will be a lot of heated debate behind that and people will say things.

Q4208  Paul Farrelly: It is a relevant question to ask what Facebook has to hide.

Richard Allan: My answer is I don’t think that we have a duty to put into the public domain all our internal discussions around an issue. It is appropriate for us to respond on how we settle that issue—the decision we came to—but I don’t think it is reasonable that we should share, any more than you should, internal discussions that can be quite robust.

Q4209  Paul Farrelly: So you have nothing to hide?

Richard Allan: In terms of what we did, no. In terms of are we comfortable airing all those internal conversations, all the robust comments that people may have made in those conversations and have those treated as the company’s official position, I don’t think that is fair.

Q4210  Paul Farrelly: I have read a summary document that has been circulated and which is public information because it has been filed in court. It refers to the background to the FTC consent order in 2012. To what extent has Facebook complied with that order?

Richard Allan: We believe we have complied with it. We had to take a whole series of actions but, as you may be aware, the FTC has indicated that it wants to have another look on the back of the same issues that we have been talking about here.

Q4211  Paul Farrelly: As far as you are concerned, you fully complied with it?

Richard Allan: As far as I am concerned, we believe we fully complied with it.

Q4212  Paul Farrelly: From something I have read—you may call this partial, from a hostile litigant—the allegations against Facebook go beyond tough and sharp business practice to maximise revenues, cherry-pick purchases—Onavo, WhatsApp, Instagram—beyond interstellar data gathering, but really extend to crushing competition. At every point in this summary, I wrote down four letters in the margin—RICO, which you will be aware of: the Racketeer Influenced and Corrupt Organisations Act. Is that a fair thought to have in my mind?

Richard Allan: Not at all. There are two ways you can characterise our company. The litigant is clearly characterising it one way and you have reacted similarly. The other way to characterise the company is as a group of people who I have worked with closely over many years who want to build a successful business. They want the business to do good in the world. At the same time, they want it to be successful and grow. I don’t think there is anything fundamentally conspiratorial about that. They make the same kind of decisions that any business would make in order to be successful in what is a highly competitive space. You know the internet space—

Q4213  Paul Farrelly: I didn’t use the word conspiratorial. I was just going to use the word conspiracy because RICO covers that—conspiracy to damage someone’s business or property. Has anyone approached Facebook, to your knowledge, about filing a RICO suit against it?

Richard Allan: Not that I am aware.

Q4214  Paul Farrelly: Has Facebook ever prepared the ground and taken advice on how it would defend a RICO lawsuit?

Richard Allan: Not that I am aware.

Q4215  Paul Farrelly: Perhaps if it has, you could let us know. I shall follow up with one final set of questions on the Mainstream Network. You must have seen the evidence that was given to us about ads being run by the anonymous Mainstream Network. I have just tried to see whether they have any ads running now, and it does not appear that they have, so what have you done?

Richard Allan: Again, for those who are not so close to it, this is an entity that created a page on Facebook and ran ads described as, “Chuck Chequers”, which in our British parlance means, “Reject the Prime Minister’s Brexit deal”. As soon as there was press attention on that organisation, precisely because it is now transparent where the ads are being sourced from, they stopped running ads.

Our understanding, unless anyone here wants to correct me, is that there is nothing today illegal in the United Kingdom about an organisation running those kinds of ads. It would be very interesting—I know the Committee’s recommendations touch on this—to understand whether or not that should change in the future. As of this week, any organisation that wants to run ads like that will have to authorise. We will collect their identifying information. They will have to put on an accurate disclaimer. Their ads will go in the archive. The deterrent effect seems to be working, in that this organisation, once there was press attention and once we brought the new tools in, have stopped their advertising activity.

Q4216  Paul Farrelly: To have a page like that, what details do they have to file with you?

Richard Allan: To set up a basic Facebook page, they need to have a valid Facebook account. If we believe it is a fake account, we will shut it down and shut the page down. But if it is you, and it is your legitimate Facebook account—

Q4217  Paul Farrelly: Do you know who is behind the account?

Richard Allan: I don’t personally know who is behind it.

Q4218  Paul Farrelly: No—does Facebook know who is behind it?

Richard Allan: We would know whose Facebook account it was, and if there were an investigation by an entity that can legally require information, we would provide information in line with the normal procedures.

Q4219  Paul Farrelly: My final question: in the interests of transparency, because an election could happen at any time here in the UK, when you go away, could you tell us who is behind that account, and if you cannot tell us, write to us and tell us why?

Richard Allan: Yes; I will write to you either way.

Q4220  Chair: Hildegarde Naughton.

Hildegarde Naughton: Earlier this year, we held a referendum on abortion in Ireland. Both Facebook and Google stopped all advertising in relation to that referendum. That action was very much welcomed by the Irish Government at the time. My question is, is not that action an admission by your platform that it is used to disseminate fake news and to influence elections?

Richard Allan: That was a very specific response. Again, I was part of the decision-making process and I am glad that you welcome the ultimate decision. What we saw was that there is a very good non-governmental organisation, the Transparent Referendum Initiative, who were flagging to us clear evidence that there were significant amounts of money being spent by organisations on both sides of that debate from people outside of Ireland.

We then needed to make a decision. Irish law, as I understand it, is silent on the matter. It says that foreigners must not fund Irish political parties, but it doesn’t say anything about them spending their own money projecting stuff in—I assume, reasonably, because when the law was written there was local TV and radio, so it was all controllable and our systems did not exist. Because of that silence, we needed to interpret what we thought the intent or spirit of Irish law was, and we took the decision that projecting material in from outside Ireland was something we would not allow.

We are now very interested in the debate on C-76, the new law proposal in Canada, which has this explicit provision on preventing outside interference, and I think you also have legislation in front of you. That was the rationale for the decision. We tried to understand what the spirit of Irish law was, having to make a pretty binary yes/no decision. We are not comfortable; we would much rather that there is a legal framework that we can stick to where you have made those decisions.

Hildegarde Naughton: The threat was there. I know you cannot speak for Google, but it was Google and Facebook that made that decision during the referendum campaign to stop all advertising. Would it be fair to say that there was a threat that there would be fake news disseminated through your platform, and that there would be interference in the electoral process?

Richard Allan: For us, this was not a fake news question, as much as a straightforward political campaigning—

Hildegarde Naughton: There was potential for that to happen.

Richard Allan: It was election interference, as in someone was spending that—

Hildegarde Naughton: Yes—election interference. That is why the ads—

Richard Allan: A lot of the ads were “Vote yes” or “Vote no”. They were not fake news as such; they were just a straightforward political view.

Hildegarde Naughton: But the concern was that there was political interference.

Richard Allan: Yes.

Hildegarde Naughton: As I say, the Irish Government very much welcomed that. You made reference to legislation. We have two private Members’ Bills going through the Irish Parliament at the moment, and both Facebook and Google have come before our communications Committee in the Oireachtas and engaged constructively with our Committee in relation to that. One is a private Member’s Bill, the Digital Safety Commissioner Bill 2017, which would provide a take-down power to a regulator. The second Bill that we are looking at is the Online Advertising and Social Media (Transparency) Bill 2017, which would ensure that when you were viewing an ad, you would know who was paying for it. In light of the fake news and the data breaches that your company has been involved in over the past two years, do you accept that Facebook needs to be regulated through legislation like this?

Richard Allan: Yes. We very much welcome the updating of election law. Again, it is not to be critical, but many of the laws governing political communication were drafted pre-internet. They have things like bans or restrictions on certain forms of campaigning, but they are behind the times. We have seen some very interesting developments. In Brazil, I understand that the responsibility is put on the political actor. They can only use a service if it has the transparency tools. If they don’t do that, the political actor is in trouble. I understand that Canada is doing something similar around disclaimers and is also putting responsibility on the platform. You set the rules for your elections—it is not us—but to the extent that this is all clarified and we have a simple playbook to work to, that would be extraordinarily helpful.

Hildegarde Naughton: Thank you for that. The root of this—this is our concern—is that there is a lack of regulation of social media right across the board. It is not just Facebook, just to put that on the record. Our colleague is not here today, but Australia has its own Digital Safety Commissioner, and my understanding is that that process is working very well because of the engagement with social media platforms. I think I am correct in saying that the Digital Safety Commissioner in Australia has not needed to actually fine social media platforms because of the engagement process. I think people are listening and watching and citizens across the world, who we all represent, want to hear Facebook’s view on regulation. Are you serious about regulation on a global level? We are all here internationally representing countries across the globe. We need to regulate on a global level. What is your view on that?

Richard Allan: Absolutely. Again, to be very clear, my view is that the best outcome that we get for people—our users and your constituents, who are the same people—is where we are working together on this. I have tens of thousands of colleagues who are deeply committed to trying to protect the safety of our users. These are people who get up every morning and worry about what is happening on the platform and try to prevent harm. It is now tens of thousands. It used to be too few, and we accept that.

The best way that we can ensure safety is where we are able to be very open with Government about the problems we are seeing and have a very informed debate, and where we can work together on what the solutions are. Some of the solutions are on our platform. We can throw people off and we can collect information—there are certain things that we can do. Some of them absolutely need regulation. If somebody is a threat to children, us throwing them off Facebook is not enough; they will go somewhere else. We need then to work with the authorities to prosecute them. The UK Government has an intended White Paper for next year that we will work with them on.

As you say, we are working with you and with other Governments. The team I run does a lot of this work. We have recently announced that we will work with the French Government and bring regulators in and try to develop new regulatory models, whereas ideally we get experts in areas such as child safety, we get experts in technology, like us, and we get experts in the law and people who represent the users from the political world working together. That is where we think we will get the best outcomes.

Hildegarde Naughton: I think it is human nature where there is no regulation, no oversight and no watchdog, be it in business or an organisation, you need to have that oversight. Would you agree, in the case of Facebook and other social media platforms, that you also need that regulation on a global level?

Richard Allan: Absolutely, and that is our expectation: that we should be accountable for you, we should tell you what we’ve done, and if you’re unhappy you should have the power to take sanctions against us. I completely accept that principle.

Hildegarde Naughton: We have agreed here maybe to look at consulting how we would set out a set of rules at an international level, be it through the United Nations or at OECD level, and that would be very positive. I hope we can continue this work, Chair; I will take this opportunity to thank both Canada and the UK for coming together to co-chair this meeting. I think this is the start of an important process, and that we can all work together to ensure that there is momentum now for regulating social media and work together on that.

Q4221  Chair: Thank you; that is a perfect segue to Nathaniel Erskine-Smith.

Nathaniel Erskine-Smith: Thank you very much. The chair beside you is noticeably empty, Mr Allan, so I want to start with a statement from Mark Zuckerberg. He says, “We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry.” Do you agree with that statement? Do you think Mark Zuckerberg was genuine in making it?

Richard Allan: Yes.

Nathaniel Erskine-Smith: So just not sorry enough to appear himself before nine Parliaments. Taking a broader view of responsibility, does that include the failure to sufficiently prevent election interference, including in 2016? If you were to take a broader view of responsibility, that would have meant taking more steps to prevent election interference. Is that fair?

Richard Allan: Yes.

Nathaniel Erskine-Smith: Okay. On the failure to sufficiently prevent the spread of hate, including in the Rohingya genocide in Myanmar, if you are to take a broader view of responsibility, Facebook would have taken greater action to prevent hateful content on its platform. Is that fair?

Richard Allan: Yes, and I think we have acknowledged those issues publicly.

Nathaniel Erskine-Smith: So on the failure to prevent the creation and approval of problematic targeting audiences—for example, 168,000 people under the category of white genocide conspiracy theories—if you were to take a broader view of responsibility, you would prevent the creation of such audiences. Is that fair?

Richard Allan: Yes, and we do today.

Nathaniel Erskine-Smith: Okay, so was Facebook malicious in any of this conduct?

Richard Allan: Again, that is not my experience. The people I work with are sincere about building a service that they believe offers value to people, and we believe that in most cases we do that. We have recognised that we are doing something new, so we hit new problems.

Nathaniel Erskine-Smith: So the failure to take that broader view of responsibility was not malicious, is my point—it was negligent.

Richard Allan: Again, there were things that we missed, that we were either not sufficiently focused on or too slow to react to. On the examples you have cited, we have said that. Take the specific example of the Rohingya and the situation in Myanmar: we should have hired more Burmese speakers to review Burmese-language content earlier. We have now done that.

Nathaniel Erskine-Smith: It sounds like you would agree that it is not malicious and it is negligent. I want to briefly touch on reciprocity; this is new information for me, but I reviewed it recently. Did Facebook ever make the purchase of mobile ads a condition for developers’ continued access to friend data?

Richard Allan: No.

Nathaniel Erskine-Smith: Okay. Did Facebook treat developers’ APIs differently if they had purchased mobile ad product?

Richard Allan: I am not aware of that; again, that is a very specific question that I would need to look into.

Nathaniel Erskine-Smith: Would Mark Zuckerberg have an answer to that question?

Richard Allan: I am not sure he would; I am not sure anyone would. I need to understand the very specific—

Ian C. Lucas: I think he would.

Nathaniel Erskine-Smith: You probably should answer that question to this Committee at a future date. Perhaps Mark Zuckerberg could. Did Facebook ever tell app developers that they needed to start buying Facebook’s mobile ad products or their access to friend data would be cut off?

Richard Allan: Again, taking a step back, you are quoting, I know, from the emails that have been circulated—

Nathaniel Erskine-Smith: I am not.

Richard Allan—or you are discussing information that was shared within those emails.

Nathaniel Erskine-Smith: I am not.

Q4222  Chair: There has been no disclosure. To be really clear, for the record, there has been no disclosure outside the DCMS Committee of any of the information contained in the emails.

Nathaniel Erskine-Smith: I will again ask: did Facebook ever tell app developers they needed to start buying Facebook’s mobile ad products or their access to friend data would be cut off?

Richard Allan: No.

Nathaniel Erskine-Smith: Did Mark Zuckerberg ever cancel an announcement to implement API restrictions, including friend data being publicly removed? Is that something that Mark Zuckerberg ever did?

Richard Allan: I am not aware; I would need to check.

Nathaniel Erskine-Smith: Mark Zuckerberg would be able to answer that question, though.

Richard Allan: The announcement of a cancellation of a development—do you have a timeframe?

Nathaniel Erskine-Smith: Well, it would presumably have been before 2014, I would expect. I don’t know that answer; you don’t know that answer. Mark Zuckerberg would know that answer.

Richard Allan: Not if it was before 2014. I think he, like me, would have to check.

Nathaniel Erskine-Smith: I want to get this into a Canadian context. In 2009, the Office of the Privacy Commissioner was part of an investigation and a settlement. They flagged for Facebook the question of unrestricted sharing of friend data with app developers. Are you aware of that?

Richard Allan: Yes.

Nathaniel Erskine-Smith: Okay. So in 2009 it was flagged as an issue, but it was not addressed until 2014. Why the five-year gap?

Richard Allan: Again, I hope I was clear in response to an earlier question that our understanding was that this was not illegal—

Nathaniel Erskine-Smith: Okay, great—I am glad you have said that, because the Canadian law requires meaningful consent. We don’t know each other, Mr Allan, but let’s assume that we are friends on Facebook. Is it your view that when you agree to use an application, the fact that I have failed to go into my privacy settings and check a box means that I have given meaningful consent to Facebook to allow you to share that information? Is it your view of the law that that is meaningful consent?

Richard Allan: Yes. Again, just to clarify, when you signed up for Facebook—

Nathaniel Erskine-Smith: Not in Canada!

Richard Allan: Again, to be clear, these issues have been contested. They were contested then and they will be contested now. Our view is that when you signed up for Facebook, we were sufficiently clear that part of the package of signing up for Facebook was that it is a social experience—the clue is in the title, “social network”. As part of the social experience, when you share information with your friends, your friends may share it on. Remember, as soon as I share a photo with you—we are friends now—you can take that photo and do what you like with it.

Nathaniel Erskine-Smith: And why would I object to your using Farmville and all my information being shared with the Farmville app developer? Why would I object to that?

Richard Allan: It’s not all information.

Nathaniel Erskine-Smith: Let’s talk about Cambridge Analytica. In the Canadian context, 270 people used the “This Is Your Digital Life” app.

Richard Allan: 270,000?

Nathaniel Erskine-Smith: No, 270 people.

Richard Allan: In Canada?

Nathaniel Erskine-Smith: In Canada.

Richard Allan: Right.

Nathaniel Erskine-Smith: And 620,000 had their information shared with the developer. Is that acceptable to you?

Richard Allan: The sharing on—the use of that data—was wrong. A benign application like the one I have described, where I sign up for it and pull in all my friends’ birthdays into my birthday calendar, would be okay. That was the intended use.

Nathaniel Erskine-Smith: Okay, but it was every app developer. It wasn’t this specific birthday app or that specific calendar app; it was every application. It was a rudimentary application to make a cow jump, it was a Farmville application—it was Cambridge Analytica. The idea that you can point to a calendar app and say, “Therefore we share all information and all friend data with every app developer”—how is that acceptable to you?

Richard Allan: Not all data.

Nathaniel Erskine-Smith: I said “all friend data”.

Richard Allan: Not all data of all friends. Applications needed to ask for permission. You can argue that this was not sufficiently well enforced, and we can have that discussion, but we said very clearly in our terms, “If you are an app that is running a birthday calendar, you can ask for birthdays. You can’t ask for photos and all this irrelevant material if that is not the intent of your app.” GSR went outside that framework.

Nathaniel Erskine-Smith:This has been a tough time at FacebookWhile these are hard problems we are working hard to ensure that people find our products useful and that we protect our community from bad actors.” That is a recent statement from Facebook. Has it occurred to you that in relation to privacy, Facebook might be becoming one of these bad actors?

Richard Allan: I don’t believe we are.

Nathaniel Erskine-Smith: Okay. With respect to accountability and legal liability, you have supported the Stop Enabling Sex Traffickers Act. Should Facebook be accountable for all illegal content that is produced on their platform and not taken down within a reasonable period of time?

Richard Allan: I don’t think that the internet will be well served by removing all intermediary liability protections, but I do accept, as you saw with SESTA, that the idea that we are exempt from everything is also out of date. To go back to Ms Naughton’s question, we need to get to the right balance on where our responsibilities lie.

Nathaniel Erskine-Smith: The last thing I will say goes back to the idea of trust. We don’t have Mr Zuckerberg here today, which is incredibly unfortunate. I think it speaks to a failure to account for the loss of trust across the globe with respect to Facebook and its users. It was not until recently that you started to notify Canadian users that their information was shared in a Cambridge Analytica context. A sense of corporate social responsibility, particularly in the light of the immense power and profit of Facebook, has been as empty as the chair beside you. Thanks very much, Mr Allan.

Q4223  Rebecca Pow: You are welcome to our Committee, Lord Allan. I want to pick up on some quite general points. A recent article in The New York Times said that Facebook “ignored warning signs and then sought to conceal them from public view” and mentioned allegations that Facebook had had power to exploit and disrupt elections, broadcast propaganda and inspire hate campaigns. What is your view about that? Do you agree with those allegations? They are quite outspoken.

Richard Allan: My own view is that I thought the article, which obviously I read, mischaracterised discussions and people that I have engaged with over the last two years. We are careful about when we release public information. We try and understand how it is going to land. In particular, in the context of some of the issues that they referred to in that article, there is no point in publishing a piece of security information if everyone is going to allege that it is partisan and dismiss it. We do have robust internal discussions about the release of information, but actually I think we are ahead of the game in terms of the amount of information that Facebook publishes, which I think is actually significant compared to what you know about what is going on on the internet generally.

Q4224  Rebecca Pow: So regarding the fact that personal data from 87 million people was harvested and then potentially used in the Trump campaign, you are saying that you didn’t act on that, you didn’t react—you think none of that is relevant.

Richard Allan: No, and we have acted on it. They were the original set of accusations, which I think we discussed earlier and were reported in the Guardian, whether actually the Ted Cruz campaign, I believe it was, had contracted with Cambridge Analytica, who had contracted with the company GSR. That was the original set of allegations. It was in the public domain. We took steps, as we have explained to this Committee, to try to ensure that any data that had been obtained had been destroyed. We received those assurances. I understand that people are not satisfied with the fact that we only got written assurances, but we thought we had done the right thing around that incident.

Q4225  Rebecca Pow: I know that members of the public, when they are watching or listening to this very complicated saga that is ensuing, would be forgiven for thinking that Facebook’s main aim was really all about expanding its business and the value of its shares, and that potentially that has been put above public safety.

Richard Allan: I would contest that characterisation. I do not want to hide the fact that we are a business. We offer an amazing service to millions of people around the world, which we are able to do for free because we have this ad-funded model. We are doing that with a mission. We have a purpose. We want our service to be good, we want it to be useful and we want it to be safe. As we have discussed repeatedly today, we don’t always get that right, but our objectives are very clear. We want a service that is useful and engaging, and that lots of people want to take up, and we want it to be successful.

Q4226  Rebecca Pow: I was interested to hear you say earlier, Lord Allan, that you said it was a win-win-win all round for everybody. I put it to you that lots of people might think now, hearing all the evidence that is coming out, that in many cases it is a lose-lose-lose.

Richard Allan: I would not characterise it that way. We are talking about some very specific, very important things that have gone wrong. I think that is right. We are not focused, because it is not so interesting for the purposes of this hearing, on the things that go right. The things that go right are millions of people connecting with you as politicians, with small businesses in their area, with brands that they love or with content that they love. They do that every day, day in and day out, and I think the value is there. I think the win-win-win is largely there. There are some specific issues we need to deal with. Some we need to deal with ourselves, and some will require regulation.

Q4227  Rebecca Pow: It is a great pity that Mr Zuckerberg didn’t turn up, but on that note that you are making there, I believe that in a video conference to all his staff recently Mr Zuckerberg said that much of the criticism of Facebook over the last 18 months, relating to election security, content moderation and disinformation, had in fact been fair and important. Would you agree with that?

Richard Allan: Absolutely, yes.

Q4228  Rebecca Pow: Perfect. He sounds quite confused, because in the next part of an article in The Times he said that a lot of the criticism had been unfair and untrue.

Richard Allan: The criticism of individuals. Again, to separate that out, there are things that we need to address. Quite rightly, we are having conversations like this and many other conversations around the world, around the specific issues we need to address. We are building our defences against election interference and we are working on issues like false news, which we have managed to dramatically reduce. At the same time as working on those issues, we are seeing, I think, some quite personal attacks on the staff who work at the company, which we don’t recognise. I work with these people. I don’t recognise the mischaracterisation of the individuals I work with, who I don’t believe are involved in some vast conspiracy, but are actually decent people like yourselves, trying to do a decent job.

Q4229  Rebecca Pow: Is there a feeling among Facebook and the staff that trust in Facebook has plummeted?

Richard Allan: Absolutely. It is a major concern and a major topic of conversation. We recognise both through our own actions and external events that we are not in a good place in terms of trust.

Q4230  Rebecca Pow: The Information Commission came out with a fine for Facebook because it had deep concerns about what is going on there—saying that you put UK members at risk and you have not done enough to address all the allegations. Now, however, I believe that Facebook is challenging that fine. On what grounds are you challenging that?

Richard Allan: What we are challenging is the judgments. Again, I think you have a session later to explain the rationale. We have tried to do that in public. It is not the amount of the fine that is the issue here. The issue is that there is language in the judgments that goes to the heart of the thing we have been discussing, which is: how do you assign responsibility between the first party and the third party, when you have one of these developer relationships? We think that is very important to test, not just for us but for the whole sector. The appeal is the route—and the Information Commissioner herself has said that it is an appropriate route—for us to have that legal question looked at in some detail.

Again, just to understand the concerns, some of the language in there suggests that if I have an email or a message that has been sent to me by somebody and I share it with a third party, that may be illegal and cause problems back to the first party. That is the kind of question that it is really important we get right, because it will affect the entire ecosystem of internet applications.

Q4231  Rebecca Pow: On the point that my Irish colleague made, you said you were pleased that we were having an open discussion about regulation. Are you really pleased, or does that appeal not demonstrate that you are trying to get out of your responsibilities?

Richard Allan: I am pleased, personally, and the company is very much engaged, all the way up to our CEO—he has spoken about this in public—on the idea of getting the right kind of regulation so that we can stop being in this confrontational mode. It doesn’t serve us or our users well. Let us try to get to the right place, where you agree that we are doing a good enough job and you have powers to hold us to account if we are not, and we understand what the job is that we need to do. That is on the regulation piece.

Chair: We can all hope.

Q4232  Brendan O'Hara: At the start of this session, my colleague from Canada, Mr Angus, expressed our collective disappointment that Mr Zuckerberg chose not to appear before this Committee. You were sent in his stead. Were you sent because, in the whole of the Facebook empire, you were believed to be the person best placed to answer this Committee’s questions, or were you sent as the person best placed to defend the company’s position?

Richard Allan: I believe I am here because I have been at Facebook since 2009. Most of the issues that you want to address are issues that I have first-hand knowledge of, and therefore I can answer your questions.

Q4233  Brendan O'Hara: You have not answered the question. Were you sent because you, in the entire Facebook empire, are the person best placed to answer all of these questions, or were you sent because you are best placed to defend the company?

Richard Allan: I am sent because I can answer your questions. We tried—and as you will remember, we sent Mike Schroepfer, our chief technical officer.

Q4234  Brendan O'Hara: Yes, I remember that well.

Richard Allan: He spent five hours here. We thought he had the information you needed; you were not happy. Issues related to elections and election interference are things that I own.

Q4235  Brendan O'Hara: Who decided that you were the best-placed person in the whole of Facebook to come before this international Committee?

Richard Allan: I volunteered myself.

Q4236  Brendan O'Hara: So you volunteered? So Facebook never sat down and said, “This is a hugely important multinational representing hundreds of millions of people. Who is best placed to go and explain our position?” Nobody did that. You put your hand up and said, “I will do it.”

Richard Allan: We had that discussion, and the discussion was, “We have sent Mike Schroepfer. They were not happy with Mike Schroepfer. Who else is in the best place to explain what it is that we do, particularly in relation to elections? That is the subject that we expect to be covered here.” I said, “I believe I have the knowledge that this group needs.”

Q4237  Brendan O'Hara: Let me press you on this; when you, I presume—Facebook—sat and looked at the transcript of the Mike Schroepfer interview session and also the session we had in Washington, and the transcript, and the number of times where Mike Schroepfer said, “I’m sorry, I will have to write to you on that,” they thought you would be the best person to come and clear up all of those unanswered questions.

Richard Allan: To be precise, both for the issues that you want to raise in the UK Committee—but also I now work on election issues globally—we looked at the range of different Parliaments coming here. I have been following the fake news law in Singapore, I have been following C-76 in Canada and I have been following the Lawless legislation in Ireland. This is the stuff I work on. Our working assumption was that that was what you would want to discuss.

Q4238  Brendan O'Hara: Okay, so having read the transcripts of all the previous sessions we have had with Facebook, you volunteered to come here. Having read those transcripts, you must have been aware of the number of times Facebook has said to this Committee, “Let me write to you on that.”

Richard Allan: Yes.

Q4239  Brendan O'Hara: You have been here for almost exactly 90 minutes. How many times have you said “Let me write to you on that?”

Chair: He will have to write to us afterwards with the answer to that question.

Richard Allan: I think it is three or four, actually. I don’t think it is 100.

Q4240  Brendan O'Hara: You take my point. We are desperately trying to get to the nub of this. There is a certain irony, is there not, in a global online giant like yourselves resorting to writing to people constantly to try to get the answers? So let me ask you, what light do you think you have shone on the issues that we have raised today and previously that has provided this Committee with greater clarity and more understanding than Mr Zuckerberg could have done?

Richard Allan: I think that is for you to judge.

Q4241  Brendan O'Hara: No, I am asking you.

Richard Allan: I believe I have given you insights into the way we think about regulation, the way we think about election interference, decision making—

Q4242  Chair: I am sorry to interrupt, but I think we are going to have to move on at this point.

Bob Zimmer: I would first of all like to thank Mr Collins for establishing this Committee for this purpose. My first question—and really my main concern with this whole study—has been the foreign influence in election campaigns. I am, I guess, deeply troubled by Facebook’s response to that. When I asked Jim Balsillie in our committee, “If we don't change our laws in Canada to deal with surveillance capitalism, is our democracy at risk?” he said, “Without a doubt.” What do you think?

Richard Allan: We all work in the political sphere. What we want are free and fair elections. What we don’t want is people cheating—working around the rules. We all know political campaigns go up to the line, and we try to stop them going over the line. I think there are a number of vectors that are problematic. One is the foreign interference. One is that now people can, as we discussed in the context of Ireland, project their views into a country more easily than before. We need to acknowledge that and do something about it. The other aspect is the domestic side of things. Again, we talked about Mainstream Network earlier, and part of the point of our transparency tools is to try to stop the people who are perhaps most motivated, which is people inside your country trying to do dirty tricks campaigns. Those are the two areas that I think we need to focus on, and we are trying to contribute our piece with the technology that gives you the transparency to spot both of those forms of inappropriate activity.

Eamon Ryan: Is there not also a threat to politics when the trust of people is lost and their data is used in an underhand way?

Q4243  Chair: I think we will park Mr Ryan’s question and then continue.

Bob Zimmer: Did you want to answer first?

Richard Allan: Yes, and I think the work that has been done in the United Kingdom is long overdue. Again, I have worked in political campaigns. I will be honest; I think there is a sort of carelessness and a bit of a “Well, if you win it is okay” attitude in political campaigning that I have seen and experienced.

Eamon Ryan: I mean in social media companies it is used in an underhand way.

Q4244  Chair: I think Bob was happy for Lord Allan to finish the question, but I don’t think he had ceded his time.

Bob Zimmer: Yes, I will keep going. I don’t have my gavel here from Canada.

I am not confident, especially from previous statements by Mr Zuckerberg. I will read something that he said a few years ago, because he seems to deflect and really say, “Well, look, it’s not that big a deal that we are involved, or our platform is being used, in election campaigns.” This is exactly what he said: “Personally I think the idea that fake news on Facebook—of which it’s a very small amount of the content—influenced the election in any way is a pretty crazy idea.” Do you agree with Mr Zuckerberg?

Richard Allan: He said that that was not the right way to frame the discussion—

Bob Zimmer: That is what he said, so I am asking for your response. Do you agree with Mr Zuckerberg and his statement?

Richard Allan: I am saying that it could be more elegantly said, but the point is that if you have domestic organisations and political parties spending this much, and you have evidence of inappropriate activity that is spending this much, if you are looking for why the election went one way or the other, we should start with the big expenditure. That doesn’t mean that the small expenditure is not a problem or that we should not look at it, but we should also not lose perspective if we are trying to understand why an election went one way or another. That was not elegantly said—

Bob Zimmer: I’m actually not sure what you just said.

Richard Allan: I am saying that in an election campaign a huge amount of legitimate activity is carried out by all the parties, depending on the country, and millions are potentially spent on ads online, or on TV—a huge amount of activity. We did spot activity that was wrong and should not have happened. There was political activity directed from Russia. If you say to me, “Was the election won because of the mass of activity here, or because of a small amount of activity here?”—it is reasonable to say that if we think there is a problem, we should start by looking at campaign spending generally.

Bob Zimmer: Mr Allan, I think the problem is that you are again deflecting. We saw in the New York Times—this has been referred to by another member of the Committee—about delay, deny and deflect, and this seems to be more of the same from what you are saying today.

Richard Allan: No, I am acknowledging that they are both problems. You asked me why my CEO made that statement, and I am trying to describe to you the thinking that I think went behind it. That is not meant to divert attention from the fact that all those are problems that need to be addressed.

Bob Zimmer: I guess what I am hoping—I put it down as apology 2.0, although it is really 0.20. Here we are again, hearing another apology from Facebook: “Look, trust us, there are regulators and so on, but we really do not have that much influence in the global scheme of things.” In this room we represent more than 400 million people, and to not have your CEO sitting in that chair is an offence to all of us, and to our citizens as well. I have another statement that is a quote from the article “Delay, Deny and Deflect”, which states, “But as evidence accumulated that Facebook's power could also be exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe, Mr Zuckerberg and Ms Sandberg stumbled. Bent on growth, the pair ignored warning signs and then sought to conceal them from public view.” That is deeply troubling, I would say. “At critical moments over the last three years, they were distracted by personal projects, and passed off security and policy decisions to subordinates,”—again, I would say that you are a subordinate before us today—“according to current and former executives.” Do you agree with that paragraph?

Richard Allan: No, I don’t agree with that characterisation at all. The people I work with and know are very focused on trying to do the right thing. Again, I don’t want to hide the fact that we are trying to grow and develop our business—that is not something we shy away from—but I have been involved in numerous discussions where issues of significant importance have come up and been debated fully and thoroughly, and we have taken action. Again, some of those actions have taken time to filter through, but perhaps the most significant that I have seen is when I started in the company—we were a company of a few hundred people, and it was difficult for us to get across a lot of the issues of safety and content.

Bob Zimmer: What were the quarterly profits from 2018? Can you state what those are for the room?

Richard Allan: I would need to give you that number. I do not have it to hand.

Bob Zimmer: It is around $13 billion.

Richard Allan: Thank you.

Bob Zimmer: We had a previous conversation in this Committee—I think somebody referred to Facebook as a high school platform, but I referred to it as making grown up dollars. I would challenge you. You seem to want to deflect, deflect, deflect, and at the same time you are collecting billions of dollars. We represent 400 million people, and yet there is an empty chair for Mr Zuckerberg. What do you say to the 400 million constituents that we represent to show you are taking that seriously?

Richard Allan: I would say that we are making the investments. I am sure you follow our stock price as avidly as we do, and one of the reasons that it is where it is—

Bob Zimmer: Mr Allan, you just told me that there are other, bigger issues involved in elections campaigns than what causes them to be won or lost. You are still downplaying the role that Facebook has in this situation. There are 23 million users in Canada, which has a population of 36 million. Facebook is a huge player on the global scheme. Just multiply that figure around the globe—you have 2 billion users. You still don’t seem to have a grasp on how much effect and influence you have on election campaigns.

Richard Allan: I actually don’t accept that characterisation. To be clear: we have told our investors and the market that we are making significant investments in infrastructure and people. We now have, I think, 30,000 people working on safety and security.

Bob Zimmer: We just saw that you still had to pull down 115 pages during the US midterm elections. How long did they have an influence on that election? They were pulled down two days before the election happened. There were two media reports of Cambridge Analytica—the group that is supposed to be banned from Facebook—still running ads under that banner for two weeks. It was not caught by Facebook, a company that with quarterly revenues of $13 billion. How do you answer that without just another apology?

Richard Allan: We are making those investments. This goes to the heart of the problem. We now have a world-leading security team who are finding and catching those people and taking them down, but if we tell you about it, you say, “How did they get on there?” They get on there because they are cunning and clever, and we need to stay ahead of them. I think we are in a much better place today, sitting here in front of you, than we were two years ago.

You will have an election in Canada next year, and there will be problems. People will cheat and work round the system, but we will catch most of them. I think you share our goal: the Canadian election should not be seen to have been unduly influenced by online activity through our platform. We absolutely share that goal, and we are making the investments. We cannot guarantee that there will not be instances of people cheating that we catch and have to take down during the campaign. We would love to work with you and the authorities in your countries on getting better at identifying those people and taking them down.

Q4245  Chair: Sun Xueling.

Sun Xueling: Mr Allan, I am here with two of my parliamentary colleagues from Singapore. It is a small country with 5 million people, but we have a highly digitally connected population. Online falsehoods can swiftly destabilise a small country such as Singapore, affecting different aspects of our country: our national security, racial harmony, democratic processes, social cohesion and trust in our public institutions. We have found it necessary to travel halfway across the world today to ensure that our voices are heard. Facebook, a tech giant, should start taking responsibility for online disinformation. How is Facebook policing the setting up of fake accounts and the shutting down of those fake accounts and their networks?

Richard Allan: I appreciate your making the long journey here to have this discussion; I hope that it is useful. The shutting down of fake accounts is an ongoing battle that we have. As we understand it, most fake account creation is done with commercial, not political, intent. You will have seen that people want to create fake accounts to sell them: they sell followers, and they are trying to create networks that they can use for the purpose of pushing out spam messages. We have explained that we will take down hundreds of millions of such accounts over a three-month period. Most are taken down within minutes of account creation. The best way I have found to describe this is to say that it is a bit like a robot war. People have created programmes on computers that create millions of fake accounts on Facebook and other networks—they are blasting away. We have artificial intelligence systems that try to understand what fake accounts look like and then shut them down as soon as they come up.

That is the mass of fake accounts. More insidiously—this is perhaps more relevant to the political question—there are people who are very careful to create one or two accounts and then act as though they are normal Facebook users. The issue that we saw in the United States with the Internet Research Agency often involved such accounts. There was no mass creation; they had very carefully curated somebody so that they look real, even though they were not.

Sun Xueling: Given that Facebook looked into the issue of Russian interference about six months or a year after the event—even as recently as August to October this year—and that it is still removing fake accounts linked to Russia, do you agree that Facebook’s playing catch-up to stem online falsehoods is reactive and perhaps insufficient?

Richard Allan: We are trying to get better all the time. I would point to some research that we shared with the Committee, which shows that low-quality information has decreased by 50% across the Facebook platform due to a number of measures that we have taken. Those are independent studies by academics and by Les Décodeurs, which is the fact-checking arm of Le Monde in France; those are not our studies. People have looked at it and seen a significant decrease. The battle is not over, but we do believe we have started to make inroads.

I want to come back to the specific question of somebody who very carefully curates a fake account. Those are perhaps the hardest things to do. They use technology—they use what they call a VPN—to appear as though they are coming from a different country. They get hold of photos that look very legitimate. That stuff is hard, and that, frankly, is where we often need a lot of co-operation with law enforcement agencies so we can understand what is going on and try to deal with those people.

Sun Xueling: Given the evolving modus operandi of its adversaries—I think Mark Zuckerberg called them well-funded and sophisticated, and you used the words “cunning” and “clever” earlier—do you agree that it is possible that future elections could likewise be interfered with by methods that Facebook will be able to discover only after the fact?

Richard Allan: As I replied to Mr Zimmer, I think it is the case that we will continue to discover groups of people who are doing things that they should not be doing at election time. Our job is to minimise that as far as we can, but it is unrealistic, as long as we have an internet, to think that there will nowhere be any kind of attempt to interfere. The technology does give people very strong tools to use against us.

Sun Xueling: How is Facebook prioritising credible content and deprioritising falsehood while it continues to push content that readers want to see? You are potentially creating online echo chambers and amplifying falsehoods.

Richard Allan: There are various things we are doing. I will try to explain them briefly, because I know we have limited time. One of the major changes we made was something called meaningful social interactions, to prioritise content that comes typically from your family and friends. That kind of content tends to be less controversial than some of the content that comes from other sources.

Sun Xueling: But that could create online echo chambers, right? People are sharing information that they want to see among a group of people they are close to—self-selected individuals.

Richard Allan: There is some good research that shows if you have a reasonably broad family and friends group, you actually get more diverse content from a group of family and friends—that is certainly my experience on Facebook—than you would if you were simply going to the same restaurants and bars you normally go to and dealing with only one group of people. That is one piece.

A major thing we have done is to try to deal with so-called ad farms. It is really important to understand that, with a lot of fake news, they just want you to click on it. You then land on a page that has 20 ads, and the person gets 0.1 cents for your landing on that page. We are now trying to follow links through and understand whether that is the landing page. If it is, we are trying to prevent those links from being shared.

Sun Xueling: To go back to an earlier point, do you agree that more would be achieved if Facebook worked with the relevant authorities to take down false information online and shut down accounts?

Richard Allan: Absolutely, and I know you are looking at a piece of legislation at the moment. One of the questions that is raised with us is why we don’t just decide to take down false content. This is where I want to be clear: we do think it is important that there is some kind of judicial process in place. I know France has just passed a similar law. If someone claims that a politician is corrupt and that is false and we do not take it down, that is a problem. If it is true and we do take it down, that is equally a problem, because we have stopped somebody shedding light on a genuine harm. The best person to make a decision about whether that claim is true or false is not Facebook or a Facebook employee; it is the relevant judicial authority in any country.

Sun Xueling: But do you agree that Facebook needs to take into account the potential ramifications of online falsehoods for society? If those falsehoods are allowed to pervade, that slow drip of poison can lead to huge ramifications in society. Do you agree?

Richard Allan: Absolutely. If content is linked to real-world harm, our policies mean we will remove it today. The harder area, which I know you are discussing and the French and others are discussing, is where content is not obviously linked to real-world harm—when we would take it down—but is a claim that people feel is disruptive and disturbing and is characterising an election.

Sun Xueling: I would like to end with this question. Will Facebook be open to adopting a regulatory approach, working with Governments to develop solutions and undertaking voluntary reporting and independent audits?

Richard Allan: Yes. We are keen to see a properly regulated structure. Again, there is detail in there—that does not necessarily mean turning up and saying, “This is exactly what we want. We will agree to everything.” Again, part of what we are trying to do in the process of working with the French Government is to understand how we can work together with regulators.

Q4246  Chair: We have to move on. Edwin Tong.

Edwin Tong: Good afternoon, Mr Allan.

Richard Allan: Good afternoon.

Edwin Tong: Would you accept that there is no place for any content on Facebook that attacks people on lines of race, religion or ethnicity?

Richard Allan: Sorry, I am being distracted by something coming in. Please could you repeat it?

Edwin Tong: I have extracted some materials so that we can move along more quickly. I will repeat the question. Would you accept that there is no place for any content on Facebook that attacks people on lines of race, religion or ethnicity?

Richard Allan: Yes. Our policies are very clear that that is not acceptable.

Edwin Tong: In fact, your policy is very clear that when such attacks are made you are committed to removing it any time you become aware of it. Would that be right?

Richard Allan: Yes.

Edwin Tong: If you go to page one of the extract, you will see Mr Zuckerberg’s Facebook extract from August 2017. I will read you the portions that I have highlighted. He says:when someone tries to silence others or attacks them based on who they are or what they believe, that hurts us all and is unacceptable. There is no place for hate in our community. That's why we've always taken down any post that promotes or celebrates hate crimes or acts of terrorism”. Would that be correct?

Richard Allan: Yes.

Edwin Tong: And you do so because you are aware that such content has the potential to divide communities and incite violence, tension, hatred and strife. Would you agree?

Richard Allan: Yes.

Edwin Tong: Right. Would you go to the second page of that extract? I think you will be familiar with what has happened in Sri Lanka recently in March this year.

Richard Allan: That’s right.

Edwin Tong: The post that was put up originally is the part that is in pink. It’s in the Sinhalese language, the language of the native Sri Lankans, and the post translates to: “Kill all Muslims, don’t even let an infant of the dogs escape.” That would be properly characterised as hate speech.

Richard Allan: Clearly in breach of our terms of service.

Edwin Tong: It’s a clear breach, isn’t it?

Richard Allan: Yes.

Edwin Tong: It was then put up at a time when there were significant tensions between the people in Sri Lanka and Muslims, causing damage to property, deaths even, riots, damage to mosques, and eventually it resulted in the Sri Lankan Government declaring a state of emergency in Sri Lanka. Would you agree?

Richard Allan: Yes.

Edwin Tong: Would you agree that in the context of that kind of tension occurring in Sri Lanka, putting up such a post would invariably travel far, stress those tensions even more and divide the community?

Richard Allan: Yes, that’s high priority content for us to remove.

Edwin Tong: If you look at the page, it was then pointed out by one of your users on Facebook and subsequently picked up by Harin Fernando, who was the communications Minster of Sri Lanka at the time. Why is it that Facebook has refused to take it down?

Richard Allan: The comment should be down. If it’s not down, it should be. What I am seeing is that there are two possible reasons why the content was not taken down at the time. One is simple error on the part of the person who looked at it.

Edwin Tong: Let me stop you there. Go over the page of the second document and you will see the response by Facebook when you were asked to take it down. It says, “Thanks for the report – you did the right thing.” You have looked over the post, so there was no mistake, Mr Allan. It says:it doesn’t go against one of our specific Community Standards.

Richard Allan: That was a mistake, though. I want to be clear that somebody has made a mistake in the review.

Edwin Tong: Mr Allan, this is a very serious, egregious mistake. Would you agree?

Richard Allan: I agree, yes.

Edwin Tong: It goes completely against your own policy—

Richard Allan: That’s right.

Edwin Tong: To take down immediately.

Richard Allan: Yes.

Edwin Tong: So would you accept that this case illustrates that Facebook cannot be trusted to make the right assessment on what can properly appear on its platform?

Richard Allan: No. We make mistakes.

Edwin Tong: A serious one.

Richard Allan: A serious mistake. Our responsibility is to reduce the number of mistakes. I still think that we are best placed to actually do the content removals. That’s why we’re investing very heavily now in artificial intelligence where we would precisely create a dictionary of hate speech terms in every language. We are working through the languages. The best way to resolve this is a dictionary of hate speech terms in Sinhalese that gets surfaced to a Sinhalese-speaking reviewer who can make sure that we do the job properly.

Edwin Tong: Well, Mr Allan, in this case, while one excuse might be that your reviewers don’t understand Sinhalese, you had the communications Minister of Sri Lanka telling you that this is hate speech and to take it down. You review it; your people review it. You said hundreds of thousands of people review it, but they don’t seem to abide by the same philosophy as you have expressed in your own policies.

Richard Allan: Again, we make mistakes. Our job is to reduce the number of mistakes. I completely accept, as we have discussed previously, that we should be accountable for our performance to you, your colleagues, to every Parliament and Government sitting round this table today.

I would love to be able to explain as part of that process how we do what we do, the challenges of getting it right, the challenges of doing it at scale. Not because I expect sympathy, but because it is important to understand, if we are going to solve the problem, what it is we need to do to solve it. I think that will be a combination of us getting better at our jobs, using better technology, and frankly the accountability of you and your colleagues standing over us and making sure that we do it correctly.

Edwin Tong: In this case, the post only came to a halt when the Sri Lankan Government blocked Facebook.

Richard Allan: Yes.

Edwin Tong: Do we have to get to that kind of measure?

Richard Allan: No. We would much prefer not to.

Edwin Tong: How would Governments trust that Facebook would live up to its own promise?

Richard Allan: Again, I think this is where the openness has to be there. I hope that you have a constructive relationship with my colleagues in Singapore who work on these issues. I want us to be in a position where we share with you the good and the bad about how we think we are doing, in full expectation that you will always be pushing us to be better. That is the right process.

Edwin Tong: We look forward to that because what has happened, by way of example with Sri Lanka and several others, should not be allowed to happen ever.

Richard Allan: No, and as an employee of Facebook, I am ashamed that things like this happen. They do, and they shouldn’t.

Q4247  Julie Elliott: I am quite aghast that you would refer to something as serious as that as a mistake, quite flippantly waving your hands. I find that quite offensive, I have to say.

Richard Allan: I didn’t mean it to be flippant. It was a mistake, yes.

Q4248  Julie Elliott: I think it is a bit more than a mistake. There was something you said earlier. You said that most fake accounts are commercial not political.

Richard Allan: That’s right.

Q4249  Julie Elliott: That might be correct, but political accounts that do this have had an influence on outcomes of elections and on destabilising democracies. Do commercial accounts have that effect?

Richard Allan: Generally not. Again, to be clear how it works. For people who are creating commercial fake accounts, the phrase is clickbait. They just want you to click something so that they can make money. The classic scenario is that Johnny Depp, or some celebrity, moved into your town, because people will click that.

At certain times, politicians become very clickable. Obviously, in the United States election, both Donald Trump and Hillary Clinton reached that status. So, the people who are trying to make money, start using Donald Trump and Hillary Clinton headlines, not in that case because they want to influence the election but because they want to get the clicks. The fact that that is political will, I guess, have some influence on the election and that is what people are now studying. Again, not to diminish it, I am just trying to explain that mechanism.

There is another whole set of people who are pushing out false stories, with a deliberate political agenda. They don’t care about you clicking on the stuff; they just want to get it out there. That for me that is politically a more insidious problem, even if the first category might be greater by volume.

Q4250  Julie Elliott: It is not just a political problem; it is a problem of destabilising democracy. How does Facebook define political advertising? How do you define what is a political advert?

Richard Allan: At the moment, this is one of the areas where we would really welcome a discussion with policymakers who are crafting the laws on this. At the moment, the system we have implemented in the UK says, “If you are advocating for a party or a candidate or talking about an issue that is in front of the legislature, we will classify that as political.” That is imperfect; we recognise that. It is our best guess.

Q4251  Julie Elliott: So, it doesn’t have to be from a political party?

Richard Allan: It does not, in this case, have to be from a political party.

Q4252  Julie Elliott: How do you monitor in the case of the UK, where political advertising is very highly regulated? We don’t have a system like the States. You are very well aware of our system. How do you monitor that kind of grey political advertising that is going on and is undoubtedly impacting our electoral processes in this country?

Richard Allan: The system that we are implementing right now —and, again, we will learn as we go—is where an advert mentions Jeremy Corbyn or Theresa May or something that looks political, it will get sent to a human being who will review it, and they will make a determination. If they think it is political campaigning, they will go back to the advertiser and say, “You need to register as a political advertiser.” That is the system that we have got at the moment, again, recognising that it is not perfect, but it is our best effort.

Q4253  Julie Elliott: How many people have you got doing that?

Richard Allan: We are going to have hundreds of people working on that.

Q4254  Julie Elliott: How many people have you got now?

Richard Allan: Recently, in the context of the US election, we had thousands. We don’t know yet for the UK, is the honest answer, because we don’t know how many ads are going to be surfaced. This is entirely new. We will have as many people as we need to review the ads that get surfaced to us.

Q4255  Julie Elliott: What percentage of Facebook’s moneys is being spent on dealing with this problem? And this problem, as I have said, is destabilising democracies. What percentage of your budgets is being used on this?

Richard Allan: I can’t give you a percentage, but I can tell you that it is significant. There are—

Q4256  Julie Elliott: Can you write to us and tell us?

Richard Allan: I can look. I am not sure that it will be a percentage. I can tell you the size of the teams—how many people we’ve got involved—and I will undertake to follow up on describing. It is a very major effort that is taking place inside the company. There are—

Q4257  Julie Elliott: But a percentage is not telling us any kind of sensitive commercial information. I am not asking you precisely to the last penny what you are spending on this. What I am trying to get at is how seriously as an organisation you are taking the issue.

Richard Allan: I will follow with information that I hope persuades you we are taking it seriously.

Q4258  Julie Elliott: Apart from having something referred to someone, what other checks and balances are you making about where the money is coming from that is funding the driving of this information, this advertising?

Richard Allan: To be clear, as Facebook, we will get the payment information of the person who pays us. We do not have visibility into where they eventually got the money from. If there are questions around this, and we know that this Committee has been looking at it in the context of some of the UK campaigns, we think the best way to explore it is through authorities like the Electoral Commission. We have a very good working relationship with the Electoral Commission here. We have done similarly in countries like Brazil, which had elections recently, and Mexico.

Q4259  Julie Elliott: If you are in the middle of an election period, whether it be a general election, a local election or a referendum in this country, there is a very small timeframe in which money can be spent, money can be raised and money has to be declared. Reporting something to the Electoral Commission or to any number of these bodies, does not fall into that window, so how are you checking where that money comes from?

Richard Allan: What we are doing is checking that the advertiser is in the United Kingdom. In fact, your political party will have to sign up. I have been through the flow. When you register and say that you want to run political ads, we will check whether your Facebook account is normally used from the United Kingdom. If it is, we will give you temporary access to the system. We will then send you a letter with a code. If you want to advertise politically, you need to put the code in. You also need to give us your driving licence or your passport. So we have collected all that information—a very significant barrier—

Q4260  Julie Elliott: But to be fair, big political parties are going to work within the rules. The people we are concerned about are the foreign agents trying to influence our democracy, which has clearly been happening. We want that to stop. So what checks and balances are you putting in place to ensure that you do as much as you can to stop that happening?

Richard Allan: The checks I have described apply to everyone who is trying to run a political campaign ad. If you are a Russian agent who wants to run a pro or anti-Jeremy Corbyn ad, when you try to run the ad we are going to make you go through that flow. If you have fake identity documents and a fake account it is possible, I can see, that you could get through that, but then we will also have significant information that could be used with the appropriate authorities to prosecute you. So we are making it much harder.

Q4261  Julie Elliott: Like what?

Richard Allan: We will have the identity information they provided. We will have the payment information they provided. We will have our own record of what we know about that account. If there is an investigation, it is going to be pretty unwise if you have hostile intent or mal-intent, to go through our flow and present all that information to us.

Q4262  Julie Elliott: Can I go back, finally, to something you said in answer to a question a little while ago, about if somebody puts something online that says someone’s corrupt and it’s not true? It was unclear how you thought that should be dealt with. How would you deal with that? How would you get that taken down? Reputationally, if it’s not true, it is very damaging to somebody. How would your company deal with that?

Richard Allan: If it is a simple claim and we are not able to establish the truth—

Q4263  Julie Elliott: If it is a claim that is absolutely provable as not true, if it was me, what would I do?

Richard Allan: You would come to us and if you had documentation to prove it wasn’t true, if you had a claim of defamation, we might well restrict access to the content because we think you have a valid claim of defamation.

Q4264  Julie Elliott: So you would have to go through the court process to get that?

Richard Allan: Not necessarily. The claim could be a claim that hadn’t been through court. We see the system in Brazil, under the Marco Civil, as working very effectively. There they have courts, and I think at election time the courts sit 24/7. If somebody has an issue, the judge issues an order and the order comes to us. It makes it really simple.

Q4265  Julie Elliott: But I am not in Brazil, so what would I do?

Richard Allan: You would send us a solicitor’s letter. Our legal team would look at it, and if we think that you have a colourable claim for defamation, we would be likely to restrict the content on that basis. That is not ideal, and we may then get a counter-claim from the other party. Our preferred system—I expect that the law will evolve over time in this direction; I think that this is actually the thinking in Singapore, but with a Minister involved—is that an authority in a country looks at a case and issues a direction, and we can follow that direction. If the person whose content it is wishes to challenge it, they challenge that authority. We do not want to be in the middle of a “he said, she said”.

Q4266  Julie Elliott: You don’t think you have a responsibility?

Richard Allan: We have a responsibility once it is illegal, but I actually do not think it is very democratic, frankly, that we, as a private company, should decide a “he said/she said” between two people. A court should decide that. That is the tradition around defamation, and I think that that should continue.

Q4267  Chair: We will have to move on. Nele Lijnen.

Nele Lijnen: Do you know the expression, “sending your cat”? I am from the Flemish part of Belgium, and in my language it means not showing up. We can state, dear colleagues, that Mark Zuckerberg has sent his cats to us today. To Canada, Singapore, France, Belgium and the United Kingdom, he has sent his cats. Earlier you stated that Facebook would like to stop being in a confrontational mode. Do you think that Mark Zuckerberg’s sending his cats—not showing up—today will help him to get out of the confrontational mode?

Richard Allan: I think engaging in the debate gets us out of the confrontational mode. I hope I am able to assist as a cat.

Nele Lijnen: No, you are sitting next to the cat. He sent his cat. I represent more than 6 million Facebook users in Belgium; more than half the population. Facebook was convicted in Belgium because it was following the activities of Facebook users on other websites without their consent. Basically, it is monitoring our activities on the web without our knowing. This is a clear invasion of our privacy and of the GDPR. Are you still doing it?

Richard Allan: I am very familiar with that case, which is long running. It concerns what are called social plug-ins—when a piece of Facebook is on a third-party website—and the extent to which that data can be returned to Facebook. We believe that that is a legal activity. It is actually now governed by the GDPR. I think that is accepted, including by the Belgian data protection authority. I am sure it will be returned to.

It has much wider implications. Again, it is not just a Facebook thing. Many websites around the world now fund their free content through advertising. The advertising providers—not only Facebook but all of them—use these cookies in order to create their business model. It may be that the European Union, through GDPR and the ePrivacy directive, decides to limit that. If it does, we will follow the law, but that will have profound implications. I do not want to hide that. Belgian publishers, like everyone else, depend on this advertising ecosystem, so it will have a profound implication. That is why we think it is worth testing in a lot of detail.

Nele Lijnen: I also represent millions of non-Facebook users. I think it was my colleague from France who earlier asked you some questions about what you do with non-users’ data. You stated, if I heard you clearly, that the non-users’ data is still sitting on Facebook servers.

Richard Allan: For a period of time. Again, just to be clear, there are two categories. If I upload my contacts to Facebook from my device, they will sit associated with my account for me to make friends with my contacts. There are also log files, which are technical files that any provider gets that say an IP address and a browser reached one of the properties that had Facebook content on it. Those log files are deleted periodically. There is a routine where those log files are removed. People argue that that is us holding non-user data, because it has an IP address.

Nele Lijnen: I would argue that too. It is not GDPR consent. I must ask you if you can ensure that Facebook is not doing this in Belgium or in Europe, or are you doing it all over the world? Is it specifically domestic, or are you doing it all over the world? We have that GDPR legislation.

Richard Allan: Our systems are, we believe, compliant with GDPR generally. I think that there are some interesting questions that have yet to be tested because the GDPR is new. We will fully understand what it means, what consent means, and what all these questions mean, over the next two or three years. I am confident that a number of the test cases will be around Facebook, but it is also important to note that the technologies we use are pretty industry standard, and so if GDPR and the ePrivacy directive—the other important piece of European legislation—find that certain practices are illegal, they will be illegal for us and everyone else. At this stage, we believe that everything that we are doing is compliant.

Nele Lijnen: Does Facebook have two faces? On one hand, it prints large advertisements in newspapers to tell us how it has changed, but on the other hand, it hires PR firms to smear competitors and critics, as we have learned from reporting in The New York Times.

Richard Allan: I discussed earlier that I thought the reporting was deliberately selective in terms of picking information that would be damaging to us. That is fine—people are perfectly entitled to do that. But on the specific piece about the firm that was hired, our CO has gone on record and said that that is not the way that he expects us to work, and a senior executive has admitted in public that he was responsible and we intend to make sure that that does not happen in the same way again.

Nele Lijnen: Facebook is telling us that it is taking new measures to provide users with tools and to combat fake news on its platforms. However, it is unclear how we will know if those efforts have been effective. How will Facebook provide transparency on the use of those measures and the technology and algorithms behind those tools?

Richard Allan: One of the important areas that we are working on that I think has come out of the helpful attention on the fake news subject from this Committee and from other Parliaments is a big push around academic study in this area. I think that is helpful. There are some studies that are already being done independently—I quoted a couple earlier; I think it was the University of Michigan and Les Décodeurs in France—and there are also academics who want to be able to work with us and we have a programme to work with them.

Going back to the other theme of the debate, which is privacy and third-party access to date, there are questions there and we need to make sure, in wanting to work with academics to understand fake news, that we do not end up sharing data in ways that people would find inappropriate. That is the challenge and one that we are working on with the academics themselves. Just as a reminder, Dr Kogan was an academic at the time that his application collected the data, so we cannot just say, “If you’re an academic it’s okay”. You cannot have a free pass, and we need to work out a protocol where we can work with reputable academics to understand the problem better.

Q4268  Chair: I think we will to move on because there are few other members who want to come in and we don’t have much time left. Pritam Singh.

Pritam Singh: As a matter of public interest and policy—today specifically—how is Facebook’s heightened vigilance and monitoring of anonymous users, especially new sign-ups and posts on your platforms, in the run-up to elections and during elections, post-nomination?

Richard Allan: On the levels set on Facebook, our expectation for the Facebook service remains that anybody who signs up does that using their real name and their real identity. That is core to our service. Other services operate differently and that is fine—we are not criticising them. The nature of Facebook is that it is intended for you to connect with your real family and friends, and if you are sharing photos of your kids with someone, you want to know who that person is. We maintain that policy very strongly. There should not therefore be any anonymous users, and if people are reported to us, and it looks like the account is not real, we will put them in what we technically call a “checkpoint”. The next time they try and log on, it will say, “Hey, we need some more information for you to prove your identity”. That is our design philosophy, which is that nobody should be on there anonymously, whether for political purposes or otherwise. We build systems to try to detect and prevent that.

Pritam Singh: Are there any other initiatives that Facebook is following up with, in the light of what has been happening and of the conversations happening in legislatures around the world about fake news, as to how Facebook can deal with the prospects of elections being—to put it mildly—tampered with?

Richard Allan: One of the things that we have implemented—you may have seen some press coverage on the US version of this in the midterms—is the notion of a war room around elections. The best way to describe it is as a taskforce that we now create for every significant election that comes up. That taskforce consists of security specialists, policy specialists, legal specialists and operational specialists. Their job is to understand the specific risks of that election, working with outside bodies, often in the country, and deploy whatever tools and technology we need to deal with those risks. That is really the strongest signal I can give you that—to go back to an earlier question—we have intent. Whatever resources those taskforces need to do the job are made available to them.

Pritam Singh: You said “significant elections”. Does that mean that countries that have a small footprint vis-à-vis Facebook’s 2.2 billion users would also fall squarely under the war room concept that you spoke of?

Richard Allan: That’s right. We are building it out. In an ideal world, it would be every election everywhere, all the time. I think our current resourcing allows us to look at all national elections, so if there is a national election in Singapore, for example, it would be covered. The area we are now looking at is whether we can also cover significant regional elections within a country. We are building it up. At the moment, when I look at the schedule of future elections, I am very confident that we would reach any national election. I know that Latvia is represented here, and we had a similar taskforce around the Latvian election. We are looking at every election at a national level, whether the country is big or small; the question is whether we can also extend that to regional and local elections.

Pritam Singh: Let me end with a final question, in the interests of time. Would Facebook consider working actively with local election authorities and even political party representatives to remove or flag posts that would compromise the political process and, by extension, the voters’ choice?

Richard Allan: Not only would we be willing; we think that is essential. If I have not said it enough, let me repeat that the people who decide if an election is free and fair are you and your authorities and the political parties. We want to do whatever is necessary in order for everyone to have the confidence that the election was free and fair, and we cannot do that on our own. We can make tools and we can work with you, but ultimately we need to engage with you in order to meet that shared objective that we contribute positively rather than negatively to the election in your country. It is essential.

Q4269  Chair: Thank you. Leopoldo Moreau?

Leopoldo Moreau: Gracias, Presidente. Yo voy a formular preguntas y comentarios en castellano, como una manera de corroborar la afectación que se produce a la democracia global como consecuencia de este tipo de prácticas y tropelías que se hayan cometido.

[The interpreter for Mr Moreau said: “He is going to formulate the questions in Spanish and I am going to translate them. He wants to speak in Spanish because he wants you to know that these practices and problems are being suffered everywhere in the world, so—”.]

Richard Allan: Ya está bien, que entiendo castellano.

Leopoldo Moreau: Ah, muy bien. En primer lugar, quiero decir que en mi país, cuando se conoció el caso y el escándalo de Cambridge Analytica, además hubo una denuncia de la filial de Amnesty International respecto a campañas que se habían llevado adelante—con la misma forma y la misma práctica, a través de Facebook—contra periodistas independientes. Esto significó que la Comisión de Libertad de Expresión que yo presido, citara a representantes de Facebook y de otras redes sociales a los efectos que dieran explicación sobre este tema. Y si bien es cierto hoy que yo comparto la frustración de todos nosotros por la ausencia del señor Zuckerberg, quiero decir que de alguna manera estamos contentos, porque nosotros en Argentina no logramos que Facebook se presentara—ni siquiera tuvimos el hombre invisible en la Comisión.

[The interpreter for Mr Moreau said: “In Argentina, we had a report from Amnesty International that suggested that there were large account farms with fake identities. The Commission that Leopoldo presides over called upon Facebook representatives. He regrets that Zuckerberg is not here today, but he wants you to know that he feels lucky, because in Argentina no one from Facebook showed up for the Commission.”]

Leopoldo Moreau: Por eso queremos aprovechar la oportunidad para saber con quién debemos comunicarnos de Facebook en Argentina, porque nos informaron que no tienen representación.

[The interpreter for Mr Moreau said: “Did you get that?”]

Richard Allan: So the question was about our representation in Argentina. First, I can only apologise. We do have a significant office in Argentina.

[The interpreter for Mr Moreau said: “You have an office in Argentina, but they wouldn’t give any answer and they said that there was a representative from Latin America who was supposed to answer and to show up, but we didn’t get any answers.”]

Richard Allan: I will take this now and find out what happened and make sure we have answers to you. That is all I can offer.

Leopoldo Moreau: Quisiera preguntarle qué políticas están pensando implementar para evitar la proliferación de noticias falsas a través de WhatsApp. Hace poco tiempo atrás, pocas semanas, en el país amigo de Brasil, hubo elecciones y hay reportes que se han utilizado esas redes—concretamente las redes del WhatsApp—para difundir noticias falsas. Esas redes representan un problema, porque es difícil detectar el origen de esas noticias y el impacto que tiene sobre las redes. Quiero decirle que en mi país nos estamos aproximando también a elecciones y hay grupos de inescrupulosos o empresas fantasmas que se están acercando a los partidos políticos para ofrecer el mismo servicio, es decir la difusión de noticias falsas a través de WhatsApp. Quisiera saber qué tipo de medidas se están tomando en relación a esto, sobre todo en la utilización de los grupos de WhatsApp.

[The interpreter for Mr Moreau said: “What policies are you planning to implement to prevent fake news proliferation through the WhatsApp platform as it has been widely reported happened in the Brazilian elections on a massive scale this year, especially since there is no way to trace the origin or impact of such content in the platform? In Argentina, some groups are already starting to offer this service—not directly from Facebook or WhatsApp but as intermediaries. What are you going to do about that?”]

Richard Allan: To be very clear, for those who don’t use the technology, messaging services like WhatsApp—WhatsApp is not unique—create a different set of questions for political communication from advertising.

To be clear on WhatsApp's rules, WhatsApp is intended as a person-to-person messaging service. Yes, you can create groups as well that are useful, but it should not be used for spamming people—

[The interpreter for Mr Moreau said: “I know it was born as a chat platform. Today, you have business APIs. You can use it for mass messaging. There are enterprises that are doing that, legitimately. It has been said that in Brazil, those tools—those APIs—were used for campaigns. Maybe the problem is that it is not being used in politics as a one-to-one communication tool.”]

Richard Allan: You are ahead of me. I wanted to explain. Where people are illegally buying lists of WhatsApp numbers—a member referred to shadowy companies. If shadowy companies are promising to circulate information across WhatsApp where they have collected lists of numbers, that is against our terms and that should stop. We are implementing proper business connections. With a proper business connection, that will be regulated by us, as in you would have to be behaving appropriately, legally and so on to do that. I want to be clear that the immediate solution here—I know in Brazil it was an issue—is that if people are aware of shady companies promising to do illegal WhatsApp marketing against our terms, tell us and we will take action against them. That should not be happening.

[The interpreter for Mr Moreau said: “But if it is easy to stop it, how come you did not stop it in Brazil? How are you going to stop it when it comes to open groups?”]

Richard Allan: We may get on to the Brazilian question, but where we were made aware of it, we did take action. We are conscious now—your raising the question is very helpful—that it is an area that we need to give more priority to. I am just in the process—we are building WhatsApp into those election taskforces that I talked about. We recognise that conversations around WhatsApp and how that is used—and other messaging services, to be clear—need to be part of the whole election protection effort.

Leopoldo Moreau: Que se apuren.

[The interpreter for Mr Moreau said: “If you are working on it, hurry up, because elections are coming real soon.”]

Q4270  Chair: We will have to move on. That sets up Alessandro Molon perfectly.

Alessandro Molon: Thank you, Chair, for the kind invitation and the honour of representing Brazil here. Like other nations, we have recently felt the hindering impact of disinformation and fake news in our electoral process. It has become a serious threat to modern democracy—perhaps the biggest—and it has been boosted by the internet. When the press emerged, it was without any rules. Later they had to be created, as well as for radio and television, with each country devising its own set of legislation. The internet also needs regulation so that it can protect institutions and through that, fundamental rights. In Brazil, we approved the Marco Civil da Internet, our internet bill of rights. The same democracy that allowed social media to flourish should now be protected by social media.

There is, however, a unique challenge. The internet does not respect borders, which poses a challenge to national legislation. There are two possible paths to be followed. Countries either establish authoritarian control over rights to be shared on the internet, which is a bad way, or they advance the creation of a set of rules common to all countries. We all choose the second path, and that is what we begin to do here today with the Chair. Internet companies and social media companies need to work in partnership with Governments and search for solutions that preserve our democracies and people’s rights around the globe, but unfortunately we still cannot see that happening. My first question to Mr Allan therefore is: what is Facebook doing to avoid the improper use and manipulation of its algorithms to illegally influence elections?

Richard Allan: There are two major changes. One is a set of changes we have made around the way that the algorithm searches information generally. The independent studies have shown that that has dramatically reduced the amount of information classed as low-quality. That is broadly helpful but specifically helpful in the context of politics. The other thing is that I point you to the recent announcement by my friend the cat here, who published a post on our approach to content on the platform. In Mark’s post, he described how we have recognised that there is a category of borderline content that is not banned under our rules, but is close to being banned. They may be getting excessive distribution and rewarded by the algorithm because it is sensational. Rather than rewarding it, we should reduce it. He is committed to a path to work on that. I think that will have a significant positive impact.

The final element, where we have been working hard, is working with fact checkers, third parties—not Facebook—deciding on the truth of statements. Not everybody is happy with that. There are different views about which fact checkers we should or shouldn’t use. But that is the other mechanism that we have been testing and think could be helpful. If a piece of content is marked as false that may reduce its distribution. All of these things are about trying to shift the balance between higher quality, less sensational content and lower quality, more sensational content in the algorithm.

Alessandro Molon: During the most recent Brazilian elections, only six days before voting day, 68 pages and 43 profiles were deleted from Facebook all at once, for violating spam and authenticity policies. Together they formed the most influential network supporting one of the presidential candidates, causing a huge impact on social interaction. In one month, those pages and profiles reached more than 12 million interactions. However, they were removed only after a newspaper brought this case to light. Why didn’t Facebook act before or bring this to the attention of the Brazilian judiciary, regardless of the candidate?

Richard Allan: This scenario is again not unique. There were similar situations in the mid-terms, as well. These scenarios are some of the hardest ones for us to deal with and, again, this is where we need additional regulatory guidance. The kind of activity that we see is people organising together through Facebook, in this case often using authentic accounts. So, they are not fakes but they have organised together to promote their content through our systems by pushing it out into multiple groups and on multiple pages.

That is gaming the system at one level. If it results in the content being seen as spam, we may act against it. But there is another interpretation that says, “Hey, that’s political campaigning.” So, it is about understanding where we draw the line. If your political party sends a message to 100,000 of your followers saying, “Please post this on Facebook,” that kind of feels legitimate. If it is fake accounts doing that, it is definitely illegitimate. There is a lot of activity in that grey zone in between, where it is not obvious to us that it is a problem. Sometimes on investigation we find actually there was some kind of inappropriate co-ordination or the people behind it were fakes, even though most of the people sharing the content were not.

We could talk about it in more detail, but this is one of the areas that I am thinking about at the moment where I am very sensitive to the fact that legitimate campaigning and illegitimate activity can often look quite similar to each other when carried out through networks of party supporters.

Alessandro Molon: Over the years, the company has changed its algorithms several times to privilege what seemed the most interesting and profitable for the business. Recently, Facebook started to privilege engagement and interaction on the platform, as you stated here today, Mr Allan. That is also achieved by fake profiles, which are specially used for this purpose also during elections. To what extent is the maintenance of fake profiles interesting to the platform? How can we be sure Facebook has a deeper commitment towards democracy than towards its profit?

Richard Allan: I want to be very clear on fake profiles. As I described earlier, the whole premise of our service is, uniquely on the internet, that if I join Facebook I make 100 contacts and I have a high degree of confidence that those contacts are who they say they are. There are other spaces that are more anonymous but Facebook is not. It is a real-identity space. We have estimated—and this is in our public filings—that between 3% and 4% of the accounts on Facebook at any one time are fake.

We make strenuous efforts to try to remove them and reduce that number. We have no business interest—in fact, it is entirely contrary to our business interest—if that number increases. Think about what you are sharing on Facebook. It is your family, your friends and personal information. If that number increases and people start worrying about who they are sharing with, that is completely contrary to our business interest.

Alessandro Molon: In Brazil, as our friend from Argentina mentioned, we were especially impacted by WhatsApp in the most recent elections, and just one month ago, it was widely used to spread manipulated content and fake news in the elections. During the process, WhatsApp banned more than 100,000 accounts in Brazil. What is Facebook doing to prevent WhatsApp from becoming a master simulation centre for fake news?

Richard Allan: We are now building WhatsApp into our thinking about election integrity, and that is obviously particularly relevant for countries such as Brazil or India, which is a very large WhatsApp-using country, and for more and more countries around the world. I think we need a pretty open discussion with policy makers about this. There is something quite different, I think, between regulation that affects a public space, like most of Facebook—Facebook pages are public, we host the content and it’s out there—and something that regulates people’s private one-to-one communications, or even small group communications. There are some novel challenges that we need to look at, but we do not want any of our services to be used for that kind of manipulative behaviour. We don’t think that is in the interests of the people who use the service, or the service itself. It doesn’t build trust. We want people to be able to communicate freely and openly; we don’t want our services to be manipulated by people inappropriately.

Q4271  Chair: Inese Lībiņa-Egnere.

Dr Lībiņa-Egnere: Being one of the latest speakers is never easy, but I represent the Latvian Parliament—you already gave a Latvian example. We take these risks seriously, knowing that our eastern neighbour, Russia, will be interested in the elections, and also knowing the geopolitical situation, and we are building a special set up within Latvian institutions, NGOs and media, to work to monitor the content of Facebook. Our co-operation with Facebook was very good, but that was in a pre-election campaign, and on an everyday basis it is not possible to do that. It is a lot of work, and we also hope that dialogue and co-operation will take place more on Facebook’s side. You said before that you are working on an architecture for political advertisement, and on investment and infrastructure and people. What does that mean, for example, for Latvia and the Latvian language? How will it affect our co-operation in the future, and when will it start to be more on the side of Facebook, not only from the Government?

Richard Allan: We are doing two things that are designed deliberately to be helpful to countries like Latvia. One is that we have been deepening our partnership with external experts and organisations such as the Atlantic Council which has threat monitoring centres and networks of people including, I hope, in your country. It works across all countries at risk. Part of it is that we partner with the people who, as their full-time job, monitor threats such as the threats you are facing. That intelligence goes back into our systems. There is obviously a focus on elections, but we would like to extend that beyond election time to the rest of the time. The other thing we are doing is building up our own internal resources and understanding, and we are extending our language capabilities, starting by ensuring that we have people to look at things specifically. A lot of the hope over time is that we will also be able to build different languages into our artificial intelligence systems, and those are the things that will allow us to work at scale. If, for example, a particularly damaging meme is going out in the Latvian language, the best way for us to spot that will be by using technology that understands that in that language those terms are a problem, and then surfaces it to someone who can deal with it. Those are the areas that I hope are helpful. I agree entirely that we need to move from this being an election issue to an all-year-round issue.

Q4272  Chair: We are running out of time, but two Committee members indicated that they wanted to ask one final question at the close. Can I ask that it is limited to a single question? I will call Catherine first, and then Charlie Angus.

Catherine Morin-Desailly: You have mentioned France twice, arguing that you had a new co-operation with the French Government about data discretion. In fact, let us admit, you had no choice if you wanted to restore your reputation and trust before the jury of public opinion, not only in our country but throughout the world. But that was before 14 November and before the new revelations, especially the fact that Mark Zuckerberg and some senior executives knew and had been warned about what could happen, the Russians having already interfered on social networks. How can we restore trust in those circumstances? You can make superficial promises, all right, but in fact the problem is deeper.

I also want to draw attention to the economic model of the internet—are you prepared to think about re-founding your economic model? We already know too well that fake news is of course due to political strategies and attempts to destabilise some countries, intervening in political complaints, but it also generates huge profits. This is it, one must admit, and it leads you to harvest more and more data from more and more people, in exchange for publicity and services, so there is a sort of vicious circle. Are you prepared—really, seriously—to work on re-founding this model?

Richard Allan: On your first question, on trust, I just want to clarify our expectations for the project that we are working on with the French Government. We will work with French regulators to share a lot more information about what we do in order to build trust. At the end of it, they will tell you whether they agree that we are a trustworthy organisation. I just want to be clear that this is not fake; it is real that we will be sharing a lot of information with them, and they will share that with you and the rest of the world.

Catherine Morin-Desailly: You are obliged to—you have no choice, if you want to restore your reputation.

Richard Allan: I am not going to disagree.

On the business model, this question is raised a lot. In an ideal world, people would give you money merely for the service—a system in which you have millions and millions of subscriptions coming in every month would be ideal. In reality, we have found that for the way in which most people want to consume internet services, they accept a model that the service is free at the point of use and they receive a certain amount of advertising. Our interest is in high-quality content and high-quality advertising. If the advertising or the content is junk, that is not sustainable—people will not stay with your service. We do not think that the advertising model will go away—not just for our service, but generally across the internet, that is the bargain that people are comfortable with—so our job is to make sure that the content and the advertising are as high quality as possible.

Q4273  Chair: The final question is from Charlie Angus.

Charlie Angus: I thank all my colleagues for allowing me to start off and to wrap up. I want to wrap up on a hopeful note, because I was so pleased with our friends from Ireland who got you to say how much you believe in regulation—we all certainly agree. But what we are regulating, in what seems to be your view and that of your friend Mr Zuckerberg, are the symptoms. It is easy to say, “We’ll deal with fake news and misinformation,” but you are the arbiter right now of the news cycle around the world, because of your video metrics. What we learned in 2014 is that you became aware that they were highly inflated and did nothing. You may say you are on a learning path, a journey, and you may get back to us, but I would consider that corporate fraud on a massive scale. If you are getting news organisations and selling them a model of video metrics that are false—over-promoting it—that is a form of corporate fraud. The problem that we have with Facebook is that there is never accountability, so I would put it to you that when we talk about regulation, perhaps the best regulation would be antitrust? People who don’t like Facebook could go to WhatsApp—but we have problems in South America and Africa, so we have to go back to Mr Zuckerberg, who is not here. My daughters get off Facebook and go to Instagram, but that is now controlled by Facebook. Perhaps the simplest form of regulation would be to break Facebook up or to treat it as a utility, so that we can all feel that when we talk about regulation we are talking about allowing competition and counting metrics that are honest and true. Facebook has broken so much trust that to allow you to simply gobble up every form of competition is probably not in the public interest. When we talk about regulation, would you be interested in asking your friend Mr Zuckerberg if we should have a discussion about antitrust?

Richard Allan: It depends on the problem we are trying to solve. If the challenge is around—

Charlie Angus: What if the problem is Facebook? That is the problem. We are talking about symptoms, but the problem is the unprecedented economic control of every form of social discourse and communication by Facebook. That is the problem we need to address.

Richard Allan: Unless you are going to turn off the internet, I am not confident that the people who we serve and you serve would be better off in a world where Facebook is not able, however imperfectly, to offer services where we have spent 15 years learning how to do it. If we started all over again with a different set of services—

Charlie Angus: We are not being luddites here. We are just saying that we need some level of corporate accountability. I am not saying we are turning the internet off; I am saying that perhaps, given Facebook’s unwillingness to be accountable to the international body and legislators around the world, antitrust would help us to ensure that we get credible, democratic responses from a corporation.

Chair: If I can just say a final word, we would also distinguish between the internet and Facebook and say they are not necessarily the same thing. Thank you very much, Lord Allan, for your evidence this morning. The Committee is now breaking and will sit again at 3.30 pm, when we will take evidence from the UK Information Commissioner.