Home Affairs Committee

Oral evidence: Fraud, HC 125

Wednesday 7 February 2024

Ordered by the House of Commons to be published on 7 February 2024.

Members present: Dame Diana Johnson (Chair); Lee Anderson; Kim Johnson; Marco Longhi; Tim Loughton; Alison Thewliss.

Questions 149-295

Witnesses

I: Paul Davis, Financial Crime Prevention Director, TSB Bank, Chris Ainsley, Head of Fraud Risk Management, Santander, and Woody Malouf, Global Head of Financial Crime, Revolut.

II: Philip Milton, Public Policy Manager, Meta, Alex Towers, Director of Policy and Public Affairs, BT Group, and Simon Staffell, Government Affairs Director, Microsoft.


Examination of Witnesses

Witnesses: Paul Davis, Chris Ainsley and Woody Malouf.

Q149       Chair: Good morning and welcome to the Home Affairs Select Committee. This morning we are carrying on our inquiry into fraud. The aims for the session this morning are to understand the ways that different sectors—today we are looking at banking and tech—are tackling fraud and protecting consumers. We aim to understand the impact of regulations and legislation around fraud across the different sectors, and to examine the effectiveness of the Government’s counter-fraud policies, including the fraud strategy and the online fraud charter. Our first panel this morning is very welcome. Will you introduce yourselves, starting with Mr Davis?

Paul Davis: I am Paul Davis, Fraud and Financial Crime Prevention Director at TSB Bank.

Chris Ainsley: I am Chris Ainsley, Head of Fraud Risk Management at Santander UK.

Woody Malouf: I am Woody Malouf, Global Head of Financial Crime at Revolut.

Q150       Chair: Thank you. I would like to ask a general question to start us off. Could you explain to the Committee what types of fraud occur through your banking services? How might that have changed over recent times? And what do you expect to see in future? Mr Davis, will you start us off?

Paul Davis: Thank you for running this inquiry dealing with such an important topic. It is one that TSB treats as a high priority. Indeed, TSB believes we have been a leader in this space for five years as a result of offering our fraud refund guarantee, which commits us to reimbursing all innocent victims of fraud. The reason we launched that links to your question.

I have worked in this space for approaching 20 years. When I started, most fraud happened through things like cheques and cash, and card fraud was in its infancy. What is quite profound is that in that time fraud has been transformed. It is now much more sophisticated. Criminals use techniques involving social engineering much more. We have seen the growth of what is called “authorised fraud”, where the customer, the consumer, is involved in some way and is often tricked into sending money. This, I think, is a real plague on the UK economy. It has a huge impact on victims psychologically and financially.

These authorised frauds are now one of the main types of fraud that we see. It was to provide protection for UK consumers that TSB took the brave decision five years ago to launch our refund guarantee.

Q151       Chair: Just a very small question. In terms of people being tricked into sending money, how many of those instances come from social media platforms? What is the split?

Paul Davis: The insights that TSB gets from our customers when they report fraud tell us that the majority of this fraud begins on social media channels. The three main types of fraud that impact our customers are purchase scams, which are the most common although they tend to be lower value; investment scams, which are a bit less common but tend to be very high value; and impersonation scams. For those three types, which, as I say, are the main ones, we see about 80% start on social media. Let’s not walk past the fact that when I say “social media”, the majority of them start on the channels owned by one company in particular, which is Meta.

Q152       Chair: Okay. Is that predominantly Facebook and Facebook Marketplace?

Paul Davis: For purchase scams, we find that Facebook Marketplace is the main place where those scams originate.

Q153       Chair: Mr Ainsley, is there anything you would like to add to that? Could you tell me why Santander is only at 54% of reimbursement, compared with, as we have just heard, 94% at TSB?

Chris Ainsley: I will cover the reimbursement question first. Santander are a founding member of the APP CRM. After the super-complaint in 2016, the PSR’s investigation into push payment frauds—

Chair: Just because there are a lot of acronyms there, could you spell out exactly what you are talking about? We are not all experts in this.

Chris Ainsley: Not a problem at all. The authorised push payment fraud contingent reimbursement model was set up originally by nine organisations in response to the Payment Systems Regulator investigation. Essentially, it forms a contingent reimbursement model for customers who have suffered this kind of fraud, and it balances responsibility between the two payment institutions involved: the one banking the victim and the one banking the fraudster. At that point in time, it was the most appropriate mechanism for most of the large organisations to join. With the PSR’s further work, that has now developed into what is essentially a mandatory reimbursement scheme that will be applied across the board in October this year.

Q154       Chair: Yes, but why are you only at 54%?

Chris Ainsley: Around 80% of all customers will get some form of refund. Obviously, we treat all those cases on a case-by-case basis under the policies that we set up under the APP CRM.

Q155       Chair: Okay. I still don’t quite understand how TSB can be at 94% and you are at 54%.

Chris Ainsley: TSB refund all their customers who are the innocent victims of fraud. We treat cases under the APP CRM, which includes the requisite level of care taken by the customer and the recipient bank.

Q156       Chair: Oh, so where the customer has not exercised due care, you take that into account and do not refund, whereas TSB do.

Chris Ainsley: There will be situations, on a case-by-case basis, where we assess the warnings we may have given the customer. The situation of the fraud is taken into account as well.

Q157       Chair: It is not very good for customers, is it? Mr Malouf, what would you like to say? You are slightly different, because you are an electronic money institution.

Woody Malouf: That is correct. I will comment first on the types of fraud that we see. It is similar to what Paul said earlier: we see a combination of authorised and unauthorised fraud. Effectively, unauthorised fraud is where the customer does not consent to the payment being pushed or pulled. That contributes to the vast majority of the fraud we see. However, it is of significantly lower value because of the controls we have in place.

We then see authorised fraud scams, effectively. This has been a growing problem across the UK. It has been a growing problem globally; my remit is global. This is where customers consent to pushing the funds across. They are socially engineered—they are tricked into parting with their funds. Over the past few years, like most institutions, we have tightened a lot of the controls around these types of scams. Interestingly enough, we have seen that they have become significantly more complex with time, so the chains of fraud have become longer. What that means is that funds do not move across one account; they move across multiple accounts. We also see a new type of fraud called the triangle scam. It is where our customer receives funds from a victim, but our customer is not the fraudster. The fraudster will have received goods in lieu of the funds that have moved across. We are receiving fraudulent funds, but our customer is not a fraudster; the fraudster has made away with the goods. That is something we have seen emerge in anger over the last six to 12 months.

We have also seen a move towards physical theft, at least in the past year to 18 months. Many of us use iPhones or Google Pay. Fraudsters—I say fraudsters, but really they are criminal gangs—are now operating in public places. They are stealing devices, getting access to all the codes because they are observing the victim, and then getting access to a large number of accounts on the victim’s device. We have seen that increase over time.

Q158       Chair: Sorry, you are saying that people are physically nicking phones, but before they nick it they are watching the person use the phone, so they can see what the person is putting in as a security code? You are saying that is increasing?

Woody Malouf: Absolutely. There are specific, local hotspots where these gangs operate.

Q159       Tim Loughton: Mr Davis, could you describe a typical fraud through Facebook Marketplace? How does the system work, and how do you pick it up?

Paul Davis: There are a number of ways it can occur. The most common is when someone is looking to buy something, and they will browse Facebook Marketplace to see the items available for sale, message the seller, and decide to buy that item. Facebook Marketplace does not have a payments channel attached to it. It is not like a website you might use to buy something, such as a high street shop or Amazon. You have to get bank details from the seller of that item. Our customers tell us that they get bank details from the seller, send them money, and then the item never turns up. We see this impacting all kinds of items—you name it. Trainers, electrical goods, games consoles, pets, cars, caravans—literally anything. You pay for an item, and it does not turn up.

Another type of scam can impact those who are trying to sell something on Facebook Marketplace. I have done it myself, where I have posted an item for sale, and almost immediately I am bombarded by people supposedly wanting to buy it. They initially express a genuine interest, but over time those conversations develop into all kinds of weird things, where they need me to pay for insurance or they ask me to hand the item over to a delivery driver and the cash will follow later. In fact, we recently did some work in my team where we used our personal profiles on Facebook to engage with 100 sellers on Facebook Marketplace. Our assessment was that about one third of them were probably fraudulent. We published that as a press release a few weeks ago.

Q160       Tim Loughton: I do not use Facebook Marketplace; I just go directly onto a website. For example, this week I have been in the market for a rotavator for my vegetable patch, which is a very important purchase. I messaged a company directly for their details, they came back, and I will buy that directly from their website. Is that much safer than if I go onto Facebook Marketplace?

Paul Davis: The advice I always give is that if you are shopping online, you want to be paying by card. First, if the seller will accept a card, that is a good sign that they are a genuine seller. Secondly, if something does go wrong, you get a lot of protection, as standard in law, from your card issuer. Don’t get me wrong, things can still go wrong in that transaction, but we certainly don’t see our customers reporting fraud at anything like the rate for purchasers on Facebook Marketplace.

Q161       Tim Loughton: So why would people use Facebook Marketplace?

Paul Davis: Facebook Marketplace is designed as a local collection service. A lot of the items are free, and some of them are in your local area. It can be a good way for people to get rid of unwanted items and give them to someone who does want them. Unfortunately, it is also very attractive to criminals, and they have seemingly infiltrated that platform to quite a high extent, with an intent to defraud consumers.

Q162       Tim Loughton: Are you convinced that Facebook is taking that seriously and doing anything rigorously to defend that site?

Paul Davis: There are a few things I would say on that. First, we at TSB are really encouraged by the tech charter, which was launched at the end of last year. If implemented, I think the provisions within that charter will make a significant difference to the scale of this fraud. The timescale in the charter for those provisions to be implemented is six months, and we are still in that six-month time period. For me, it is a waiting game to see that these new provisions and these new commitments from the signatories to that charter will indeed make a difference.

I have met Meta. We wrote to them, outlining our concerns, and we were very pleased that they came to our offices to talk about those concerns. My view on that is similar: they have talked about the actions they are taking, and my hope is that they will now translate that talk into action, with less risk for UK consumers.

Q163       Tim Loughton: Sorry to focus on you, but you obviously did a plug for your company having the best record in reimbursement. In your experience, has that led to more claims of fraud than you might have expected, or are people being more careful, or are your systems better? If I were a client of TSB—I am not—and I could recklessly go about Facebook Marketplace placing orders left, right and centre without really checking their validity, on the basis that I will get a 100% refund from TSB, it might lead to less responsible behaviour. Is that what has actually happened, or not?

Paul Davis: We launched the refund guarantee for a number of reasons, first because we thought that it was the right thing to do for victims of what can be a devastating crime. Secondly, though, we think it is the right thing for our business. I often say that when you don’t refund a victim of fraud, the cost is not zero. There can be lots of costly knock-on consequences, including getting more complaints, having more cases at the ombudsman and having to spend much more time as a bank talking about how you didn’t refund that customer as opposed to working to stop fraud happening.

However, I should be clear that the refund guarantee is not a blank cheque. We made a commitment to refund all innocent victims of fraud, and I have a team of skilled fraud analysts working for me who look at each claim to assess whether it is a genuine one; we pay out when we are satisfied of that fact. We tend to reimburse about 80% within a five-day period.

So, while we are very mindful of that risk that you mention, it is something that we believe we have the controls to mitigate. We haven’t seen an uptick either in our customers taking less care or making more claims. Frankly, I think that’s because nobody wants to be a victim of fraud in this space and when consumers are at the point of making payments, they are not really thinking about being scammed; if they were, they probably wouldn’t make the payment.   

Q164       Tim Loughton: Sure. How do you fund that full reimbursement? Is it self-funded, or have you got insurance schemes to cover it?

Paul Davis: Self-funded.

Q165       Tim Loughton: Entirely self-funded? So, what are the rough figures? You may not want to give the actual figures, for commercial reasons, but how big a percentage of your turnover has your refund claims amounted to?

Paul Davis: We tend not to talk about absolute values, as you say.

Tim Loughton: I am sure.

Paul Davis: We do spend a lot of time benchmarking our losses against other banks. UK Finance, our trade body, publishes really helpful statistics and we monitor them closely. TSB tends to have about 2% to 3% of all industry fraud losses of this type, which is less than our share of accounts.

My team—the fraud prevention team—accounts in total for about a fifteenth of TSB’s total cost base, and of that typically a little under half to around half will be on our fraud refunds. But as I say, I don’t think that you can just focus in on the cost of reimbursement. You’ve got to look at the total cost, because when you don’t refund the victim—

Q166       Tim Loughton: I understand that, but I’m just trying to understand something. Mr Ainsley or Mr Malouf, on the basis that Mr Davis’s company is more generous in terms of honouring claims of genuine fraud—I’m not interested in scam claims from people claiming they’ve been victims—and it does not appear to cost TSB an appreciable amount more, and there are some gains in terms of good governance and not having all the other paraphernalia to go with it, why are you not all following his example before it becomes mandatory?

Chris Ainsley: From our perspective, we needed to be part of a scheme that covered multiple large organisations, which is why we joined the APP CRM.

I think the other point that we need to remember is that our focus as part of that protocol is to prevent as many frauds as possible. Our key aim, if I can respond to what Paul has just been talking about with Facebook, is to try to make sure that at the point at which a customer is making their payment, we can give them the appropriate warnings. That has been our real aim in this fraud world for about the past six or seven years.

If I can bring out a specific thing that we have published in relation to social media, we have changed some of our online protocols when you are making a payment. Essentially, if you were purchasing your rotavator from Facebook Marketplace, for example, we would ask you what you were buying and where you were buying it from, and if it was Facebook Marketplace we would specifically warn you to pay only once you have seen the item in person and physically touched it. That is because all the claims we have had—and we managed nearly 20,000 purchase scams last year at Santander—related to customers who had never seen the item, which was bought on a site meant for local purchases.

Q167       Tim Loughton: Do you think you are preventing more frauds than Mr Davis’s firm is?

Chris Ainsley: We are different organisations. Our main—

Q168       Tim Loughton: I know that, but you are both banks. Do you think that you are preventing more frauds than TSB with your approach?

Chris Ainsley: I do not know the figure of our prevention capabilities compared with each other, but we prevent more than 60% of all our APP fraud at either the point of payment or the point of warning.

Q169       Tim Loughton: You have just talked about benchmarking against other banks. Surely how well you are doing on preventing fraud, which is a major cost drain to your bank, is quite an important figure to have.

Chris Ainsley: Correct. We benchmark our losses between each other. For some of that we can see which organisation it is and for some of it we cannot. Our share is around 10% of the market overall, compared with the figure Paul mentioned.

We are all in the same risk environment, as you say, and therefore we are all doing our best to prevent that. I do not believe that reimbursement will be the thing that will solve this particular problem; it will drive the right mitigant, shall we say, for the parties involved. One of the key things to remember is that a payment from a victim always goes to an account held at an organisation that has essentially banked the criminal, and as part of that process we need to ensure that is taken into account as well.

Q170       Tim Loughton: I had a constituent who was scammed out of quite a large amount of money for supposedly buying a flash car. One thing I do not understand is that that money must go into a bank account. It is quite difficult to set up a bank account in this country—if you are an MP, it is almost impossible these days—because for money laundering purposes you must prove all sorts of things.

However, fraudsters appear to be able to set up bank accounts into which money is paid; the victim pays that money in and then it is taken out straight away. Should it not therefore be quite easy to track down the beneficiary of that bank account on the basis that you have had to do a lot of due diligence before you allowed that person to set up the bank account in the first place? How are they getting away with it?

Chris Ainsley: I would say that in most cases we do not have a significant problem with what we call money mules—those people who have received the money. In most of those cases those accounts have been set up with genuine identities, so when we look at those cases those customers have allowed their accounts to be used by organised criminals—that is one of the core problems we see, especially with younger customers—or there may be situations in which that person has set up the account entirely to use it as a fraudulent recipient. However, actually getting the police to take action on that criminal is difficult at the moment.

Q171       Tim Loughton: How have you allowed them to set up an account? If they are not a mule, one understands that, although we have heard that a lot of students and younger people might effectively allow their accounts to be used. However, that would be suspicious activity if all of a sudden they have large dollops of money going into their account where a student normally may have only a small amount of money.

Would that be picked up as irregular activity and would questions be asked on a quasi-unexplained wealth order basis? I still do not understand how people who have set up these accounts fraudulently to receive large amounts of money cannot be easily traced. You are just saying that you tell the police about it, but the police do nothing.

Chris Ainsley: If I take that first question: in terms of inbound monitoring, we have been doing that for four or five years, very specifically for APP fraud. We generate many hundreds of alerts a day to look at the payments that come into our accounts, and we do exactly as you have said; we will query a payment if it looks unusual for that customer.

On your second point, if an account has been opened and all the right customer documentation and ID has been followed and the assumption is that the account is genuine, we would then need to ensure that the police could follow up on that case. As we all know, a very small percentage of these cases are taken on actively by law enforcement and managed through to the courts.

Q172       Chair: Thank you. Can I ask a question about the fraud protection team? Mr Davis, you mentioned that. Do you have a separate money laundering team as well?

Paul Davis: Teams at TSB responsible for fraud and financial crime prevention sit within my wider responsibility.

Q173       Chair: So they are all together.

Paul Davis: Correct.

Q174       Chair: Is that the same at Santander?

Chris Ainsley: We have specific fraud teams who look after customers and manage claims, for example, and separate teams to manage suspicious activity reporting. There is a crossover in their activity for things like money mules.

Q175       Chair: What about at Revolut?

Woody Malouf: They are all under my remit. We cover the two together. Of course, we have separate teams, but they are all managed in the same organisation.

Q176       Kim Johnson: Good morning, panel. Paul, you mentioned that fraudsters are becoming far more sophisticated. I would be interested to know what resources your bank has invested in combating fraud, and how long it takes to investigate fraud and provide reimbursement.

Paul Davis: Preventing fraud from occurring is a top priority for TSB. We use a range of different technologies and control types, and hundreds of people, to protect our customers from the risks of fraud. You are right that fraud has evolved to become much more sophisticated. We find that a lot of our customers have sometimes been heavily socially engineered by such criminals.

Q177       Kim Johnson: Can you say a bit about what you mean by social engineering?

Paul Davis: Sure. I saw a case the other day. Someone who had been a customer of ours for over 30 years had fallen victim to a romance scam. They had met the person online and had spoken to them by video. They had had a number of conversations. The perpetrator had a complex back story and tricked the victim using psychological techniques to make them believe that they were someone they were not and to convince them to hand over their cash. The fraudsters use complex techniques to convince people to do things that they later deeply regret. We see that as a common theme across many scam types.

Q178       Kim Johnson: Some would suggest that this is becoming a bit of an epidemic, particularly romance fraud. As a bank, if you see it happening, what level of communication do you have with customers to alert them to the fact that this could be romance fraud and to try to get them not to give money away?

Paul Davis: That is one of the main remits of my team. We have been focusing on it and doing a lot of training for our teams to upskill them on how to do that even more effectively. We find that it becomes increasingly challenging for us to counteract the methods that the criminals are using.

Q179       Kim Johnson: From the work that you have done in your bank, can you say whether the criminals are part of organised gangs and crime groups or whether they are just individuals—chancers—trying to make some money?

Paul Davis: Any evidence I have comes from speaking to our victims, so I have only a partial view of how these networks operate, but it is clear that the criminals are not lone actors. This is organised fraud. Criminals specialise in different aspects of the organised fraud, and work together, in a similar way to how a corporate or a company would in the UK, to steal our cash. It is clear that they operate around the world. It is not something that a lone individual can do in their bedroom in the UK.

Q180       Kim Johnson: Thanks, Paul.

Chris, you mentioned prevention. It has been suggested that this type of fraud is at epidemic levels. I am curious about the work your bank does, for example with the police and Action Fraud. Do those relationships need to be improved?

Chris Ainsley: I will start with the work with the police. There is the Dedicated Card and Payment Crime Unit in the City of London police, with which we have done a huge amount of work. I have worked in the industry for as long as Paul; one of my first jobs was to help set up the banking side of that unit. We have a break the spell team, which is very similar to the team that Paul was talking about, whose job it is to take on those much more complex cases.

We talked about giving someone a warning about buying something on Facebook. That is relatively simple to do: “Don’t do it,” or, as Paul says, “Use a card.” There will be situations—romance scams are an excellent example, because they take place over time, along with some of the investment frauds—where people are almost too scared to stop giving their money because they are going to lose it. They make multiple payments over time to all sorts of different investment scams. We have a good record of getting those into certain police units where we can almost push them through the process to make sure that they are taken into account. Or we can wrap those cases together, as you pointed out, as organised crime.

On a case-by-case basis, though, we cannot really imagine law enforcement investigating almost every single case. That really is one of the core problems that we have: the ability to assess each of those cases, wrap that intelligence together so that the police can see those networks as well as we can, and combine some of our data to do that. That is something that really needs to be improved in the industry to get that view of the anatomy of the fraud.

Q181       Kim Johnson: Would you say that this type of crime is underreported because of the psychological damage and some elements of shame in terms of being scammed at this level? Are you seeing any examples of that?

Chris Ainsley: It is very difficult for me to answer the question, because people report their frauds to us in different ways. If someone has had their card lost or stolen, that is not something they would think twice about reporting to the bank. I also feel that with a lot of the scams we see, where someone has lost almost all of their money and all but knows they were dealing with a fraudster, most of those people will come to us and tell us.

Romance fraud is a very good example. Victims feel emotionally damaged by it, and there is a huge amount of that fraud, which is a problem. I do not believe all of those cases will be reported to us. There might also be vulnerabilities that would cause those people not to come to the bank. As part of our process, we have had conversations with such customers.

Some of those have been published in the press as examples. The victims just do not admit that they have been the victim of the fraud until they might have had 10 or 12 conversations with us. That might even include us proving that the pictures they have of the person are not real by showing them the stock images from the internet, and that kind of thing. So I certainly do not believe they are all being reported, no.

Q182       Kim Johnson: I know AI is increasingly being used. Woody, you might want to respond in terms of how information is circulated to customers to raise awareness of these types of scams. Being forewarned is forearmed, isn’t it?

Woody Malouf: Absolutely. The vast majority of our emphasis is to intervene at the time of the transaction because we believe it is most effective. We have been piloting different interventions using media that resonate more with our customer base—social media and things like that—to warn them about some of the most common tactics they would see. However, we find it is most effective at the time of the transaction because that is the point at which we have the most information and we can actually provide the most targeted intervention.

The simplest way to think of fraud is that you have to detect it and then you have to intervene to break the spell, especially when it comes to authorised fraud. Breaking the spell is where we find the biggest challenge these days because of the increased sophistication of the types of scams that our customers are seeing.

Kim Johnson: Thank you. Chair, those are all my questions.

Q183       Marco Longhi: I am quite impressed with the TSB approach to this, but I have a genuine concern about systems effectively just guaranteeing everything for consumers and users. Although that can give peace of mind, I genuinely struggle with this notion that unless all people are prepared to take some measure of accountability for what they do, we are potentially fuelling a greater increase in fraud occurring again and again and again.

I think you said earlier that the data you have does not actually show that, but I would have thought it would have been human nature for people to just think, “Well, it is okay for me to do whatever I want” and just carry on. Wouldn’t you say that is the case and that it would be good for people to feel that they need a measure of ownership of the risks they are taking and understand them?

Paul Davis: I share your concern. The purpose behind our fraud refund guarantee is not to do that. For fraud, the things consumers need to do to keep themselves safe are relatively intangible. It is not like burglary or car theft, where we all get the things we have to do and it is common sense. We can touch our keys and feel the locks and know when something is secure. Fraud is different. It is a much more emotional crime, particularly today with how much social engineering happens.

In the main, when consumers make payments, think about buying something online or talk to someone on a dating site, they are not thinking about fraud. For that reason, I think our fraud refund guarantee provides appropriate protection. It provides refunds to innocent victims, but it is not a blank cheque. To ensure people are eligible under our refund guarantee, clearly we investigate cases to confirm that customers haven’t strayed too far. As you have seen from the statistics, the net result of that is that we reimburse well over 90% of claims. We think that is the right thing for victims overall.

Q184       Marco Longhi: Chris, did you want to come in on that?

Chris Ainsley: One of the core things at the heart of what Paul was talking about is that there is an ecosystem. The protections when you buy something on your card are embedded in a system that involves the merchant or the store you are using the card at and the banks that are involved in the process. That didn’t exist in any way, shape or form for our faster payments system—bill payments or whatever you want to call them—where we move money between two accounts. We needed to bring some consumer protection into that particular payment scheme, and that is what the Payment Systems Regulator has been focused on.

We need to remember that it is not just a payments problem. The payment exists when you go to the bank and it moves through the system but, as we have said, the fraud does not start when you open your banking app. There is something already happening to you. You might have got into a relationship with a criminal, you might have seen an investment online or you might simply be trying to buy something. That payment may not be backed by a scheme that protects the customer, because it is almost like me giving you £10 in cash, but it is embedded in a system. It is part of the bank making the payment and part of what you have done on social media.

We need to bring some of those other people in the ecosystem into the prevention sphere. We need to be able to say that the platforms need to take action to reduce the number of scams that are posted, the number of adverts promoting cryptocurrency scams and the number of fraudulent car listings that are posted.

Like Paul, we have done a lot of research—for example, we have published things about cars on Facebook Marketplace. In just half an hour, my team found 4,000 fraudulent adverts for cars. We went to the press about that because it was so significant. For almost every single vehicle you could search for, you would find a scam ad at the top of the list. It is not just about what happens when the people come to pay for the vehicle; something has to happen further up the chain to prevent that fraud from taking place. Then it is not just about people taking risks, because the risk will not be there for them to take.

Q185       Marco Longhi: Woody, do you want to contribute any further?

Woody Malouf: I agree with what they both said. One point that I want to make is that it would be very useful to have a simple-to-understand expectation on the consumer so that we can protect them. We try to encourage our customers to be as truthful as possible when we are going through any type of intervention so we can give them the most effective intervention. That is one thing that we always seek to do in any of our interventions. The thing about fraud is that, although of course you have the financial loss, there is more to it. There is an emotional element to it too. That is why the vast majority of our focus is on preventing it in the first place.

Q186       Lee Anderson: Paul, you said earlier that 90% of people get their money back if they have been a victim of fraud. That may seem the right and decent thing to do, but where does that money come from?

Paul Davis: It is funded by TSB from our revenue, and it ultimately comes off our profits.

Q187       Lee Anderson: So it comes from the customer.

Paul Davis: It comes from TSB’s profits.

Q188       Lee Anderson: It comes from the customer.

Paul Davis: Yes.

Q189       Lee Anderson: Okay. I think that is a little unfair. I will move on to a case that I had in my constituency. An old lady, who was maybe 90, kept receiving scam phone calls to say that there was some fraud going off in her local branch, and that she had to remove all her money from her account and transfer it to this other account. She was also told by the scammer, who kept ringing, week after week, that everybody who worked in that branch was corrupt, so she could not trust anybody in the branch—the teller, the manager, anybody.

She had to go down to the bank with this dodgy sort code and bank account number and ask to transfer £10,000, I think, to this account, wherever it was. The teller, the cashier, said it was a scam; the bank manager had her in and said it was a scam; but she had got it in her head that they were all part of this plot and committing fraud. Eventually, she transferred the money, and there was nothing that the bank manager could do. Since then, she has got half the money back—but that is customers’ money that she got back.

It struck me that there was an easy way to stop that happening. If vulnerable or older people—it is normally the vulnerable and the elderly who get scammed, as I have seen in my own surgery—had a counter-signature on their account to authorise a transfer, such as a family member, a son, a daughter, someone trusted, or even the bank manager, they could have stopped it. Do you agree?

Paul Davis: I have seen lots of similar cases, and I agree that they are distressing. It is something that we have been working hard on with our branch teams. We also partner with the police on something called Banking Protocol, which is incredibly effective at stopping that type of scam, because the police effectively provide an emergency response to visit the branch. In those cases, we find that the police turning up can be what jolts the victim out of the state that they have been put in by the criminal.

The point you make about customers being able to have an extra person added to their account is an important one. That would not be unhelpful, but I think a constraint on its usefulness would be getting the word out that such a service exists. We effectively offer it today through powers of attorney and, of course, joint accounts; we see some take-up of that from our customers, but not much. If there were more, it would stop more fraud from occurring.

Chris Ainsley: We have been working on what we might call almost a second-party notification protocol, which we have used to good effect. Mainly, that is to do with other forms of vulnerability, not directly to do with fraud. Similarly, obviously, we offer joint accounts, Banking Protocol and the things that Paul mentioned.

One interesting thing, which I mentioned earlier, is our break the spell team. One of the key things they do for customers who are in similar circumstances is to involve third parties where we can. Often we cannot have a conversation with the customer, because they have been told that everyone they speak to in the bank is involved in this fraud, and sometimes the caller pretends to be from the police or suchlike.

Sometimes that is difficult for us from a legal perspective, but involving a family member or someone else we can see is connected to the customer is one of the key tools we have. As Paul says, trying to get that registered with the customer in the right fashion ahead of time will always be our key challenge, but it certainly is something we are working on.

Woody Malouf: That is something we have considered internally—how to provide that by digital means. There are a lot of considerations about consent and providing access by various people to another person’s funds, which make it quite complex to operationalise. Back to Chris’s point, by far the most effective thing, or where we put most of our investment in breaking the spell, is involving third parties. We are looking at pilots to involve third parties wherever we can, with data-sharing agreements and that aspect of course considered.

Q190       Alison Thewliss: To pick up on that, it occurs to me that your three banks are quite different in terms of customer profile. Is there something that you in Revolut have done as a new start-up to go at this from the start and ask how to safeguard customers in a way that might be different from what TSB and Santander do?

Woody Malouf: I cannot comment on TSB and Santander of course, but—

Alison Thewliss: I just wondered if it is part of your thinking—

Woody Malouf: It absolutely is. We invest very heavily in machine learning—AI, effectively—to protect our customers. We have taken a machine-learning-first approach, because we believe it is much more effective at detecting these things and it responds more quickly to emerging trends. It will also protect us against the AI that is attacking us.

The best way to find a deepfake is with computer vision machine learning rather than necessarily a human, because it is designed to trick a human, so we have taken a machine learning AI-first approach, which has been enabled by the fact that we are a younger firm. We think about how we protect our customers as a product that we offer, and just like any other product, we measure its effectiveness and have key controls around it. We put a lot into the design of the controls to break the spell. We have interesting, innovative interventions whenever you make a payment, depending on the risk—things that are not traditionally offered because they are probably a little bit more complex to embed when you are not starting from scratch with that in mind.

Q191       Alison Thewliss: Is there any risk there? If I wanted to set up an account with a traditional bank, I would come into a bricks-and-mortar branch. What safeguards do you put in place to prevent fraudsters or scammers from setting up accounts, given that they do not need to speak to a human being in a bank branch?

Woody Malouf: Like every other financial institution here, we are bound by the same regulations. We go through quite a rigorous “know your customer” process, which requires identity verification. We use third-party data to profile the customer who is setting up the account, and we go through traditional means of doing it. Again, we also embed quite advanced machine learning in the process to profile the risk of our customers, rather than using more traditional rules, which are a bit simplistic. Like everything else, we measure the effectiveness of our controls at onboarding time. We want to avoid onboarding bad actors or fake people who “don’t need to speak to someone”. The risk is that they are not who they say they are, but we don’t really see that materialise much anymore. It is a relatively low rate compared with the number of people we onboard whom we suspect of a money laundering offence or a fraud offence.

The bigger risk is people who then share the account—the money mules, effectively. That is a bigger problem that we have over here but, again, I don’t see that risk as being heightened, given the fact that we offer digital or remote onboarding, because you can set up an account in the branch and give access to the account to anyone else outside that. Having that physical link is not necessarily the protection, given the compensating controls we have put in place.

Q192       Alison Thewliss: Can I ask each of you how you identify unusual activity in accounts? To give an example, my husband’s bank account got emptied by a fraudulent transaction on a holiday six months previously. All of a sudden, there was a small transaction, a bigger transaction and an even bigger transaction before the transactions were stopped. How quickly can you identify that kind of suspicious activity coming from somewhere abroad in order to stop it going further?

Paul Davis: The challenge we face in detecting unusual activity is that normal activity often looks the same. Thinking about my own bank account: whilst it receives a salary, direct debits come out and I use it for groceries or whatever, from time to time I also do unusual things, like buy a car or pay a tradesman. Those transactions can sometimes look pretty similar to those that are related to fraud, and that is the challenge for us. It is a needle-in-a-haystack-type problem. We do it using pretty smart technology. At TSB, we partner with a number of different external companies which provide technology based on machine learning and artificial intelligence, using lots of computing power to try to spot those tell-tale patterns. We spot about two thirds of it. Clearly, we are always working to increase the rate of detection but, like I say, the key constraint is that it is inherently hard to spot.

Chris Ainsley: It is a very similar story, and it will be for most large organisations that process many millions of payments. What I would say is that, especially if you are a card issuer that has been processing millions of card transactions, that data has always been a very rich source for machine learning. For 15-plus years, models that use what we would now describe as AI to profile your card spend—Paul’s groceries versus Paul buying a car—have been in place. That would traditionally have triggered an alert to ask, “Are you using your card in wherever today?” The big issue with that historically was that it was quite a binary conversation. It would be, “Hello. Have you just been in Tesco?” and you would say, “Yes,” and all was good. If you said, “No,” there was something that needed to be actioned. Obviously, the thresholds for that can be amended depending on how many people you have or how you contact those customers.

In the APP fraud world, this push payment fraud, the customer has done most of those payments themselves. So you cannot, for example, tell the difference between a card that has a chip and PIN versus a card being used on the other side of the world with just a magnetic stripe. You have the customer using their own mobile phone that they have used for years, or using the computer at home, so you have to identify the unusual nature of the transaction, such as moving money to a third party for the first time, and you cannot have a binary conversation that asks, “Was that you?”

You need to be able to do more than just that unusual activity monitoring. You need to be able to say, “Why are you moving the money? How are you moving it? What is your intent with it when it goes where it is going? Have you started a relationship?” They are very different conversations for an organisation to have, from, “Are you using your card in Swindon this morning?” to, “Are you in a relationship with someone you have just met on the internet?” They obviously cannot be handled in the same way as sending a text message saying, “Was this you?”, which is very easy. You cannot do that with this fraud type, so it is hard to complete the circle: spotting the unusual activity, which is great, and then preventing the fraud that could be happening. It is really difficult but, as Paul said, we are stopping a huge percentage of that before it ends up in the payment system.

Woody Malouf: There is one thing that I would add. Like most institutions, we assess the risk of every single transaction that leaves our customers’ accounts. We use machine learning, some of these tools, to differentiate between what we suspect might be an authorised payment versus an unauthorised scam, because the type of intervention we would make is very different. The one part that would massively help us is further enriching a lot of our tools with more data. Right now we are limited to the data that we have within our organisation. As a relatively newer company, a lot of our infrastructure is cloud-based, so we have access to a lot of data, not just associated with the payment itself and the card, but about the customer and their associations, so we can understand and profile that risk quite well. But what would massively help us even more is understanding the risk on the other side.

We have spoken about where the majority of fraud originates and getting data from there. What is the ad that you have clicked on? How long has that seller been there? That would massively help us, not just in detecting the unusual activity, but in the intervention itself. Like TSB and Santander, we do detect the vast majority of fraud that goes through us. The harder part, we find, is when we intervene.

Q193       Alison Thewliss: Moving on to that kind of intervention and what happens next, I recall we looked at this in the Treasury Select Committee when I was on that Committee, and there was lots of talk about suspicious activity reports and the countless numbers of those that were filed, and then what happens after that. Chris, you talked about the difficulties of handing things over to the police and then things getting taken up. Do you think that that is the missing link at the moment? All of this is going on, banks are out of pocket as a result, and those perpetrating the frauds are often not caught.

Chris Ainsley: I think that that is the end result. Fraud is 40% of all reported crime. Trying to arrest our way out of that problem is going to be exceptionally difficult—practically impossible. What is also practically impossible is for each of those organisations, a financial organisation of any type, to be able to solve that problem on their own. It is an ecosystem. A whole range of things are driving those frauds towards us. Some of that is based on the things that we give people, the cards, which can be stolen, and things like that. That is physically happening.

We need to be able to prevent more of that fraud in the first place so that the numbers that come through that process are much lower. With nearly 60% of all the frauds that we are dealing with being these purchase scams, where money is going between two people and that person was approached on social media, if we were able to reduce that by half, just for my organisation that would reduce the number of cases in a year by 7,000 or 8,000. We need to be able to find the best solution to reduce that from being a problem in the first place—reduce someone’s feeling that they have become the victim of something, but also reduce the financial impact across the whole of the system.

Q194       Alison Thewliss: Prevention is definitely an important part of this. I will ask again: do you think that there is enough done on the other side to catch those that have perpetrated these frauds against people?

Chris Ainsley: At the moment, if action is taken there are good results. One key problem is that we need to be able to connect the dots between many hundreds of cases, to give them to the law enforcement arms so they can take out in one fell swoop, for example, a gang of organised criminals causing those thousands of cases. There is just not enough of that happening.

Q195       Alison Thewliss: Is that a resourcing or an expertise issue on the law enforcement side?

Chris Ainsley: I cannot comment on the resourcing side; I am not an expert on that area. I would say that, over my 22 years in the industry, I have been involved in training police forces in certain types of financial technology. As Woody mentioned, I was involved when Apple Pay was being first rolled out, giving the police insight into how that technology worked.

There is a need for us to work more in step, to ensure they understand the intricacies of the financial part of the process, rather than just the criminal, because that could be true of complicated social engineering scams or cryptocurrency fraud, for example. There is a need for us to ensure that that knowledge is shared, and to understand the ins and outs of the crime itself.

Q196       Chair: You mentioned cryptocurrency, and we have not talked much about that. According to Lloyds Bank, there was a 23% increase in cryptocurrency fraud in 2023. I would like all of you to say something about cryptocurrency, how it occurs and how fraudsters are benefiting from it. Mr Malouf, do you want to start?

Woody Malouf: At Revolut, we see our customers moving funds to crypto exchanges. We also offer a crypto withdrawal service, where they can withdraw their own cryptocurrency onto the blockchain, within controlled limits. The vast majority of scams we see occur when customers move their funds across to crypto exchanges. Typically, they are reported as investment-type scams.

Of course, there has been a lot of hype around cryptocurrency prices going up or down. People are typically convinced via fake ads encouraging them to invest in a platform. They move their funds to the platform and further on to an address in the control of the fraudster or criminal group, imagining that they are getting returns. Of course, the returns do not exist. When they come to try to withdraw, that is when they realise it is a scam. The vast majority of cryptocurrency scams we see from our customers are those crypto investment scams.

Q197       Chair: Okay. And that is growing?

Woody Malouf: That has grown. The one thing to highlight with fraud is that different typologies grow at different times. It is possibly because crypto is topical; it might be the best thing to put in your fake ads, such as, “Look at this really exciting new thing to invest your funds in.” It could also be because funds moved to a crypto exchange become a store of value and can move across in one transaction. The vast majority of these scams do start somewhere; there needs to be a hook, and that hook is always upstream, so that is where we see it.

Q198       Chair: Would either of you like to say anything about cryptocurrency?

Paul Davis: If I can start, TSB took the decision over two years ago to block all payments to cryptocurrency exchanges. We did that in response to incredibly high levels of fraud. My experience is now relatively historic because I have not allowed any of those payments for some time.

It is worth noting that, before we implemented that block, if our customers purchased some cryptocurrency via faster payments, one in five would ring us later and tell us that they had been defrauded, which is a staggering rate of fraud. If I were to take the rate of fraud for customers sending money to Santander—as Chris is sitting next to me—it would be thousands of times less than one in five payments being fraudulent.

I have only had one inquiry from any crypto platform questioning that strategy. I spoke with that company, which is based in the EU. One thing that struck me about that conversation with my counterparts who worked in fraud prevention was that when I asked them what tools they used to detect unusual behaviour and help their customers not fall victim to scams, they didn’t have any. I then realised it was a dumb question for me to ask. Of course they don’t, because that is what crypto is all about: it is decentralised, there is no central authority, and everybody is on their own. That is a really key factor in why the rate of fraud is so high. Chris, Woody and I have said what steps we take to stop fraud happening to our customers, and the sense I got from that one conversation was that none of that happens within the crypto industry.

Chris Ainsley: Like TSB, in 2022 Santander publicly announced that we were restricting payments to cryptocurrency exchanges, rather than specifically to purchase cryptocurrency. We limited those transactions to £1,000 per payment and £3,000 in any calendar month, very specifically because, in 2022 and towards the end of 2021, we saw the rise that Lloyds described last year. The limit that we have placed has reduced the amount of fraud going to cryptocurrency by 97%. The value of that was estimated to have been tens of millions of pounds in a year, which would have gone to those cases.

What we have not hit on, though, is specifically how that fraud works. A consumer will see an advert. Most of those adverts are celebrity-backed cryptocurrency scams, usually on social media platforms. Specifically, the Meta ad library is the best place to find them. They will feature Elon Musk, Martin Lewis and various other people, and they will essentially put you through to a process where you will apply for a trading account that promises significant returns. The one we used in our example said you could make £9,200 in a week; that was the example that we put in the press.

The customer is then manipulated into setting up an account with a cryptocurrency exchange in their own name, probably going through KYC and handing over documentation. Then, they essentially move their money, in fiat currency, from their incumbent bank into that wallet, and then they will potentially be manipulated by the fraudster into turning it into crypto in various ways. That will then vanish off to wherever it goes. That is the exact way in which every single one of those cryptocurrency frauds has worked, apart from the occasional purchase scam in which someone thought they were buying crypto that did not exist.

It is formulaic; it happens every time. When we look at those ads and where they are placed, they usually begin on social media. When we look at the value of fraud coming from social media in any one year, those adverts account for a huge amount of that value, or they certainly did over the last couple of years.

Q199       Chair: Thank you. We are running out of time. I would have liked to put a few more questions to you, including particularly about a statistic we have heard—that over 70% of fraud originates from overseas. I wonder whether you might be able to write to us about how you deal with international partners and address the very large number of frauds coming from overseas.

Finally, could you each say what one thing you would like the Committee to recommend in our report? What would be the one thing that would help in this field? Mr Davis, would you like to start? Be very brief, because we have to move on.

Paul Davis: For me, the key thing that could make a difference here is greater action from the technology sector, and Meta in particular, to stop fraudulent content being put in front of UK consumers in the first place.

Chris Ainsley: That would have been my No. 1, so I will go for a different one. I think that what we need to recommend is a proper reporting process for the victims of crime. We have talked about the complexities of having suffered something that happened on Facebook, and having to go to my bank or maybe also to Action Fraud. We should really be looking at recommending a unified fraud reporting protocol that gives the best outcome for a consumer when they become a victim.

Woody Malouf: Chris and Paul have given my No. 1 and No. 2, so I think I will just add to what Paul said. Specifically, we need greater action on, but also a framework for, sharing liability or at least having aligned incentives between the source of fraud and us—we are effectively the last line of defence to stop the fraud occurring. A framework to align incentives and work together that can result in increased data sharing or potential liability sharing would be my key ask.

Chair: That is very helpful. Thank you very much for your time this morning. That will certainly help us with our deliberations in putting together our report. Thank you very much.

Examination of Witnesses

Witnesses: Philip Milton, Alex Towers and Simon Staffell.

Q200       Chair: Good morning. Perhaps I could get each of you in turn to introduce yourselves. Mr Towers, would you like to start?

Alex Towers: I am Alex Towers. I am the Director of Policy and Public Affairs at BT Group.

Simon Staffell: Good morning. I am Simon Staffell, Government Affairs Director at Microsoft.

Philip Milton: I am Philip Milton, Public Policy Manager at Meta leading on fraud. Thanks for inviting me here today.

Q201       Chair: Before we go into questions on fraud, I just want to ask a question about a subject that is in the news at the moment, and that is access to smartphones for under-16s. This has obviously come about because of the tragic death of Brianna Ghey and the comments of her mother over the weekend. Would any of you like to comment on what Esther Ghey said about access to smartphones for under-16s?

Alex Towers: It is obviously a really appalling situation, and those were incredibly powerful comments from the family, because it is a difficult question. Technology is increasingly part of everyone’s everyday life and something that many people rely on, including to keep their children safe, so basic access to phones and connectivity is crucially important. How we as a society get the rules and boundaries right about who can access which services is really important. A major piece of legislation—the Online Safety Act—has just gone through, which addressed some of these questions, including the right ages for children to be accessing different apps. In a sense, Parliament has already had a good long hard look at this and has decided that, yes, more needs to be done, while stopping somewhat short of the measures suggested in that interview. I can see complications with introducing a wholesale ban, I have to say, but we have to keep looking at this issue, because the harms that come from some of the worst content out there are clearly really dreadful.

Q202       Chair: Mr Staffell, would you like to say anything about this?

Simon Staffell: I don’t have a great deal to add to that—it was well put, and really important points have already been made. Clearly, it is a really terrible case, and our thoughts are with the people concerned. The wider issue of children’s safety online is clearly incredibly important and something that we take very seriously. We actually published a piece of research yesterday, looking globally at the issue of child safety online. One of the pieces of work that came out of that is around parental controls and a toolkit, because parental-control technology, and how parents can allow children to use technology in a safe way, are incredibly important.

Q203       Chair: You have done this piece of work looking at keeping children safe, so do you know whether anywhere in the world does this?

Simon Staffell: The work specifically looks at the perceptions of children and how they are accessing technology, particularly with the latest developments—their concerns around generative AI, where they are using technology positively as well as where they have concerns. The purpose is to shine a light on some of those issues. The piece of work that we have put out tries to address them by putting a toolkit in the hands of parents so that they can have those controls for young people.

Philip Milton: I echo a lot of what was said there. It was a truly awful case and something that no parent should ever have to go through. We place an awful lot of emphasis on ensuring that the people that use our platforms are safe, both children and adults. We do a lot of work around age verification. Clearly, Parliament has done a lot of work on this through the Online Safety Act as well. We will continue to do our part to invest in the tools and technologies we have to ensure that people below the age of 13 cannot use our platforms and that people are safe on them when they do.

Q204       Chair: So is 13 the age at which young people can access your platforms?

Philip Milton: That is correct—13.

Q205       Chair: Let’s get back to fraud. Perhaps we will start with you, Mr Milton, because there were lots of mentions of your organisation in the previous session. I know you were sitting in the room and heard what was said, so perhaps you would like to explain to us why there is such a huge amount of fraud going on, particularly on Facebook Marketplace?

Philip Milton: Thank you for inviting me in today to speak on this really important issue.

Chair: We are very pleased to see all three of you, actually, because I know that a number of you were not very keen to appear before us. We are very pleased you are here.

Philip Milton: We are very happy to be here. The first thing to say is that we take the issue of fraud extremely seriously on our platform. The reason for that is that it fundamentally undermines the experience we are trying to provide for people. Our job is to provide a safe and enjoyable experience to people when they use our platforms, and the existence of fraud on our platforms, even if you don’t fall victim to it, fundamentally undermines that experience. The same is true for advertisers—the existence of fraudulent advertising fundamentally undermines trust in advertising. Those two cohorts—the people that use our platforms and the advertisers that pay to reach those people—are fundamental for the existence of our business. So, we are directly incentivised to do all we can to try to prevent this criminal behaviour happening.

Q206       Chair: You are not doing all you can.

Philip Milton: I disagree with that. To give you an idea of the scale of what we are putting in place to tackle this kind of thing, since 2016 we have invested $20 billion in safety and security. That is not slowing down; $5 billion of that was in the last year alone. We invest in the teams and technology behind this. So, we hire experts to look out for these kinds of things and to train our systems. We have hired 40,000 people to work across the company on safety and security, and half of that number are involved in directly reviewing content. So, we invest significantly in trying to prevent this kind of thing from happening.

Q207       Chair: You are investing a lot, but we have just heard from the previous panel about the scale of the online fraud that the banking sector is having to deal with, and your company Meta was particularly mentioned. So, you may be putting all this money in but it doesn’t seem to be working.

Philip Milton: I wouldn’t say it’s not working. There is an awful lot that we catch ahead of time. We take statistics like that extremely seriously, and we talk to the banks on a regular basis about how we can partner with them to try and prevent this kind of thing from happening and how we can work more closely together. We do take those things seriously. We have not as yet been able to see the methodology behind those statistics. The reason that is important is that there will be opportunities within it to see where we can work together.

I think it is important to explain what we see as a platform. What we see is often a very well disguised hook or trap that has been put in place by a fraudster, be that an ad or a piece of content. It is specifically designed to look convincing to the victim, obviously, and also to the layers of protection and detection that we have in place on the platform. We don’t really see what happens beyond that. What the banks see is the fraud actually taking place: they see the accounts involved, as you were recently discussing, where the money goes and who is behind the accounts. The action we take is based on detecting and preventing that initial content.

Q208       Chair: So you are trying to prevent but are obviously failing. Do you think that if you actually had to reimburse people who had been subject to a fraud because of adverts on Facebook Marketplace, that might make you be a bit better at preventing things in the first place?

Philip Milton: Our focus is on prevention, squarely, because we don’t want people to fall victim to these crimes at all.

Q209       Chair: But they are doing. That is the problem, isn’t it? Whatever you are doing is not working.

Philip Milton: Sometimes fraudsters will get through those protections. Fraud is probably the most adversarial harm type that we deal with as a platform, because fraudsters are directly financially incentivised to do all they can to get around the systems and protections that we put in place for the people that use our platforms. That is the nature of the crime itself; it is particularly adversarial. That doesn’t stop us from investing in trying to prevent it happening. But to be clear, there are instances where we are involved in the payment. For example, if you are an advertiser on Meta platforms you normally use your credit card to pay for that. Where that is the case, we will be financially liable to refund the amount that comes from that credit card. The same is true for some of our larger clients that we give credit lines to. If a fraudster hacks that account and uses that credit line, for example, Meta would be liable for repaying that. It is not true to say that we don’t repay in cases where there has been a victim of fraud; we do, and we are directly financially incentivised to do that.

In addition to that, as I said at the beginning, the level of reputational damage that the existence of fraud on the platform does to Meta is enormous, both in terms of the users of our platform and the advertisers who are core to our business. That is why we invest so significantly in order to prevent that kind of thing from happening.

Q210       Chair: One of the advantages of today’s session might be that the public do realise just how prolific fraud is on Facebook Marketplace. I am not sure that people do know that. I think that people genuinely go on there thinking that they can get the items that they want to purchase, but clearly there is a major problem with what you are providing.

Philip Milton: I would say that fraud is prolific in society as a whole. It is not just a Meta-specific problem. Fraud represents 40% of all crime in the UK. That isn’t entirely on Meta platforms.

Q211       Chair: We are fully aware of that, and that is why we are having this inquiry. Today, we want to hear about what you are going to do about the big problem that you have with Facebook Marketplace in particular.

Other Committee members want to ask questions, but I want to ask the other two members of the panel whether there is anything else they would like to say arising from the questions in the previous session. Is there anything in particular? We will go on to specific questions, otherwise.

Simon Staffell: I can come in with a Microsoft perspective on the issue as a whole. I add my thanks for the opportunity to talk about such an important issue.

I would put this in the wider context. This issue is obviously incredibly important to Microsoft from a number of perspectives. As a technology provider, it is essential that we earn the trust of our customers who are using the technology, and it is incredibly important to make sure that that technology is safe. Technology creates tremendous opportunities, but it also creates opportunities for criminals, as we have seen. It is really important—this was touched on towards the end of the previous session—that we think about the global technology infrastructure that is being used and exploited by criminals. To give some perspective on how we see that at Microsoft, in any one day we will track 65 trillion signals: 750 million digital data points per second. We use analytics across all of that, all the time, to try to understand these threats. In any one day, we would be tracking 160 nation-state groups that are involved in cyber-crime, 50 ransomware groups, hundreds of cyber-criminal networks and a much wider cyber-crime ecosystem of people who are involved in this.

We have to think about that global network and about how we are trying to take action at the infrastructure level. We do that every day through co-operation with law enforcement on active investigations. How we take out the infrastructure—the networks—and raise the cost of doing business for those cyber-criminals in those networks is incredibly important, as well as thinking about what we do to make the products safe.

Alex Towers: I guess the dynamic is slightly different in our industry. We deal with scam phone calls, spoofed numbers, fake text messages and all those sorts of things. There is some cause for optimism, because you can take steps to tackle some of those problems. Five years ago, as an industry, telecommunications was probably not doing enough. That was very obvious during lockdown, when a lot of people were being targeted and exploited through our networks.

In the last five years or so, we have done quite a lot to introduce technology that blocks calls, especially from overseas, and text messages. We do a lot of work directly with the banks, law enforcement, and the regulator to try to work out how else we can develop the technology, but also to look at where technology is going next.

Some of the problems you see on other platforms are inevitably there because fraudsters are looking for new opportunities and new ways to con people. Where they might previously have used a text message, they may now go to Facebook Marketplace or elsewhere. In the future, they will start to increasingly use all sorts of complicated and clever forms of AI and other new technology. So the problem is not going to go away. We have to keep vigilant and try to think ahead, as far as we can, about where we can go next. Some of this is a UK focus, but some of it is absolutely an international focus and a question of international law enforcement co-operation as much as anything else.

Q212       Kim Johnson: Good morning, panel. We heard from the previous panel how fraudsters are becoming far more sophisticated and we know that organised crime gangs are using social media platforms to advertise for fraud mules or to encourage people into criminality. What are you doing to try to prevent this and to take down those advertisers?

Philip Milton: On mules specifically or on advertising more generally? I will speak to both of those if that is useful.

Kim Johnson: Both, please.

Philip Milton: Every advertisement that runs on a Meta platform goes through something called our advertising review, which means that every ad is checked against our advertising policies before it runs—before anybody can see it on our platform. That review includes all sorts of checks. Just last year we rolled out FCA verification: if you are a financial advertiser in the UK and you want to advertise financial services on Meta platforms, you need to be verified by the FCA before you can do that.

On the point about money mules, it is a particularly difficult harm type to work with, as the banks said previously. Just to level-set: what we see on the platform is people recruiting money mules. The mule accounts themselves do not operate on our platform, because they are the bank accounts used to launder money; what we see is people trying to recruit them. The difficulty is that what we see is often very similar to a normal social media post. It could be someone holding up a wodge of cash and saying, “Hey, DM me for details!” That could be someone bragging about winning money at the races, or it could be someone soliciting bank account details. So we invest in teams and technology to look for signs of the behaviours that mule herders use to recruit those kinds of people. For example, we look for soliciting of bank information, and we look for accounts messaging lots of people in a way that a normal account wouldn’t.

Those kinds of things are key to finding that kind of behaviour. So is collaborating with banks on this kind of thing. We were part of a task and finish group with the Online Fraud Group, with a number of banks, to look into exactly this problem, because of the expertise that banks have built up in dealing with mule accounts over time. It is that collaboration that I think will be key to being able to find this kind of content and remove it as quickly as possible in the future.
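
The behavioural signals Mr Milton describes (soliciting bank details, fanning messages out to unusually many recipients, very new accounts) lend themselves to a simple risk score. The sketch below is purely illustrative: every field name, pattern and threshold is invented for the example, and Meta's actual systems are not public.

```python
# Illustrative only: a toy behavioural scorer for mule-recruitment signals.
# All field names, patterns and thresholds here are invented for the example.
import re
from dataclasses import dataclass

BANK_SOLICITATION = re.compile(
    r"\b(sort code|account number|iban|dm me for details|easy money)\b", re.I
)

@dataclass
class AccountActivity:
    messages_sent_24h: int
    distinct_recipients_24h: int
    account_age_days: int
    recent_posts: list  # post text, most recent first

def mule_recruitment_score(a: AccountActivity) -> float:
    """Return a 0..1 risk score; real systems would use learned models."""
    score = 0.0
    # Behavioural signal: messaging far more distinct users than a typical account.
    if a.distinct_recipients_24h > 50:
        score += 0.4
    # Content signal: soliciting bank details or promising easy cash.
    if any(BANK_SOLICITATION.search(p) for p in a.recent_posts):
        score += 0.4
    # New accounts behaving this way are higher risk.
    if a.account_age_days < 30:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    suspect = AccountActivity(
        messages_sent_24h=400,
        distinct_recipients_24h=350,
        account_age_days=5,
        recent_posts=["Easy money! DM me for details, just need your account number"],
    )
    print(f"risk = {mule_recruitment_score(suspect):.2f}")  # risk = 1.00
```

A real pipeline would feed a score like this into human review rather than acting on it automatically, which matches the balance of human reviewers and technology described later in the session.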

Q213       Kim Johnson: It might be difficult, but can you give ballpark figures for the number of cases that you have identified, maybe in 12 months, that fall into that category?

Philip Milton: It is extremely difficult and challenging to measure this kind of thing accurately, because of the nature and complexity of fraud. A good indicator is fake accounts. I say that because not all fake accounts are used by fraudsters, but generally speaking, if you are a fraudster, you will use a fake account. As an example, in the third quarter of last year we removed 827 million fake accounts, and we have become really good at finding them before users even report them to us: 99.1% of those were taken down proactively, before anyone had reported them.

Kim Johnson: Simon, do you want to contribute on that question?

Simon Staffell: On the wider issue of advertising, we take both proactive and reactive measures: proactive in using technology to look across our advertising platform and make sure that we are mitigating risk, and reactive in taking very rapid action when we have particular reports of concern. On the broader point, we are also taking the FCA block list and implementing the FCA verification programme. We think those two things are both really important in reducing risk in advertising.

To return to my previous point about taking action at the infrastructure level, there was a case that our Digital Crimes Unit was involved in last year, with an organised crime group referred to as Storm-1152, which runs illicit websites and social media pages. That was an example of where we collaborated with law enforcement. We initially became alive to their activities because they were selling 750 million fraudulent Microsoft accounts—dummy accounts created for criminality—but they were also involved in other cyber-crime-as-a-service activity. So we obtained a court order in the US and used it to disrupt their infrastructure, taking their websites offline. That is a good example of where—to step up a level from the specific issue you are raising—we can disrupt networks that are operating on a huge scale, as well as taking action on the specific platforms.

Q214       Kim Johnson: I understand. Alex, did you want to come in?

Alex Towers: We are not an advertising platform, but I can give you some numbers to give you a sense of the size of the problem in our industry, or at least of the scale of the action that we now take. In 2023, we blocked 130 million scam phone calls and 50 million scam SMS attempts. That 50 million is a big reduction: we have cut the numbers we see by about 85%, because we introduced new technology called SpamShield that tries to stop all this stuff. We think the fraudsters have gone elsewhere with what they are trying to do, so the numbers are down a bit, but there is still a lot of dodgy activity out there. We still get reports every month from our customers to the dedicated textline you can send a message to—200,000 or so SMSs in a month. Some of those will be spam rather than scams, I guess—it is hard to disaggregate them—but it is still a lot. You are never going to remove this sort of activity entirely; it still exists.

Q215       Kim Johnson: We know that fraudsters are using AI voice-cloning software and I just wanted to know, maybe from Meta and Microsoft, what you are doing in terms of trying to take that off platforms so that they are not being abused by fraudsters to make money.

Philip Milton: It is undoubtedly the case that AI is going to be a transformational technology for us as a society. It is something we have invested in for years in order to detect this kind of content, and we actually think the advances in AI present an enormous opportunity for companies like ours to detect it better. To give you an example, we have managed to halve the amount of hate speech on the platform in the last 12 months—down to a prevalence of 0.02%—and we have done that almost entirely through advances in AI. Hate speech and fraud are different harm types, as I said earlier; fraud is a far more adversarial harm type than hate speech. But that illustrates the kind of opportunity that exists in using AI against this material.

Simon Staffell: It is a really important question; thank you for raising it. The technology is clearly creating tremendous opportunities but, to return to the previous point, it is also creating opportunities for criminals. Advances in AI are being used in a number of ways, including the one that you have raised. So we see attackers being able to refine phishing messages, scaling phishing messages, making more credible phishing messages, as well as creating synthetic imagery and deepfakes and the kinds of things that you have mentioned.

We look at that from a number of perspectives. One is as a technology provider and a company that wants to make sure people have the opportunity to use this technology safely. We have our own responsible AI programme, with a set of internal policies, that has been running for nearly a decade now. It is about making sure that our principles on the safety and security of the technology, and on how it is used, are applied first inside Microsoft—a hub-and-spoke model, so that every part of Microsoft has the tooling to detect issues and to deploy the technology in a way that is safe, reliable and secure.

The most relevant parts of that kind of responsible AI approach are things like red teaming at the model level—testing the model and making sure that it cannot do the things we are concerned about and that criminals might try to make it do, including for fraud—and then, at the deployment stage, reviewing high-risk applications. The programme within Microsoft has refused 600 sensitive uses, where we decided not to take forward a particular project because we deemed there was risk. All of those kinds of things are important.

Secondly, we are advocating for that within the wider technology sector. We have been working recently with the NCSC on its guidelines for secure AI. We contributed to a document that was signed by a number of other countries as well, so it is a really positive initiative: how do we advocate for this in the wider ecosystem?

If I could make a final point on the specific issue of deepfakes, there is also a really important focus at the moment on provenance—making sure that we know where the content we see comes from and whether it was made using AI. We have been developing technology ourselves and advocating for the use of content provenance technology. There is a standard called C2PA, from the Coalition for Content Provenance and Authenticity, an initiative involving us and others in the sector—the BBC are also involved—which is about marking content so that you know whether the imagery you are seeing is synthetic. It is an industry-agreed standard for showing that to people so that they know what they are dealing with.
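
As a rough illustration of the signed-provenance idea behind C2PA, the toy below binds a content hash and a "generator" claim together under a signature, so that stripping or altering either becomes detectable. Real C2PA manifests use certificate-based signatures and a defined binary format; the HMAC and JSON here are stand-ins invented for the sketch.

```python
# Illustrative only: the signed-manifest idea behind provenance standards such
# as C2PA, reduced to a toy. Real C2PA uses X.509 certificates and a defined
# binary manifest format; an HMAC stands in for the signature here.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,  # e.g. "synthetic / AI-generated"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the content hash must check out.
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(image_bytes).hexdigest())

if __name__ == "__main__":
    img = b"\x89PNG...fake image bytes..."
    m = make_manifest(img, "synthetic / AI-generated")
    print(verify_manifest(img, m))         # True: provenance intact
    print(verify_manifest(img + b"x", m))  # False: content was altered
```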

Q216       Kim Johnson: Alex, did you have anything further to add?

Alex Towers: Watermarking is a really interesting possibility. We have some R&D people working on audio watermarking to help spot things that are not what they seem to be. As a society, we are not going to be able to put AI back in the box, so the question is: how do we use it innovatively to try to combat the criminals? Ideas like that are really valuable.

If you buy a new landline phone, it already comes with technology that adds a layer of protection: it tries to identify whether a calling number is a legitimate business or something you ought to be suspicious of, and it gives you that information in real time. It is a question of finding as many ways as possible of using the technology for good, because the problems are obvious. We need to confront the increasing sophistication of fraud and the sheer scale and speed with which, if you wanted, you could perpetrate thousands of crimes at a time.

Q217       Tim Loughton: Mr Milton, can I come back to you? Did you say earlier that Meta is spending $20 billion on safety and security over five years?

Philip Milton: Since 2016.

Q218       Tim Loughton: So in the last eight years, you have spent $20 billion.

Philip Milton: That is correct. In the last year alone, it was $5 billion.

Q219       Tim Loughton: Okay. What was your turnover last year?

Philip Milton: I am not sure about last year. In the last quarter, which we just announced recently, it was $40 billion, I believe.

Q220       Tim Loughton: Okay. Your turnover last year was actually $134.9 billion, of which you just said you spent $5 billion on safety and security. That is under 4%.

Philip Milton: I believe that is about correct.

Q221       Tim Loughton: Is that high?

Philip Milton: I believe that is a market-leading amount of money to spend on this kind of thing.

Q222       Tim Loughton: What are you comparing yourself with? You are the market leader.

Philip Milton: Absolutely. I was just going to say that I think that is appropriate given the scale of the challenges we face. These are deep societal harms and we are one of the largest platforms in this space, so it is absolutely right that we spend the most money on it.

Q223       Tim Loughton: Who are you benchmarking yourself against?

Philip Milton: All the other technology companies, basically—

Q224       Tim Loughton: Such as?

Philip Milton: Such as Google, Microsoft, TikTok.

Q225       Tim Loughton: So they all spend under 4% of their revenue on this?

Philip Milton: I don’t know their precise numbers, but I believe that we spend a market-leading amount on safety and security.

Q226       Tim Loughton: You don’t know that for sure?

Philip Milton: That is my understanding.

Q227       Tim Loughton: What does “that is my understanding” mean? You are representing—

Philip Milton: I do not have the specific numbers to give you for other companies.

Q228       Tim Loughton: Right. What does “safety and security”, on which that $5 billion was spent last year, involve?

Philip Milton: It involves all sorts of things. It involves investing in the teams and in the technology behind that. It involves developing technology like AI, for example, to hunt out and find harmful content—be it fraudulent content, bullying or any number of other harms—and take action accordingly.

Q229       Tim Loughton: So it includes spend on personnel detecting fraud?

Philip Milton: It does.

Q230       Tim Loughton: It includes spend on moderators detecting and clamping down on hate speech, for example?

Philip Milton: It does, and on the tools we use to find that, such as the AI that has been used to clamp down on hate speech.

Q231       Tim Loughton: How many moderators do you now employ on hate speech across the whole Meta platform?

Philip Milton: We do not divide out by harm type because people tend to work cross-functionally on a number of different harms. We have 40,000 people across the company working on safety and security. Half that number—about 20,000—directly review content.

Q232       Tim Loughton: So 20,000 are directly reviewing content.

Philip Milton: That is correct.

Q233       Tim Loughton: How many people are directly trying to intercept fraud?

Philip Milton: We deploy technology to intercept fraud directly; people then review content when it comes to them. We also review suspicious behaviours. As I said, we tend to look at both the content and the behaviour of the actor, because the actor can change the content quite easily. We can have an image bank, or a word bank of common phrases that might be used, but fraudsters can change those quite easily, so we also look for key behaviours that fraudsters exhibit that are not usual among our normal user base, and we have teams that can look into those behaviours. We have technology developed to prioritise that work so that we can be as efficient as possible, and I am—

Q234       Tim Loughton: Okay. So how many Facebook account holders are there?

Philip Milton: I believe that across our company, there are now about 4 billion accounts—across all our platforms.

Q235       Tim Loughton: Right. You have 20,000 people reviewing content in those 4 billion accounts, and you are spending about 3 and a bit per cent of your revenue on the safety and security of those 4 billion accounts. That is not very high, is it?

Philip Milton: As I say, my understanding is that those are industry-leading amounts. I do not think it is accurate to characterise it as the number of people versus the number of accounts, because this is about using and leveraging technology wherever possible to take action on those accounts.

I think it is fairly well recognised now that it is important to have a balance of human reviewers and technology, because those two cohorts are good at different things. Technology is really good at scale, but it is rules-based, so it is not great at understanding local context, whereas human reviewers are brilliant at understanding local context and those kinds of subtleties, but they are not great at scale. So you are right that we have large-scale platforms, but we invest in the technology behind those systems to ensure that we can review content at that scale.

Q236       Tim Loughton: Okay. So if we should not use the number of people in this area you employ as a metric, what metric should we review to be able to show that you are taking this problem seriously?

Philip Milton: I think the scale of our investment demonstrates how seriously we take this problem.

Q237       Tim Loughton: So you think that spending 3 and a bit percent of your annual turnover is more than sufficient, making you an industry leader. Therefore, you are doing fine just spending 3 and a bit per cent.

Philip Milton: I think it is hard to say that any amount will ever be enough. What we have learned over a number of years is that, actually, the scale of the investment is just one part of it, and that we will never be able to solve these problems working in isolation. That is why we work in partnership with organisations, through, for example, Stop Scams UK and the Online Fraud Group.

Q238       Tim Loughton: As we heard in earlier evidence, the trouble is that you are not stopping them. We heard that 80% of frauds start on social media. Lloyds Banking Group research has specifically found that 68% of all purchase scams are started on Facebook and Instagram, and particularly Facebook Marketplace. One of the financial witnesses we just had suggested that probably a third of sellers on Facebook Marketplace are involved in some form of fraud. That is huge. You are spending just over 3%—and you are happy with that amount—on trying to do something about it.

Philip Milton: Thank you for bringing up Marketplace again; I did want to come back to the point that was raised earlier. We speak to the banks regularly about the statistics that you mentioned. As I said, we have not been able to see the methodology behind them or verify them, or look for opportunities where we might be able to work with banks on that.

On Marketplace specifically, it is important to refer back to exactly what Marketplace is and is intended to be. It is effectively a one-to-one local listings service, as was mentioned earlier. It is designed as somewhere you can list an old bike, your kid’s old toys or whatever and say, “Hey, if you’re in the local area, you can come and pick these up for free, or for an amount of money,” with that money being exchanged in person.

This is a good example of how fraudsters use tactics to try to subvert the way that these platforms are designed. What fraudsters will do is say, “Well, actually, I want you to pay for the item before you see it”, and “I want you to ship the item to me”, or “I will ship the item to you.” As a result of that, we have taken a number of different actions to try to mitigate those harms, both through the fraud charter and separately.

Even before we were having discussions around the fraud charter, for example, we removed the ability to ship an item on Facebook Marketplace, so there is no way to pay for an item on Facebook Marketplace and no way to ship one. That is precisely because we were trying to design out the ability for fraudsters to use the platform for those purposes.

Q239       Tim Loughton: How do you monetise Facebook Marketplace?

Philip Milton: We do not take payment for items listed on Marketplace.

Q240       Tim Loughton: So you make no money from Marketplace at all?

Philip Milton: We do not. It is made available to users for free.

Q241       Tim Loughton: There is no advertising associated with it?

Philip Milton: The listings are not advertisements in the true sense, in that users do not pay us for those listings.

Q242       Tim Loughton: Advertisers do not use that space to promote their own competing products.

Philip Milton: Advertisers can use that space, but we are talking about people listing items for sale on Marketplace. Those are two slightly different things.

If I can finish my earlier point about the steps we are taking, through the online fraud charter and separately, we have also introduced cross-border filtering. That means that if you are based in another country and you claim to be selling something in the UK but you are not based in the UK, we will filter out those results so that people cannot see them. In addition, we are introducing warning messages in your Marketplace inbox, so that when you interact with a seller and say, “Hey, I’m interested in this item,” there will be a warning at the top of your inbox that says, “Do not exchange items until you have seen them. Use cash and safe payments when you exchange items.”

Q243       Tim Loughton: I am still trying to understand. You are saying that Facebook makes no money whatsoever through the operation of Facebook Marketplace.

Philip Milton: That is correct. When people list items on Facebook Marketplace, no payment comes to Facebook.

Q244       Tim Loughton: I understand that. You make no money from Facebook Marketplace.

Philip Milton: That is correct.

Q245       Tim Loughton: So why do you run it?

Philip Milton: Because it is a useful service for our users, especially in today’s climate with the cost of living crisis. The ability to find cheap second-hand items in your local area is massively valuable. I use it myself.

Q246       Tim Loughton: You are running Facebook Marketplace as a charitable concern.

Philip Milton: Not as a charitable concern but as a useful service for our users. Users use it because they like it and because it is useful to them.

Q247       Tim Loughton: Having a Facebook page is a useful service, but people pay for it in various ways, including the advertising that goes alongside it. No advertising revenue is gained by Facebook from your operation of Facebook Marketplace; given the moderating investment that you have to put into it, you run it purely as a loss-making concern.

Philip Milton: I would not describe it as a loss-making concern, because—

Q248       Tim Loughton: How is it making money?

Philip Milton: People using Facebook Marketplace are using Facebook. People being on Facebook is beneficial to Meta, because that encourages—

Q249       Tim Loughton: No, it is a loss-making concern. You have said that some of the $5 billion that you invested last year for safety and security is being directed at Facebook Marketplace.

Philip Milton: That is correct.

Q250       Tim Loughton: So it is a cost centre for you. But you have told me that no revenue comes in through Facebook Marketplace from the users.

Philip Milton: Not directly.

Q251       Tim Loughton: I did not ask whether it comes directly. I asked you a very clear question: does Facebook Marketplace generate any revenue that contributes to the $134.9 billion to the end of September 2023? That might be from its users paying a commission, or from advertisers using that platform space to advertise things other than seller-to-buyer adverts. You have quite clearly said that no revenue comes from those sources, so why are you running it as a loss-making concern?

Philip Milton: To be clear, advertisers can use the platform to advertise, and we benefit from users being on that platform.

Q252       Tim Loughton: They can and they do.

Philip Milton: They do.

Q253       Tim Loughton: So would you like to revise your statement to say that Facebook Marketplace makes money from advertisers on that platform?

Philip Milton: That is true, but the point I was trying to make—

Q254       Tim Loughton: That is not what you said earlier, Mr Milton.

Philip Milton: The point I was trying to make, Mr Loughton, was that you do not need to pay Facebook in order to list an item on Facebook Marketplace.

Q255       Tim Loughton: I understand that, but lots of other platforms, including others that you have, do not charge their customers directly but make their revenue by hosting adverts from commercial companies. That is how commercial TV stations make their revenue. I do not have to pay to watch commercial TV stations, although I have a TV licence, which pays the BBC—that is a different matter.

You have skin in the game. You are making money from advertisers by running Facebook Marketplace. If a fraud is committed between a seller and a buyer, what financial liability do you have?

Philip Milton: As I said earlier, where we are directly involved in a payment, we are liable to refund it. No payment happens on Facebook Marketplace; that is deliberate, because we are trying to design out fraud by encouraging people to pay for things in person so that they can see the item before they take it and are protected against things like purchase scams. No payment takes place on Marketplace.

Q256       Tim Loughton: If someone pays for an old child’s bike they have seen advertised on Facebook Marketplace and the bike does not arrive, you have no liability in that situation.

Philip Milton: That is correct. Obviously, we would encourage people not to pay for something to be shipped to them but to go in person to see that item and to pick it up. As I said, that is entirely how it has been—

Q257       Tim Loughton: If fraud happens despite your best endeavours and all the stuff you are supposedly spending this $5 billion on, there is no financial liability due to you.

Philip Milton: That is correct.

Q258       Tim Loughton: Okay, so you have no skin in the game. Therefore, what is the incentive for you to do better and to work more closely with the banks, who do have skin in the game? TSB have refunded 95% of people who have been scammed, and about four fifths of those scams happen on your company’s platform.

Philip Milton: I do not think it is correct to say we have no skin in the game. As I said at the beginning of the session, we are directly incentivised to ensure that the people who use our platform have a safe and enjoyable experience, because if they do not, they will not use our platform.

Tim Loughton: But it doesn’t cost you anything.

Philip Milton: Our platform does not exist without those people. Similarly, if the advertisers who pay us to reach those people see fraud on our platform, they will be less interested in doing business on our platform, and that directly harms our bottom line. The reputational damage done to Meta as a result is enormous, and that directly hits our bottom line too. That is why we invest significantly to prevent people from using our platforms in that way.

Q259       Tim Loughton: What is your advertising revenue on Facebook Marketplace then?

Philip Milton: I do not believe we break it out for Marketplace specifically, but our advertising revenue in the last quarter of last year was around $40 billion.

Q260       Tim Loughton: $40 billion?

Philip Milton: I believe so.

Q261       Tim Loughton: From advertising?

Philip Milton: From the last quarter of last year.

Q262       Tim Loughton: That is $160 billion for the whole year, so most of your money comes from advertising, for your various platforms, doesn’t it?

Philip Milton: I believe that is correct, yes.

Q263       Tim Loughton: How many advertisers have walked away because of their concerns about fraudulent activity on Facebook Marketplace?

Philip Milton: I do not have a number I can share with you on that. We have regular discussions with advertisers about the safety procedures we put in place for them and for people who use our platforms, but I do not have a specific number of advertisers who have walked away because of that.

Q264       Tim Loughton: Do you think there are many high-profile ones who have?

Philip Milton: I am sure there are.

Q265       Tim Loughton: There are? Would you be able to tell us, if you write to the Committee, who you have lost because of that?

Philip Milton: I am sure I can follow up with the Committee on that, yes.

Q266       Tim Loughton: You see, the point I am trying to get at, Mr Milton, is that you apparently have no financial skin in the game—let me qualify it as that—when it comes to clamping down on fraud on Facebook Marketplace, other than, potentially, reputational damage. Unless there is a clear link to advertisers walking away with their revenue because they are concerned about 80% of fraud starting on social media platforms, and 68% of it specifically on your platforms, you do not really have to take it terribly seriously. The $5 billion you are spending, out of an annual revenue of $134.9 billion—as I say, that is 3 and a bit per cent—is tiny for a people-based service business. So you are not taking this problem seriously, are you?

Philip Milton: I do not agree with that at all, actually. To your point earlier, there is direct financial liability for Meta, as I outlined, where we are involved in the payment. If people advertise on our platform, and fraudsters somehow get hold of an advertiser’s account and use that account to advertise on our platform, we face chargebacks for the amount of money that has been spent on that advertising. Equally, if we have a credit line open with that advertiser—which we regularly do, especially if they are a larger advertiser—and a fraudster uses that credit, we will be liable for that amount. So there is a direct financial liability in that regard.

Q267       Tim Loughton: How much are you spending on protecting your advertisers?

Philip Milton: As I said, we invested $5 billion last year alone in safety and security.

Q268       Tim Loughton: You could be spending the majority of that money on protecting your advertisers because they pay you, rather than on protecting the customers who use that platform, of which a third are being defrauded out of their money because they are, for example, trying to buy a child’s bike that doesn’t turn up.

Philip Milton: I do not have a breakdown of where that money is spent exactly.

Q269       Chair: I know you dispute some of the statistics that we have been given, so I want to ask you about the 2023 research by Lloyds Banking Group. It estimates specifically that someone in the UK falls victim to a purchase scam originating on one of the two Meta-owned platforms every seven minutes.

Philip Milton: As I said, we take those statistics really seriously. We meet banks, including Lloyds, regularly—

Q270       Chair: So you agree that that is probably correct—every seven minutes.

Philip Milton: I cannot verify those numbers, because I have not been able to see the methodology behind them. Meta operates the largest platforms in this space. It is likely to be directionally accurate that fraudsters will operate in the largest pool possible. That is why—

Chair: So every seven minutes.

Philip Milton: I have no idea if that statistic is correct or not. As I said, I have not been able to verify it, because we have not seen the methodology behind the statistics.

Q271       Alison Thewliss: I have a couple of follow-up questions. First, Alex, you mentioned the scam phone calls and so on that people receive. Presumably, a lot less of that comes through on landlines, because fewer people have landlines these days. As for calls that people receive on their mobiles, my general practice is never to answer the phone, but I will always look up the number later, and often there are websites that will tell me whether it is a known harassing or fraudulent number. What follow-up do you do, perhaps working with those websites or others, to trace who is making those calls and to deal with them?

Alex Towers: It is quite complicated. We can work out whether a number is what it claims to be, so we now have the technology to block numbers that are pretending to be a UK number when they are not. Last week, Ofcom announced a new consultation that was all about extending that to numbers that appear to the customer to be a UK number, even when the operator knows the call is not coming from the UK. That is another sensible step to take, I think, and we can keep pushing the boundaries of it. You will never know whether you catch absolutely everything, but we are quite confident that we are now much more effective than we used to be. The number of calls we block—hundreds of millions a year—suggests a big improvement on where we were a few years ago.
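
The origin check Mr Towers describes can be pictured as a simple rule at the network edge: a call presenting a UK calling-line identity while arriving over an international route is treated as spoofed. This sketch is illustrative only; the field names are invented, and a real deployment would need carve-outs, for example for UK mobiles legitimately roaming abroad.

```python
# Illustrative only: blocking calls that present a UK calling-line identity
# (CLI) while arriving over an international interconnect. Field names and
# the rule itself are simplified inventions for this sketch.
from dataclasses import dataclass

@dataclass
class InboundCall:
    presented_cli: str   # number shown to the customer, e.g. "+447700900123"
    ingress_route: str   # "domestic" or "international"

def should_block(call: InboundCall) -> bool:
    looks_uk = call.presented_cli.startswith("+44") or call.presented_cli.startswith("0")
    # A UK-presenting number arriving from abroad is very likely spoofed.
    # Genuine UK mobiles roaming abroad would need carving out in practice.
    return looks_uk and call.ingress_route == "international"

print(should_block(InboundCall("+447700900123", "international")))  # True
print(should_block(InboundCall("+33123456789", "international")))   # False
```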

Q272       Alison Thewliss: Do you work with other tech companies—the likes of Google—to line up all that different information?

Alex Towers: We work with the other telecoms companies. It is quite important that all telecoms companies do the same thing, because we are all sending calls backwards and forwards to each other. One of the things that we think Ofcom probably could do a bit more of is just monitoring that absolutely every company in our industry is doing this in the same way, because it only takes one company not to, and calls can get through.

We also work very closely with the banks themselves. There is an organisation called Stop Scams UK—quite hard to say—which is basically the telecoms companies, the banks and financial services, and it is designed to make sure we share best practice. In some cases, we share data, where it is possible to do so. We have systems set up with them to authenticate payments and to give them information about people if we see dubious activity like SIMs being swapped on someone’s phone.

We also work very closely with law enforcement directly and with the Government. We are part of the National Economic Crime Centre’s work, and we are on the Economic Crime Strategic Board that the Home Office runs. There are all sorts of different partners we are trying to work with all the time, because it is a team effort.

Q273       Alison Thewliss: Do you feel you see enough outcome for the effort you are putting into gathering this data?

Alex Towers: Yes, I think so. The number of things blocked is clearly going up massively, and the number of instances of problems on the network is coming down, so we are absolutely seeing the right trend. We have to keep going, because there is always a new scam out there.

Q274       Alison Thewliss: They are very innovative, that’s for sure. Simon, is there anything particular to Microsoft and what you are doing in this sphere?

Simon Staffell: Yes. I can offer some perspective on telephony—I will defer to others who are more expert, but some areas are relevant to Microsoft. One is tech support scams, which we are particularly concerned about: people impersonating Microsoft or other technology companies. We did some research in 2021 which showed that three out of five people had seen tech support scams. That could be telephony-based—someone calling and pretending to be a technology company—or it could be homoglyph domains, where the URL looks like “Microsoft” but has a zero in place of the “o”. We are very concerned about that.
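
Homoglyph domains of the kind mentioned, with a zero standing in for an "o", can be caught by normalising look-alike characters and comparing the result against a brand list. The sketch below is a toy illustration; production systems use full Unicode confusables data rather than this tiny hand-made map.

```python
# Illustrative only: a toy "skeleton" comparison for spotting homoglyph
# domains such as "micr0soft". The look-alike map and brand list are tiny
# samples invented for the example.
PROTECTED_BRANDS = {"microsoft", "amazon", "paypal"}

def skeleton(domain: str) -> str:
    """Normalise common look-alike characters to the letters they imitate."""
    label = domain.lower().split(".")[0]
    label = label.replace("rn", "m")  # "rn" renders much like "m"
    return label.translate(str.maketrans("0135", "oles"))

def is_homoglyph_impersonation(domain: str) -> bool:
    label = domain.lower().split(".")[0]
    # Flag domains that normalise to a protected brand without being it.
    return skeleton(domain) in PROTECTED_BRANDS and label not in PROTECTED_BRANDS

for d in ["micr0soft.com", "rnicrosoft.net", "microsoft.com", "example.org"]:
    print(d, is_homoglyph_impersonation(d))
# micr0soft.com True, rnicrosoft.net True, microsoft.com False, example.org False
```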

One of the things that we are doing in those cases—again returning to the point about collaboration with law enforcement—is referring cases to, and taking action through, the courts. There was a recent case that is a really good example of the technology sector working together: we referred it to the Indian Central Bureau of Investigation because the call centres concerned were in India. It was a joint Microsoft and Amazon referral, because we had both had this concern, and it did lead to raids on call centres and arrests.

On another point, and on a slightly different tack, there is a really interesting opportunity with AI here. As with other areas of AI that have been mentioned, in terms of detection at scale across huge amounts of data, telephony is another really interesting example. As I understand it, in the past, fraud in telephony was predominantly detected using non-content metadata—numbers, alphanumeric identifiers, call frequencies and those kinds of things. With large-language models, there is a really interesting opportunity to look at transcript data at scale, in a privacy-compliant way, so that you could actually notify the person being called, during the call, that something is likely to be a scam, based on the pattern of the language being used or the kind of call—based on the text of the call rather than just on that metadata.
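
A sketch of where such an in-call warning could hook in. A deployed system would put a large-language model behind the classification, under the privacy controls Mr Staffell mentions; the keyword scorer here is only a stand-in, and every pattern and threshold is invented.

```python
# Illustrative only: scoring a live call transcript for scam language. A real
# deployment would use a large-language model under strict privacy controls;
# a crude pattern scorer stands in for it here.
import re

SCAM_PATTERNS = [
    (re.compile(r"\b(move|transfer) your (money|savings|funds)\b", re.I), 0.4),
    (re.compile(r"\bsafe account\b", re.I), 0.4),
    (re.compile(r"\bdo not (tell|contact) (your|the) bank\b", re.I), 0.3),
    (re.compile(r"\bgift cards?\b", re.I), 0.3),
    (re.compile(r"\bremote access|anydesk|teamviewer\b", re.I), 0.3),
]

def scam_likelihood(transcript: str) -> float:
    """Crude additive score in 0..1; an LLM would judge context, not keywords."""
    return min(sum(w for pat, w in SCAM_PATTERNS if pat.search(transcript)), 1.0)

live_text = ("This is your bank's fraud team. Your account is compromised, "
             "so transfer your savings to a safe account now and do not tell your bank.")
if scam_likelihood(live_text) >= 0.6:
    print("Warn caller: this call matches known scam patterns.")
```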

Q275       Alison Thewliss: Lots of scams also originate through unsolicited emails that people get. I have had the same Hotmail account for decades now, and I have never had more spam than at the moment. Now, I know not to click on these things—I have a reasonable understanding of that—but why do these things still arrive in my account? You have designated them as spam and they are all potential frauds, aren’t they?

Simon Staffell: That is an incredibly important question. I will perhaps start with the data on the challenge and then come on to how we address it. The answer to your question on why it is happening, I think, is that it is constantly evolving, and the criminal actors that we are talking about here are innovating. We heard about some of that in the previous session; it is a continual battle to get ahead of the methods that they are using.

To give some data points to exemplify that: in 2023, we went from seeing 3 billion password attacks a month globally to 30 billion. That is an average of around 4,000 per second, and in April of last year it reached 11,000 per second. Those are all blocks—attacks that we detected and mitigated at source so that they did not get through—but it is really important to bear in mind, to your point about things coming through to your inbox, that behind every one of them is a potential victim that it did not get through to.

So the scale is increasing, and the sophistication of the criminal adversaries, unfortunately, is increasing; they are innovating. Take, for example, multi-factor authentication, which we would always advocate for—and this is a good opportunity to make that point. We see criminal actors using MFA fatigue: a technique where they bombard people with MFA requests in the hope that some get approved. We saw around 6,000 attempts of that per day over the past year.
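
One widely used mitigation for MFA fatigue is to rate-limit push prompts and escalate to number matching (typing a displayed code) after repeated prompts in a short window. The sketch below shows that shape; the thresholds, window and return values are invented for illustration.

```python
# Illustrative only: rate-limiting MFA push prompts to blunt "MFA fatigue"
# bombardment. Thresholds, window and return values are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600        # look-back window for counting prompts
MAX_PROMPTS_IN_WINDOW = 3   # beyond this, a plain push approval is too risky

_prompts = defaultdict(deque)   # user -> timestamps of recent prompts

def handle_mfa_request(user, now=None):
    """Return the prompt mode: plain push, number matching, or blocked."""
    now = time.time() if now is None else now
    q = _prompts[user]
    while q and now - q[0] > WINDOW_SECONDS:   # drop prompts outside the window
        q.popleft()
    q.append(now)
    if len(q) > 2 * MAX_PROMPTS_IN_WINDOW:
        return "block_and_alert"               # looks like fatigue bombardment
    if len(q) > MAX_PROMPTS_IN_WINDOW:
        return "require_number_matching"       # user must type the displayed code
    return "push_approval"

for i in range(8):
    print(i + 1, handle_mfa_request("alice", now=float(i)))
# prompts 1-3: push_approval; 4-6: require_number_matching; 7-8: block_and_alert
```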

Another example, on phishing, is adversary-in-the-middle domains, where a criminal group puts a dummy proxy domain between you and the real website you are trying to reach, to trick you into putting your details in there. We are seeing 8,000 adversary-in-the-middle domains in operation at any one time. It is also a good example of the ecosystem we have already referred to today: purchasing a proxy domain like that from the cyber-crime networks costs between $200 and $1,000 a month, so you do not have to be technically proficient at any part of the chain to be able to rent these services and then conduct a quite sophisticated attack.

So the challenge is evolving and getting more sophisticated. We are using technology to try to prevent it at source all the time, and we are blocking the vast majority—hundreds of millions of phishing attempts every week—at source. Email, of course, is no one company’s product—it is a protocol, a method of communication—but in our products we are doing everything we can to scan emails, block things, and see and take down malicious URLs. That links to the point about phishing as organised crime; we took legal action that resulted in 400,000 URLs being taken down.

So we are tackling this at different parts of the ecosystem. Then, of course, when some do unfortunately get through, that is where we would really advocate reporting, and we would reassure people that we do what we can with reports to feed that back into the technical mitigations.

Q276       Alison Thewliss: Yes, but in a practical sense, if I am getting all these spam emails, your systems know that they are spam, because they have put them into that spam inbox. In reality, I am not going to go through that and report absolutely everything to you. I do not have time in my life to do that; nobody does. How is it that you can identify them as spam or potential scams and put them in that inbox, but you do not do anything further about them?

Simon Staffell: There is a difference between spam and unknown phishing.

Q277       Alison Thewliss: But lots of them are, “Click on this for this,” or parcel delivery things. Those types of scams are coming through.

Simon Staffell: From the perspective of how we serve all our customers, it is important that we make these judgments about what should not be seen at all—that is what would be completely blocked by the kind of technology I am talking about, which looks at scale and tries to block—and what is spam.

To give you an example, if there is a phishing attempt that uses a URL—to use your example—and we notice that a certain kind of URL, or a combination of a certain URL and an image, is suspicious, we might build a pattern, a heuristic, into the system that says, “Detect this and block it.” But then we will likely find that the criminals know that we are doing that, and they will change something.

For example, they might change the image into a table and put colours in it, so that it is no longer detectable as an image. That innovation is happening all the time, so we have to keep refining. But we should not just block everything by default. We have to make those judgments about what should be designated as which category.
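The "pattern, a heuristic" described here is, at its simplest, a scoring rule in which a combination of signals outweighs either signal alone, which is also why changing the image into a table defeats it. A toy Python version follows, with invented patterns, weights and threshold rather than anything Microsoft has disclosed.

```python
import re

# Illustrative scoring rule: a suspicious URL or a remote image alone may
# be innocent, but the combination scores high enough to block. The
# pattern, weights and threshold are invented for this sketch.
SUSPICIOUS_URL = re.compile(r"https?://\S*(parcel|redeliver|verify)\S*", re.I)

BLOCK_THRESHOLD = 4  # assumed cut-off

def score_email(body: str, has_remote_image: bool) -> int:
    score = 0
    url_hit = bool(SUSPICIOUS_URL.search(body))
    if url_hit:
        score += 2
    if has_remote_image:
        score += 1
    if url_hit and has_remote_image:
        score += 2  # the combination is the stronger signal
    return score

msg = "Your parcel is waiting: http://redeliver-now.test/track"
print(score_email(msg, has_remote_image=True) >= BLOCK_THRESHOLD)   # True
print(score_email(msg, has_remote_image=False) >= BLOCK_THRESHOLD)  # False
```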

Q278       Alison Thewliss: If I look in my inbox just now, I have replacement windows, personal injury claims, and free spins and bonus, and I have won an Oral-B toothbrush. None of these are things that I have signed up for. They all sound a bit scammy. Why are they still filtering through?

Simon Staffell: As I say, if they were known to us as phishing—so, if they were in the category of the millions of phishing attempts that we are seeing every day—we would be using our products and services to block them. There will be cases in which there are fine judgment calls to be made.

Q279       Alison Thewliss: This is coming through every day, and there is really no way of dealing with this other than getting a new email address, is there?

Simon Staffell: I cannot speak to specific examples without looking at them, but that would be the general approach: we would need to make those judgment calls about how we use systems to block at source and prevent things from getting through, while also making the judgment about what reaches the inbox. We should not take decisions that mean that people cannot see their email.

Q280       Alison Thewliss: I am concerned mainly about vulnerable customers. I can look at that and go, “That’s a scam.” If you are quite vulnerable, you might be scammed by these things, and the bank will pick up the tab, won’t it?

Simon Staffell: Vulnerable customers are, of course, a huge priority for us. We would be working on accessibility and making sure that people can use the technology. There is also a linked point around how we communicate and about campaigns that we have recently been involved in with the Government. The Take Five campaign is all about how we communicate with people and make these things accessible.

One of the things that you will see in our technology is that we are making reporting as straightforward as possible, so that, where there is a concern, people can see that and flag it. Clearly, we need to do everything we can and keep investing, because of the nature of the innovation that is happening with the criminal networks, and we have to keep doing everything we can to reduce the risk.

Q281       Alison Thewliss: You do not know whether I am vulnerable. I have had an account for a number of years. You do not really know terribly much about me.

Simon Staffell: Accessibility, and making sure that we make the technology usable by all our customers, is absolutely a priority.

Q282       Alison Thewliss: Can I move on to Mr Milton? If I open the Facebook Marketplace page for people selling things near me in Glasgow, I can see a three-bedroom flat, a bed, an iPhone and some trainers. As a person using this, how do I know which of these is real and which is going to defraud me?

Philip Milton: As I said, our advice to people is to use the platform as intended, which is as a one-to-one local pickup service. What that means is that, for each of those items, you would contact the seller, understand more about the item, and then go in person to see the item before you transfer any cash or pay for that item.

Q283       Alison Thewliss: Is it appropriate that you have such a wide variety in that marketplace? A flat and a pair of trainers are quite different things.

Philip Milton: They are different things. I believe that Marketplace provides an important utility for people to list things for sale. I do not think that having a wide variety is in itself problematic.

Q284       Alison Thewliss: There are significant differences in risk between a pair of trainers, a flat, a car, or a bit of technology.

Philip Milton: That is a really good point. That is why we put in place protections depending on the level of risk that we see. For example, as part of the online fraud charter, we will be introducing increased verification for sellers of the highest-risk items. If you are a seller in the categories of property or cars, for example, there will be an increased level of verification, precisely for that reason.

Q285       Alison Thewliss: Concerns have also been raised by Electrical Safety First and the Fire and Rescue Service about unsafe products going on there as well. Not only are people losing out financially because they have bought something, but they could pay with their lives or with damage to their property due to a fire. What protection responsibility do you have for that?

Philip Milton: Again, our focus is on prevention, on ensuring that that does not happen. That means making sure that people see the item before they actually pay for it, so that they can check that it is what it purports to be and that it is safe to use.

Q286       Alison Thewliss: You cannot always tell as a customer, because some of these fakes are quite sophisticated, that you have bought a fake pair of hair straighteners that are going to set your house on fire.

Philip Milton: That is true. Sometimes products can malfunction in that way, and that is the case on Facebook Marketplace and on other e-commerce sites.

Q287       Alison Thewliss: You bear absolutely no liability for things like that on your platform.

Philip Milton: That is not how liability tends to work. As I said, when we are involved in the payment for the item, liability sits with us. There is no ability to pay for something on Facebook Marketplace precisely because we are trying to push down instances of fraud.

Q288       Alison Thewliss: Financial advice is a strictly regulated market, but quite often lots of people on Instagram offer financial advice that they have absolutely no qualification to give. If you search something such as #crypto, there are 17.5 million posts currently on Instagram with that hashtag. Is it appropriate that you have that level of misinformation, bad information, on your platforms so that people could then be at risk of being defrauded?

Philip Milton: When you search for terms like that, you bring up all sorts of things beyond financial advice, so not every one of those posts is necessarily financial advice. You are right to say that financial advice is regulated by the FCA. As I said, we work really closely with the FCA to ensure that, if you are advertising financial services on our platform, you have to be pre-verified by the FCA in order to do so.

Q289       Alison Thewliss: There are 17.5 million posts on there, which could be terrible advice and which ought to be regulated in some way. You do not have any way of telling that with the volume of posts, do you?

Philip Milton: That is not quite true. I have alluded to advertisements, but we also look to detect certain types of things in organic content on our platform. If you are, for example, promising an unreasonably high rate of return, we have machine learning and AI technologies that will scan that content, flag that it looks like it is against our consumer policies, and allow us to take action against it accordingly. So it is not quite true to say that we cannot take action against that. We do.
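As a crude illustration of the check Mr Milton describes, the sketch below flags posts that promise a specific, implausibly high percentage return. Meta's actual systems are machine-learning classifiers, not regular expressions; every pattern, name and threshold here is invented.

```python
import re

# Illustrative only: flag posts promising a specific, implausibly high
# percentage return. The regex and threshold are invented; production
# systems would use trained classifiers, not patterns like this.
RETURN_CLAIM = re.compile(
    r"(guaranteed|risk[- ]free)?\s*(\d{2,})\s*%\s*(returns?|profits?|gains?)",
    re.I,
)

def flags_high_return(post: str, threshold_pct: int = 20) -> bool:
    m = RETURN_CLAIM.search(post)
    return bool(m) and int(m.group(2)) >= threshold_pct

print(flags_high_return("Guaranteed 300% returns in 30 days!"))     # True
print(flags_high_return("Index funds averaged 7% annual growth."))  # False
```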

Q290       Alison Thewliss: How many of those do you take down?

Philip Milton: I do not have a specific statistic to share with you on that at the moment.

Q291       Alison Thewliss: If you have people intervening on these things, you must have records of what you take down.

Philip Milton: As I said, I do not have a specific number to share with you, but we work closely with the FCA on this. I know that they have recently consulted on financial promotions by influencers, and we have developed technology precisely to find this kind of thing.

Q292       Alison Thewliss: Would you be able to share with the Committee any information that you have on the number of things that you take down and the categories of things you take down?

Philip Milton: I can certainly take that away and look to see if we have information that we can share.

Q293       Alison Thewliss: There was a thing that I had looked at previously when I was on the Treasury Committee and we were looking into this. Your colleague Allison Lucas, the content policy director for Facebook at the time, was there. I had raised with her the issue of Facebook groups being used to recruit people to perpetrate frauds or crimes or things that were not quite correct.

“File on 4” had an investigation into people who were being recruited from overseas, via Facebook groups, to be company directors. Ms Lucas was not able to tell me much more detail at that time about what you were doing to take that kind of activity down. That was being used to exploit vulnerable people and to commit frauds in this country, with people not paying taxes that they should have been paying. How much of this activity is still going on, and how many of those groups do you intervene on?

Philip Milton: I think that is a good example of the level of adversarial behaviour we are facing with this problem. Fraudsters will look for any opportunity to reach victims, be it through Facebook Marketplace, advertisements or any number of things, including Facebook groups. That is why we invest significantly and have measures in place to try to detect behaviours from actors that look unusual.

I have referenced a couple of times that if you are a bad actor on our platform—a fraudster in particular—normally you will try to message an awful lot of people to try to get in touch and to improve the likelihood of profiting from that scam. We look out for those kinds of behaviour and can take action against those accounts. We can remove or suspend them. If it is a hacked account, we can try to return it to the rightful user. We take a lot of action to try to prevent exactly that kind of thing.
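The behavioural signal described here, one account suddenly messaging a large number of distinct recipients, reduces to a simple aggregation over messaging events. A toy Python sketch, with an assumed threshold and a hypothetical event format:

```python
from collections import defaultdict

# Illustrative only: an account that messages an unusually large number
# of distinct recipients in a day looks more like a fraudster than a
# normal user. The threshold and event format are assumptions.
MAX_DISTINCT_RECIPIENTS_PER_DAY = 50  # assumed

def flag_mass_messagers(events: list[tuple[str, str]]) -> set[str]:
    """events: (sender_id, recipient_id) pairs from one day of traffic."""
    recipients: defaultdict[str, set[str]] = defaultdict(set)
    for sender, recipient in events:
        recipients[sender].add(recipient)
    return {s for s, seen in recipients.items()
            if len(seen) > MAX_DISTINCT_RECIPIENTS_PER_DAY}

# An account contacting 200 distinct users in a day is flagged for
# review, suspension, or recovery if it turns out to be hacked.
sample = [("acct_1", f"user_{i}") for i in range(200)] + [("acct_2", "user_0")]
print(flag_mass_messagers(sample))  # {'acct_1'}
```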

Q294       Alison Thewliss: You said to my colleague earlier that you have 4 billion accounts. It is a needle in a haystack, isn’t it?

Philip Milton: Fraudsters operate wherever they can. It is true that, with the scale of the platforms we are dealing with, it is difficult to locate these people. That is why collaborative working is so important. It is why we work a lot with law enforcement in this space. We meet with the National Economic Crime Centre on a bi-weekly basis.

We had various law enforcement groups come to our offices just a couple of weeks ago to talk about the issue of pig butchering and to share intelligence on this. We know that we cannot solve this issue in isolation as a platform. We can absolutely put in place protections, and we should do that, to ensure that users do not come across this stuff where we can. But fraudsters will always find a way around those protections, because they are directly incentivised to do so. That is why partnership working is important.

Q295       Alison Thewliss: Which? was highlighting again this week the phenomenon whereby people are tempted into sharing their personal data via your platform. There might be some kind of quiz where you have to give your mother’s maiden name, your place of birth or the name of your first pet. All of these push people to reveal personal data on your platforms, which could then go on to be used to defraud them. Why are you still allowing these quizzes to exist on your platforms? They are quite evidently a way of getting people to share data that can be used to defraud them.

Philip Milton: What tends to happen is that these actors take people off our platforms. They will put up a link that takes people to a quiz, or somewhere else where they can enter their personal information. That activity does not tend to happen on our platform itself, precisely because we have seen it and tried to block it. Where we see suspicious URLs, we have the ability to block those as well.

Alison Thewliss: I see people doing this every week. You are not taking these things down at all.

Chair: I think we will have to conclude there. Could I ask each of you to write to the Committee with details of how many fraud reports you made last year and how many you passed on to law enforcement agencies to investigate? That will be very helpful. Also, if you have not already told us, could you give us your thoughts on the Government’s recent fraud strategy and anything you think is missing in it? Thank you for your time today. It has been very useful for the Committee to hear from each of you.