

Select Committee on Democracy and Digital Technologies

Corrected oral evidence: Democracy and Digital Technologies

Monday 2 March 2020

4 pm



Members present: Lord Puttnam (The Chair); Lord Black of Brentwood; Lord German; Lord Harris of Haringey; Lord Lipsey; Lord Mitchell; Baroness Morris of Yardley.

Evidence Session No. 16              Heard in Public              Questions 187 - 199



I: Roger Taylor, Chair, CDEI; Oliver Buckley, Executive Director, CDEI.





Examination of Witnesses

Roger Taylor and Oliver Buckley.

Q187       The Chair: Thank you very much for being with us. I have to read out the police caution and then we can get cracking. As you know, this session is open to the public. A webcast goes out live and is subsequently accessible via the parliamentary website. A verbatim transcript will be taken of your evidence and put on the parliamentary website. You will have the opportunity to make minor corrections for the purposes of clarification or accuracy. Could you introduce yourselves for the record?

Roger Taylor: I am chair of the Centre for Data Ethics and Innovation.

Oliver Buckley: I am executive director of the Centre for Data Ethics and Innovation.

Q188       Lord Lipsey: This is a bit crude, but there are two ways of looking at public opinion on some of this stuff. One is that people are scared rigid by the thought that their homes and brains are being invaded by secret powers of whom they know nothing and for what purpose they cannot tell. The other is that, so long as they get free email and free internet access, they do not give a damn what is being done to them. From your knowledge, which of those do you lean towards?

Roger Taylor: I would say neither of them correctly captures the situation. One of the first things we did when we were asked to look at microtargeting was to start a process of engagement. We have conducted discussion groups across the country. The first thing to say about this is that a lot of those wildly diverging views are driven by a lack of knowledge and a lack of understanding of what is actually going on.

We organised focus groups across the country and then followed that up with some quantitative surveying, in order to have time to explain to people precisely what was going on, how their Facebook page worked and how YouTube worked, and to give them time to consider a number of scenarios about positive and negative consequences. What came out of that was that, first, when people learned about how these systems operated, there was a degree of surprise; indeed, when we pointed out, for example, that you might be able to identify that a teenager was suffering from anxiety, this produced a significant degree of concern.

As we worked through this and people understood how these systems operated, where they landed was, “This is a useful technology and we don’t want it banned, but we think that these harmful activities that can result from it should be regulated”. The clear majority are in favour of statutory regulation, with a sensible, balanced understanding of how that could be achieved, focusing particularly on balancing freedom of speech: people might be allowed to say something, but that does not mean it is a good idea to recommend it and promote it to hundreds of thousands of people across the country.

That is where people ended up. They did not want this banned but they wanted it to be regulated, and for that regulation specifically to address exploitation of vulnerabilities and damage to society, for example democracy.

Lord Lipsey: Does it sound to you as though the Government’s Online Harms White Paper is getting quite close to where public opinion would like it to be?

Roger Taylor: Yes.

Q189       Lord Black of Brentwood: Your report on social attitudes drew a distinction between general online targeting and political advertising, particularly in its impact on individuals, which of course gives rise to the question whether and how they should be treated differently in public policy and regulatory terms. What evidence of support have you found for treating political advertising and more general targeted advertising in that way?

Roger Taylor: There are two issues. One is just that, in the quantitative surveying, more people were concerned about the potential misuse of political advertising than about commercial advertising. To some degree, that just reflects levels of trust in society more generally, but there was that background. In the further discussion groups about which issues we need to focus on, people were concerned that political advertising online could affect democracy adversely if not properly managed.

Lord Black of Brentwood: We have heard evidence from the Advertising Standards Authority, among others, and most people, when asked about regulation in this space, would, probably unprompted, name the ASA as an obvious regulator. Do most people think that political advertising is in fact already regulated in some way, or do they think it is a completely unregulated space into which some form of regulation ought to be put?

Roger Taylor: From our conversations, what people were concerned about with microtargeting was the degree to which you could separate out a group of people and talk to them in a way that was potentially manipulating or misleading them. There are two ways of responding to that: one is about the areas where people were concerned that a regulator should be able to act; the other is just making things transparent: if people can know what is being said, that in itself provides a high degree of protection.

On political advertising, the ability to know what people are being told by different political parties was the mechanism that people were in favour of as a way of addressing concerns about political advertising.

Lord Black of Brentwood: It is transparency.

Roger Taylor: Transparency, yes.

Q190       The Chair: We have just heard from the Estonian representative that in Estonia the assumption is that the digital world would be regulated or that the laws applying to the digital world would be the same as those applying to the physical world, and that it would be hard to make a distinction. That being the case, we know how heavily regulated party political broadcasts, for example, traditionally are; we have been brought up with that and things like individual constituency sums of money spent, et cetera. Do you think that people are absolutely confused as to what is going on, or do you think that they are prepared to accept that digital is different?

Roger Taylor: This was very interesting in the discussion groups. People could clearly see there was a difference. Lots of parallels were drawn between broadcasting and books and magazines: “If it is okay in a magazine, why should it not be okay online?” But there was a clear understanding that the costs of widespread distribution have now fallen so low, and the precision with which you can target is now so great, that things can now be done that you simply could not do with traditional media, and that calls for new regulatory structures. People understand that digital is different and calls for some quite specific regulatory efforts.

I might draw your attention to one aspect in particular, which is quite interesting, and that is the recognition that often we are dealing with situations where there is considerable public concern, to your point, but uncertainty as to the degree to which the problem is really occurring and how to behave in response. The notion of addictive technology and how it is affecting young people is an issue of concern that people are worried about, but there is a lack of a clear evidence base on the extent to which this is going on and the extent of the problem.

The regulatory response needs to allow for an investigation and an understanding of the extent of harms that may be taking place and that people are concerned about. The regulator needs to have powers to do that kind of work in order to establish the most effective interventions. There are some things in political advertising where we can say, “Transparency in political advertising and political ad libraries would clearly help to address this”, and there is really no reason why we should not move forward with that now. There are other areas where there is a role for evidence-gathering as well as for direct regulation, because the extent of the issue is unclear at this point: digital technologies are inherently less transparent, so it is less clear how they are operating, who is seeing which messages and what the consequences are.

The Chair: Just to refine it, because Lord Black is going in a really interesting direction here, it is not so much a question of whether there is regulation as who does the regulating. Is that fair?

Oliver Buckley: It is who and how. To your opening point, there is probably a lot of consistency at the level of principle. The principles that we bring to bear, for example in the way we approach political discussion and advertising in an analogue world, largely apply in a digital world. We have been used to an environment where the marketplace of ideas in political discourse enables claims made by one party to be challenged and questioned by the opposition or by the media. We saw in the transition to digital the ability for some of these messages to go uncontested. Where we push for transparency, it is to enable the same principle to become effective.

The Chair: That is enormously helpful.

Q191       Lord German: Can I take this slightly wider? You talk in the key recommendations of your online targeting report about accountability, transparency and user empowerment. Where would you put the relative importance of the ability for people to have more user control over what they do, as against understanding what happens, which is the digital literacy bit? They are obviously not exclusive, but where would the balance between those two be?

Roger Taylor: Those two go hand in hand. The strong sense from the public was that their preferred solution was greater control, but when they tried to think through how they might exercise that control with what is available to them now, they recognised that in reality only so much could be achieved through user control. That does not mean that we should not do more and try to give people as much control as possible, but there was a recognition that user control in itself is not going to answer these problems. There needs to be regulation, media literacy and a better understanding of how these systems operate.

Lord German: Is the media literacy bit just as important?

Roger Taylor: Media literacy and user control go hand in hand. With media literacy, you can have education programmes in schools, education programmes for the general public and advertising, promotional and information campaigns run by regulatory agencies, but part of that spectrum includes the labelling of products and the information that is made available to people. For example, much greater clarity in labelling where advertising is coming from is part of giving someone user control but also part of educating the public about how online systems work.

I will give you one very small example of this. When we were publishing our report, we sent out some ads on Twitter targeted specifically at people who were involved in, and were influencers in, the debate on online media and microtargeting. If you clicked on “Why am I being targeted?”, it said, “You are being targeted because you live in the UK”. You could see that that was clearly not why you were being targeted. The advert said, “We are targeting you because we think you are part of this discussion”. There is a real issue there, and giving people much clearer information that they can access about why they are being targeted is part of what will drive media literacy and address that general public understanding.

Q192       Lord Mitchell: Good afternoon. Has there been any overlap between your project on online targeting and your project on algorithmic bias? If so, what evidence have you seen of biases in online recommendation systems?

Roger Taylor: There is clear evidence of bias in online targeted advertising, often driven unintentionally. For example, in many scenarios it will cost more to advertise to women than men. If you do not know that and you run a job ad, you might find that you are advertising more to men than to women without meaning to, just because you are trying to control the costs.

The way these systems operate can result in, and have been shown to result in, unintended biases. In the US, we are seeing court cases being taken against Facebook because of this. There is an overlap, where microtargeting systems are one type of automated algorithmic decision system that can result in bias.

Oliver Buckley: To be clear, it is not that you have set out to target men because they are cheaper; rather, that you have a budget and the system automatically optimises where it places ads according to cost, and that just means that you may find it surfacing more to men than to women.

Q193       Baroness Morris of Yardley: The report talks about the need to audit algorithms and to share data with researchers, which both seem eminently sensible things. Could you say a little about what best practice might look like and what the barriers are to this actually happening? I was particularly interested in sharing data with researchers, because that always ends up being a problem and I am never sure why.

Roger Taylor: It does. It needs to be done in a way that respects personal privacy and commercial confidentiality, but we believe that within that space there are mechanisms. To illustrate the point, many of the platforms themselves have engaged external researchers and done extremely valuable research to understand, for example, some of the impact of social media on mental health. There are examples of this happening already. It can be done in ways that respect people’s privacy and commercial confidentiality. We are arguing that the regulatory regime that is brought in should have the power to instruct that certain types of questions be explored in this way. We cannot rely on the good will of social media companies.

Baroness Morris of Yardley: It is tied up in that. Who would be able to do that? I imagine that it would be the Government, or someone like that, collecting the information in order to come to a decision to develop policy. If you are a researcher who is not researching for government but you have your own agenda, would that freedom apply to you as well?

Roger Taylor: At the moment, we envisage the regulator being the organisation with the power to instruct something, and it would have to be within specific parameters; this would be a controlled power, not untrammelled, used to address issues of public concern.

There are different kinds of scenarios that we might envisage. One might simply be trying to collect enough data to establish whether, say, a point that has been agreed as part of a code of practice is, in fact, being followed. If, as part of the code of practice, it is agreed that you should promote trustworthy health information sources over untrustworthy ones, and the regulator believes that is not going on, how would it go in and address that question? That might be something the regulator does directly.

This is an opportunity for the UK, because these are very complex systems and the understanding of how they operate outside the companies themselves is growing. It is growing as people leave the companies and move into academia, and as people in academia do more research. We have very good people in this country who are working on this question. What is good about this situation is that we have the talent and the capabilities in this country to be able to do this work.

Baroness Morris of Yardley: What do you feel motivates the technology companies not to do this voluntarily and to need legislative underpinning from the regulator? Is it mainly commercial worries?

Roger Taylor: Yes, and I can understand their point of view as well. One of the reasons for calling for regulation, as many of them have, is a desire for there to be a level playing field. You can, with the best will in the world, step forward and set a higher standard, but then find that all that happens is you are the company that is named in the newspapers as associated with a particular harm, when in fact you might be doing better than anybody else but all that everyone will remember is the name of your company.

Q194       The Chair: Given the slightly different nature of the major digital companies (the difference between Facebook and Amazon, for example), do you think these reluctances to actually engage will, in the end, benefit some over the others? I am very struck by the fact that Amazon’s relationship, for example, is essentially a consumer relationship but is voluntary. Facebook and Google are very different.

Picking up on the business of a level playing field, who would the creation of a level playing field benefit most, and how could we encourage it in our report?

Roger Taylor: It is a very good question. We say that a regulator should focus its attention first on large-scale platforms with the capability to disseminate user-generated content. That is where there is the greatest risk of exploitative material and malicious material being promoted, or just stuff that people are perfectly entitled to say privately but which it is really not a good idea to circulate to mass audiences. Certainly in our discussions with the public, that was a much larger concern than, for example, issues to do with the commercial use of data.

We are suggesting that a regulator might focus its attention somewhere initially because that is where there are particular risks that need to be addressed. I am not sure I would quite frame it in terms of whether those are the companies that have the most to lose. You could, conversely, say that they are the companies that have most to gain by there being a clear regulatory landscape in which they operate, because it is their reputations that are suffering the most from the current situation.

The Chair: That is very helpful.

Q195       Lord Lipsey: What I find curious is that the costs to the companies of keeping this stuff semi-secret seem to be very high indeed and the benefits to them are not absolutely clear to me. There is commercial confidence, but how important is that in this context? You keep wondering whether there is something else behind it and whether they are actually doing something very naughty behind these algorithms and do not want to have them out. Can you shed any further light on that?

Roger Taylor: If I was in their shoes, I might be worried that a lot of areas that we are talking about here are not simple to answer; they are often quite contentious. If we are talking about people’s mental well-being or about political polarisation, radicalisation and extremist views, we are dealing in areas where there are issues to do with freedom of speech and regulatory protection of public spaces and public dialogue, depending on how you regard these spaces. There is a risk of somebody taking a lot of information and constructing an argument that you are the devil incarnate. There is a lot of scope for interpretation in these issues.

There is a need to feel that there is an appropriately governed structure for the way this conversation takes place. That is how I would put it. If you are asking whether there is something terribly wicked and secret that no one is owning up to, the answer is that we do not know, but my guess would be that that is not the issue here. The issue here is more a concern that this is conducted in a fair and balanced way, and that if somebody uses the data to make a particular claim, that claim can be contested and they have an opportunity to understand why someone has come to that opinion and perhaps present an alternative viewpoint, if they feel it is unfair.

Lord Lipsey: If the rules apply to everybody in due course, and if they are fair and sensible rules with a reasonable regulator, everybody should be in favour of that, because the costs to the firms are just not high enough to be worth resisting it.

Roger Taylor: Yes, I agree. That is why we are seeing many large firms acknowledge that they are now in a place where regulation is needed.

Lord Lipsey: I was involved with an organisation called Full Fact. Companies approach Full Fact and say, “Will you check our facts for us?” Subject to the necessary safeguards, they are very happy to do it.

Roger Taylor: There have been a lot of those initiatives, which are very admirable. However, statutory regulation will be needed to get everybody there.

Lord Harris of Haringey: Can I just follow up on this point about regulation? In another context, I have been told that the social media companies, particularly in circumstances in which it is about user-generated content, would much rather that there was a statutory framework that limited what could and could not be done rather than having to make some of the judgments themselves and then risk challenge, particularly in jurisdictions where there is a very strong presumption about free speech. Is that something you have come across?

Roger Taylor: Yes. One of the things we have been keen to try to separate out in looking at microtargeting is the difference between illegal content, where the response is to take down, and the more difficult area that is identified in the Online Harms White Paper, which is legal but potentially harmful content. It is not illegal to say that you believe vaccines are poisonous. That is a matter of free speech. But if you have an ability to promote information to very large numbers of people and if you were to promote that viewpoint more frequently than the viewpoint that your children might be at risk if you do not vaccinate them, you might be regarded as acting irresponsibly.

It is about separating out those freedom of speech issues from, as it is sometimes put, freedom of reach. Freedom of speech is not the same as freedom of reach. The ability to disseminate information very widely carries its own responsibilities.

Q196       Lord Harris of Haringey: That is helpful. Could I ask you about the accountability mechanisms and how effective you think they are for disincentivising harmful targeting or harmful algorithmic practices? Given the answer to that, what mechanisms, if any, do you think are needed?

Roger Taylor: We are strongly recommending that a new regulator be brought in here, because we do not believe that the current structures are adequate.

Oliver Buckley: I will come back to that, but first I want to elaborate on your previous question about the platforms’ desire for clarity from others about what might be acceptable or unacceptable online. It is worth flagging that there are two drivers of that. On the one hand, there is just not wanting to be put in an uncomfortable position, making what might seem to be invidious choices that are then liable to be criticised. On the other hand, these platforms operate at a scale that requires, as you know, a large degree of automation in the way content is regulated. The more specific the rules, the easier it is to write them into the code and the algorithms; and the less clear the rules, the more human intervention is required and the more difficult it is to scale.

To come back to the question, the importance of codes of practice being developed is that they start to clarify where the lines may be, and we might envisage that over time a body of best-practice standards will develop to better enable platforms to demonstrate whether or not they are complying with the duty of care.

Lord Harris of Haringey: Could you be a little more precise as to what that mechanism might look like?

Roger Taylor: The proposal at the moment is for an online harms regulator. There would be a code of practice on the use of targeting, which would specify what would be regarded as irresponsible use of targeting. Promoting gambling ads to somebody who you might identify as having a gambling problem, for example, would clearly be an irresponsible use of targeting. Promoting malicious and inauthentic news over reliable news sources might, in a code of practice, be regarded as being a misuse of a targeting system.

I should stress that we have not specified in detail in our report the elements of a code of practice, but we have exemplified the types of issues which the public are troubled about. We have been very specific about ad transparency. The requirements for political advertising should be drawn very broadly and should apply not just to electoral advertising. One of the issues with the way the online world works is that you can be in a mode of constant campaigning; you can be gathering information all the time about what kinds of messages people are responding to and building lists of people and cohorts of citizens who share a common perspective.

That ongoing use of paid-for advertising in that political space should be transparent throughout, permanently transparent, much in the way Facebook has proposed, but, again, with a regulator, because, as we have seen with Facebook, there are loopholes. You can get round it. It needs to be done to a standard format, so that civil society can easily engage with this information, can analyse it and can understand who is saying what, where the money is being spent and what messages are being put out.

We have said the same thing about opportunity advertising, going to the point about bias. With job advertising, the first and simplest step is transparency, because if it is possible for people to see that job ads are being promoted primarily to particular groups, such as men rather than women, that will itself create a pressure on organisations to respond and, indeed, provide civil society or individuals with leverage to act against organisations that appear to be advertising in a biased way. We have been very clear about that element of it.

The other key element that we focus on, which needs perhaps further development in the White Paper, is the extent of the data-gathering and data access powers which the regulator needs to have. It needs to have strong powers akin to those of the CMA, for example, in order to be able to establish where harms are occurring.

Lord Harris of Haringey: The code of conduct would not be a voluntary code. It would be a code that would be there. There would be enforcement, there would be teeth and it would not rely on the good will of the various companies. They would essentially have to comply with it.

Roger Taylor: Yes.

Lord Harris of Haringey: You would then rely on civil society organisations to say, “Yes, but they are complying with the letter rather than the spirit”. Is that the sort of feedback loop you envisage?

Roger Taylor: Yes. We would expect the regulator to act directly with regard to the spreading of malicious, harmful information or targeting vulnerable people with inappropriate material. There are other areas, such as bias in job advertising, where we are saying that, in the first instance, greater transparency would create the pressures on organisations to try to advertise fairly and, if they did not, would also provide a basis on which people might decide to take further action. That could be a regulator or civil society.

Lord Harris of Haringey: Who would you expect to make the decisions about vulnerability? Would that be defined in the code, or would it be a generic statement in the code which people would then have to interpret in their own way, and if the regulator says, “Hang on. You really are not interpreting this very sensibly”, you can come back on it?

Roger Taylor: On the one hand, it is important to be principles-based, but, on the other hand, specificity or exemplifying what those principles mean is enormously helpful in enabling people to know how to follow it. That is really important.

On the question of it being enforced, regulation depends, to a significant degree, on people accepting the legitimacy of the regulation, so there is an issue here about building codes of practice where the industry understands how they are being developed and why they are developed that way and can respond appropriately to them. There is a task there.

Oliver Buckley: This is further reason why we are particularly keen to see greater access to platform data to enable the kind of research that will help to build the evidence base to underpin good regulation. You establish the principle and then you start to develop an evidence base that says, “Doing things in this way tends to enable the manipulation or the targeting of that kind of vulnerability”, and so over time we develop a better sense of the design choices and their consequences and therefore what good and bad practice look like.

Lord Harris of Haringey: We are interested, ultimately, in whether or not this can influence elections. Could a vulnerability be defined as gullibility or stupidity (in other words, “We have identified this group of people as much more likely to believe this nonsense”), even though all of us in this room would, of course, recognise it for the nonsense it is?

Roger Taylor: I will simply say where people landed in the debates with the public. There were stories of people who were perhaps emotionally vulnerable or had mental health issues being caught up in very extreme politics, often drawn in through information that did not breach the law but attracted people’s attention. You could clearly see that some of it was being done potentially quite maliciously. People were very clear that that could be regarded as an exploitation of vulnerability which they would expect a regulator to address.

Q197       The Chair: I do not think many people, certainly around the table at the moment, would disagree that the creation of judicial review has, on balance, been a very good thing, and yet, at the same time, it is somewhat under attack now; I see it as quite a consolidated attack, at least an attempt to diminish its impact.

You talk about the regulatory situation. Let us say the ONS comes out with a clear set of statistics on whatever issue, but it is somewhat embarrassing to the Government, and the Prime Minister goes on television and says, “Actually, that’s piffle. That’s just their view”. Where does the online harms regulator cut in? Let us say lives are at stake and the possibility of the Government not taking the ONS stats seriously is potentially very damaging.

I am trying to reach for what kind of framework you create and how the online harms regulator operates. You have Ofcom, you have the Electoral Commission and you have the ASA. How do you make this bite? How do you create a situation where authenticated data is protected from attack, possibly by the Government themselves?

Roger Taylor: There are rules on the publication of official statistics, and we have seen the statistics regulator take the Government to task at times for the way they have quoted statistics, so there is a mechanism there. There is the Representation of the People Act with regard to saying things about candidates. In the broad cut and thrust of political debate, we have landed on the traditional mechanism of creating a free and fair public debate in which ideas can be contested, you know what is being said by your opponents and you can respond. It is worth saying that creating this free and fair debate is difficult.

I will make one other point about this. The way digital technologies work is quite interesting. It is very hard to reach people who disagree with you, because the algorithms are designed to present information to people who are likely to respond to it, so if you actively want to send a message to people who do not agree with you, you have to work really hard to make it happen, and it can be quite difficult to do. We are definitely saying that we should be aiming to create a free and fair debate. That is what democracy is about. It is about the citizenry ultimately making a decision, rather than a regulator of the truth of statements in political contests.

The Chair: I am making a meal of this, simply because we are living in an era where many of the rules that we assumed, even three years ago, are being questioned. There is this marvellous little booklet that Peter Hennessy has just put out referring to the end of the good-chaps concept of government. It is quite important. We have a debate here on Thursday about the future of the BBC. When all those things are up in the air, it is pretty difficult to create a new regulator that has new forms of authority, which most of the drift at the moment is against or is trying to scrap.

Roger Taylor: The platforms have stepped up their efforts against co-ordinated inauthentic action. They are very much engaged in that and can be persuaded to go further. Having a clearer level playing field about promoting reliable information over inauthentic and malicious information will help to address these issues.

Ad transparency in politics is the crux here. It is about everybody knowing the messages that are being put out there and being able to contest them. Crucially, we need to make sure, particularly in elections, that communication works in a way that allows people to reach the audiences they need to reach.

I am less pessimistic than you are. A new regulator will be able to make progress. It is a new area. As Ollie pointed out, the way these systems work means that it is automated systems that are determining, at least in the first instance, whether or not something is a political ad or what kind of person would be interested in this kind of information. There will be a learning curve. There will certainly be an issue about building the right level of capacity and knowledge inside a regulator to be able to address that. The skills involved are still very heavily concentrated inside large companies. There are some real challenges about making this regulator work, but it can have real teeth.

The Chair: I do not wish to sound unnecessarily pessimistic. If we used the word “transparency” a decade ago, every one of us would have known exactly what we meant. The word itself now is up for grabs, which is very troubling.

Q198       Lord Lipsey: We are looking at the transparency thing, but there is also this question of the code of practice, which in general we can all be in favour of. If you look at existing regulators, the quality of the code of practice that they employ varies hugely. Take the ASA, for example: leaving aside its online activities, its code is very widely accepted and nobody argues with ASA decisions, et cetera.

Financial services started doing exactly as you suggested. They laid down some principles first, there were going to be examples and it was going to be light-touch regulation. Every year, the rulebook expands by another vast amount without appearing to increase the effectiveness of these damned organisations at all. I sat on the board of one of them, so I do know what it is like.

We need to be a bit careful in saying, “Have a code of practice and you have solved it all”, because the content of that code of practice, and the broad guidance we give the regulator as to which direction we want it to go in, is terribly important.

Roger Taylor: I agree. Going back to your initial point that sometimes there is a dual position, “This is terrible” or “This is very good”, the public’s position was that there was no sense of crisis here. There was a sense that this was a useful technology, but that it was being misused and it needed to be reined in. There was no sense of imminent catastrophe.

It will take time for a regulator to establish itself and its working methods. Part of what it will have to do is investigate some of the concerns that have been raised and try to establish the degree to which they are legitimate and need to be addressed. We are not suggesting that this will transform the world overnight, but we are suggesting that this is a problem that it is appropriate to address in that way. Things like political ad transparency could be implemented relatively swiftly, but there are other issues that will take time; you are quite right.

Q199       The Chair: That leads perfectly to the final question. If the Government could do one thing to improve the regulation of online targeting, what would it be?

Roger Taylor: We would move forward with the creation of the online harms regulator, and, specifically, give it strong data-gathering powers and strong penalties to give it teeth.

Oliver Buckley: It needs data-gathering powers for its own purposes but also to enable appropriate research that will help build the evidence base that will enable us to make more informed choices about best practice and the impacts that these platforms are having.

The Chair: Thank you very much indeed. It has been very helpful.