
Science, Innovation and Technology Committee 

Oral evidence: Governance of artificial intelligence (AI), HC 945

Wednesday 10 May 2023

Ordered by the House of Commons to be published on 10 May 2023.


Members present: Greg Clark (Chair); Aaron Bell; Tracey Crouch; Katherine Fletcher; Stephen Metcalfe; Graham Stringer.

Culture, Media and Sport Committee: Kevin Brennan.

Questions 327 - 411

Witnesses

I: Jamie Njoku-Goodwin, CEO, UK Music; and Paul Fleming, General Secretary, Equity.

II: Coran Darling, Associate, Intellectual Property and Technology, DLA Piper; and Dr Hayleigh Bosher, Senior Lecturer in Intellectual Property Law, Brunel University.

Written evidence from witnesses:

 


Examination of witnesses

Witnesses: Jamie Njoku-Goodwin and Paul Fleming.

Chair: The Science, Innovation and Technology Committee continues its inquiry into the governance of artificial intelligence. Today, we are looking at the impact of AI on the creative industries.

We are very pleased to welcome our colleague, the Member for Cardiff West, Kevin Brennan, who is a member of the Culture, Media and Sport Committee—it changed its name very recently, as we did—and has taken a long-standing interest in the creative industries and their governance. Welcome, Kevin.

There are some interests that Members want to declare.

Tracey Crouch: I want to place on the record that I was a Minister in DCMS when Jamie was its special adviser.

Kevin Brennan: I have received hospitality from UK Music. I am a member of the Musicians Union, the Ivors Academy and PRS for Music. My brother is a member of Equity.

Q327       Chair: As no other Members want to declare any interests, I am pleased to introduce our first panel of witnesses.

Paul Fleming is the general secretary of Equity, the actors' trade union. In fact, it represents professionals across the entertainment industry. Jamie Njoku-Goodwin is the chief executive of UK Music and, as Tracey just said, is a former special adviser at the Department for Culture, Media and Sport and the Department of Health and Social Care. Welcome, both. Thank you for joining us.

Mr Fleming, can you give us your appraisal of what you see as the actual and potential impact of AI on your part of the creative industries?

Paul Fleming: Thank you for having us. It is important to start by talking about where the big impacts are. You can break them into two different groups. One is performance synthesisation, which is the creation of a completely new performance using AI. The other is the use of AI to enhance or, indeed, create a performance itself.

In real terms, what does that mean? In the latter category, you might have a crowd scene—marauding hordes running over hills that have been multiplied thousands of times—or somebody in a video game, where the very nature of recording their body movements, sound and voice is to use them in the game.

The second category, pure performance synthesisation, is the scraping of data—the collection of different voices—to create a windscreen advert in commercial radio. Those sorts of very predictable jingles and adverts that you hear are increasingly created purely by computers. They are things that are created from a mash-up of voices and sounds from different people. They are entirely new products or performances.

Within that, there is a subcategory, which is the deepfake. That is the malicious use of that technology. When we talk about performance synthesisation and the creation of new performance, we tend not to be talking about deepfakes, because they are very clearly a moral hazard. They are very clearly a problem. They are not a use to be regulated—they are something to be banned—but we can talk about them in a bit more detail.

What is the impact? The impact, very straightforwardly, is job and labour displacement. I referred to the windscreen adverts. It is a pretty stable part of our members' income to do straightforward commercial radio adverts. It is the sort of thing that allows them to work in the more artistically fulfilling areas of the industry, which are traditionally lower paid—in particular, theatre and so on. That is a stable thing for people to do.

Who are these members? They are people who tend to be able to work from home, in home studios and so on. Obviously, the people in that group are disproportionately women and people with caring responsibilities, but the nature of casting people in audio work means that it is a great point of access to the industry for people from other under-represented backgrounds—black members and so on. Access impacts on disabled artists are minimised when they are doing that sort of work. That opportunity to earn is being reduced. Even in the crowd scene scenario, rather obviously, one has jobs being displaced. They have been displaced for some time through that work.

This is not a doomsday scenario. We are certainly not doom-mongers about the technology itself. It presents a lot of other opportunities for work creation. Video games, as I mentioned before, are impossible to do without an awful lot of AI technology. That presents a whole new frontier of income potential and new work for our members.

This is not something to be frightened or worried about. Indeed, there are ways in which even performance synthesisation can be used to generate income for artists and to respect their moral and legal rights going forward. When Peter Cushing was reanimated in Star Wars, which was some time ago now, the union, through its collective bargaining and our collective agreement, negotiated payments and rights protections for the estate of Peter Cushing.

Straightforwardly, what is the impact? It is income displacement on the downside, and new opportunities on the upside, but those new opportunities have to exist within a platform or framework of proper regulation to allow collective bargaining to thrive and to protect the moral and legal rights of our members.

Q328       Chair: That is very clear and comprehensive and sets us up well. Is it not the case that the move to filmed and broadcast entertainment was a fundamental challenge to live acting? Yet it turned out not to be a fatal threat, and gave opportunities to people in the creative industries, because a series of rights and protections were negotiated around that time. From reading your evidence, I think that that is what you are proposing. You think that there should be a similar set of rights and agreements to govern AI in the creative industries. Is that right?

Paul Fleming: That is a fair summary. I have commented before that the first time that an Equity general secretary gave evidence to a parliamentary Committee was very much in the context of cinema becoming a new boom in the 1930s, which was going to destroy theatre and music halls. We then had the same panic about how television was going to destroy cinema. We have recently had a panic about how streaming was going to destroy television and broadcasts at a particular time.

None of those things is true. What happens is that they simply add to the landscape. If people are given more leisure time and more income, they will do more stuff around the creative industries, entertainment and culture. I do not think that there is an intrinsic threat, but there are obviously problems with how the unregulated environment is currently operating. I am sure that we will get into those shortly.

Q329       Chair: Mr Njoku-Goodwin, can you relate that to the music industry? Can you talk to us about the impact of AI at the moment and what you see being around the corner?

Jamie Njoku-Goodwin: I would like to start by thanking you for having this session. I think that the impacts of AI have huge implications across society. There is a huge debate about AI going on in general. Sometimes it feels like the creative industries have been left out of that debate, but it will have a huge impact across the creative and cultural sector. I am incredibly grateful to you for having a session and putting the focus on this issue.

I would split it into two when it comes to AI in the music industry. I talk about AI as an assistive tool and AI that is generative. With regard to AI as an assistive tool, the industry has been embracing technology and innovation for decades. We use AI to identify copyright infringement, to look at audience analytics and for business models. Across the industry, there has been a real appetite to use new technologies and innovations.

There has been a lot more concern about where generative AI is going, particularly as regards how that interacts with rights holders and the current copyright regime. It is a basic principle that, if you are using someone else's work, you need permission for that and must observe and respect the copyright. That is the foundation of the industry. It is what the industry has built its success on.

The current model and innovations in AI are posing all sorts of challenges to that. The key thing for us as we move forward is that, yes, we want to embrace AI and want the UK to be forward looking and a world leader in AI, but it does not have to be an either/or. You do not have to throw your creative industries on the scrapheap.

How do we grow our AI sector in tandem with the creative industries and make sure that we have the right frameworks in place to ensure that our sector can continue to be a huge success? Accounting for 6% of the UK economy, the creative and cultural sector supports millions of jobs; it is a really important sector for this country. As we look to embrace these new technologies, we as an industry think that it is critical that we have these debates and conversations and make sure that we are growing our industries in tandem with each other.

Q330       Chair: Can you help us to think about what this means in practice? Mr Fleming was very helpful in describing crowd scenes, for example, that could be generated by AI, or radio ads that could be voiced by machines rather than humans. What is the equivalent in music now, and what could be around the corner?

Jamie Njoku-Goodwin: One example, particularly in the generative AI space, would be for an AI company to take a load of music and train an AI database on it. You would then generate new AI works that were based, essentially, on what had been ingested or what the AI had been trained on.

If the AI company is doing that by going to the relevant rights holders to seek permission to copy those works to use them to train an AI on, that is absolutely fine. There should be licensing solutions for that. The real challenge that we have as an industry, and one of the big worries, is that you have people taking the work of creators, which has a copyright attached, feeding it into an AI, using that to generate so-called new works and then, potentially, being able to monetise off the back of that, without respecting or recognising the inputs. The difference between the input and the output is incredibly important when it comes to working out what we need to be doing for a regulatory framework going forward.

Q331       Chair: Does that differ in any fundamental respect from people who generate text, for example? At the moment, the internet generally, bolstered by AI, can analyse large numbers of documents that people may have published for a particular purpose and distil from that some useful information. Is there a defensible difference between text and, in your case, peoples voices or performances?

Jamie Njoku-Goodwin: One of my frustrations around this debate is that we are reducing creative output to data—to numbers, sometimes. Rather than being treated as a creation that someone has created, to which they own the copyright and about which they should have the right to say, “I want to license this,” or, “I don't want to license this,” it is being seen as personal data, in the same way as health data, for example.

There are a number of differences in this space. Take the proposal that we have had for text and data mining—the idea that you should make it really easy for companies to take a whole load of data, feed that into an AI, take patterns and, essentially, identify new things from that.

I used to work in the health world. If we are approaching that from the health space, looking at health data and health information, text and data mining can be absolutely transformative and do amazing things, but there is a difference between static data and creative content that someone has created. They own the copyright to that creation and should therefore have the right to decide what happens with it.

Chair: We will go into some more detail on all of that. I turn to my colleagues, starting with Stephen Metcalfe.

Q332       Stephen Metcalfe: Good morning. Thank you for joining us. Paul, you mentioned deepfakes. Are deepfakes by their very nature, as the term suggests, always malicious—that is, something that is designed to mislead? Can you both give examples of deepfakes that you have come across within your individual sectors, or do they not exist? What impact do they have? How widespread are they? Paul, I think that you should start.

Paul Fleming: “Deepfake” is a slightly emotive term. I go back to what we understand them to be. There is performance synthesisation, which is the creation of something completely new that never happened. It may be a series of static images that are then brought together into a video—an image of a politician slowly moved into saying something that they do not particularly want to say, or perhaps something that they do want to say.

There are instances, as in the Peter Cushing example, where what you have is his performance, put together entirely with the consent of his estate, with the rights holders, through our collective agreement, to synthesise a performance. That is not a deepfake as we would use the term because there is nothing fake about it. The rights holders have been consulted and respected, and the collective bargaining framework has been used. It is performance synthesisation. A substrand of performance synthesisation does not involve the consent.

You can look at some of the work that we have been doing with our American colleagues—our American sister union, SAG-AFTRA. They say that over 90% of deepfakes that affect their members, who are a very similar demographic to ours, are pornographic, and the overwhelming majority are of women. Obviously, it holds a particular importance to our members because of the nature of their brand—their image, their face, their performance, their intellectual property and who they are—so it is of particular interest to us, but it is of broader interest to society as a whole. Deepfakes are not specifically a performance issue. They have a particular importance to performers, but they are malicious performance synthesisation. Legitimate performance synthesisation is a new form of work and new art form that can be regulated and so on.

Q333       Stephen Metcalfe: I will pick up on that in a moment, but I want to give Jamie the opportunity to reply.

Jamie Njoku-Goodwin: There have been a couple of issues in the news in the last couple of weeks, particularly the Drake case. I know that there are potential legal issues around that, so I want to be careful about commenting on it.

One of the examples that I would use goes a lot further back—back to the 80s. Ford asked the artist Bette Midler if it could use one of her songs. She did not want that, so Ford found someone who sounded quite a lot like her and used that person in its advert. It was then sued by Bette Midler for, essentially, passing them off as if they were her. That was not AI—it was using another person who sounded something like someone.

We do not have the same laws around image rights and personality rights here in the UK. They exist in the US, but not in the UK. You now have a system where it is very easy to create a likeness or the sounds of someone, but we do not have the legal framework to protect against that.

Q334       Stephen Metcalfe: I want to pick up on that. That is fake. Does it qualify as a deepfake because it is malicious, or is it just a copyright infringement issue? Are there examples of where that has been used to misrepresent a musician?

Jamie Njoku-Goodwin: Artists would generally feel quite maligned if that happened. It is a very personal thing. People do not create music purely to monetise it. It is a deeply personal, deeply individual act. Having someone else, or something else, purport to be you and present as if they are you is, as Paul said, tantamount to one of you seeing yourself on BBC News saying something you did not agree with or did not want to say. Even if you did agree with it, it was not actually you saying it. It is deeply challenging to your views of personal freedom and individuality.

Q335       Stephen Metcalfe: How widespread is that in your sector?

Jamie Njoku-Goodwin: It is one of the things that we worry will grow and become much more widespread.

Q336       Stephen Metcalfe: Okay. Accepting that there is this problem and—going back to performers—that there is performance synthesisation, with permission, and deepfakes, without it, have you come across any systems that would tackle this? For example, I am aware that there are those who are investigating digital watermarks for images, so that you can establish whether this is the original image or whether it may have been tampered with. Are you aware of any way of doing that, either in music or with images? Do you have any suggestions as to how we might tackle it? It is all very well saying, “Regulate it,” but perhaps the same people whom we are trying to regulate will ignore that.

Jamie Njoku-Goodwin: It is difficult, but I know that it is something organisations and companies across the sector—and in the tech sector—are working towards. One thing we are quite keen to be engaged with the tech sector on is having a conversation about how we can use AI to identify copyright infringement, to work out where these things are. As you say, it can be very difficult.

One question that is often posed is, are there things we are listening to and seeing today that are deepfakes? As Paul said, it is a problematic term, for a number of reasons, but, for want of a better one, are there things that are deepfakes, though often we do not realise it? It is the sort of thing we want to engage on more, particularly with the wider tech sector, to try to work out whether there are tech solutions to this.

Paul Fleming: The problem with the debate around AI at the minute is that you technologise a lot of people out of it. I am not an IT whizz—I am a trade union bureaucrat. I do not understand the technology that goes behind a conventional film, let alone AI. That is not my role. Our role as a union is to regulate the relationship between the engaged and the engager.

One of the problems is that very often we look for technological solutions to something that is essentially a social, moral and legal problem. There is not adequate legal protection for performers' performances. The copyright provisions are out of date in that respect. In this country, we do not have a strong way of regulating the use of people's images, irrespective of what technology goes behind that. We have not ratified the Beijing treaty, which we have signed up to and which moves where the rights framework for performance makers is.

None of those is a technological question. I am sure that there are things that can be done technologically in order to safeguard and improve rights that exist. The problem, I suppose, is when those rights do not exist or are not strong enough. I accept entirely that there are lots of technological questions, but very often this debate ignores the very human impact that we would consider in any other area of changing working lives.

Jamie Njoku-Goodwin: The reason that I used the Bette Midler example was to make the point that we do not have the adequate frameworks in place for the tech people before you even get to AI. When you are having this conversation now, there is definitely a conversation that we need to have about what sort of protection we need for image rights and personality rights. This is an issue that is going to grow and grow, not just in the music world but across entertainment, the creative industries and society more broadly.

Stephen Metcalfe: That is very helpful and very clear.

Q337       Kevin Brennan: Good morning, everybody. Paul, you seem quite sanguine about AI—surprisingly so, some people might think. Are your members as chilled about these developments as you seem to be?

Paul Fleming: This is the first time that I have been described as chilled. I am a Luddite in the true sense of the word. The Luddites have a rather bad rap, as people who hate new technology. Our members do not hate technology. They are not particularly frightened of new technology. What they dislike is what the Luddites originally talked about, which is the malicious use of that technology in order to undermine their terms and conditions. Over 60% of our members believe that AI is going to have an impact on their work, but that rises to over 90% for audio artists.

Q338       Kevin Brennan: Isnt it an existential threat for audio artists, in the sense thatas you quite rightly described earlierthat kind of work forms the basis for the income of a lot of actors? It is just going to disappear, isnt it—except for a few celebrities, who might have their own soundalikes generated?

Paul Fleming: Quite, although recently I read about Stephen Fry and his completely generated version of “The Hobbit”, which has been put online without his consent. Everybody—in fact, particularly those people with recognisable voices—is at risk of it.

The reality is that work comes and work goes—work changes. If we look at the massive explosion of video games, for instance, which is a huge area of work for audio artists and in terms of movement, we see that it is dependent on a strong, diverse, creative workforce in order to make that happen. At the minute, what we are seeing is this new opportunity for work and income not being sufficiently regulated so that it can adequately replace that which is being lost.

Q339       Kevin Brennan: Is your overall message that there are threats, which is inevitable with any new technology, and there are opportunities, but there is a real need for some policy work here in order to make sure that people remain able to have some sort of control over their own creativity?

Paul Fleming: Absolutely. The Copyright, Designs and Patents Act is older than me. It needs serious updating. We committed to ratifying the Beijing treaty over a decade ago, which is almost as long as it took to negotiate it in the first place. We have not done that. Those measures would provide platforms for collective bargaining within our sectors.

This is the really important thing to recognise that makes the new areas of AI very different from the established areas of the industry. We have collective agreements with Netflix, Apple and the other big streamers—these new kids on the block. They are used to operating in a unionised culture. Our American equivalents operate closed shops, essentially, in the United States. We have 70%-odd or 80% density of trade union membership across the traditional areas of the performing arts. Essentially, they are used to operating in an environment where we can regulate the Peter Cushing scenarios.

In video games, that is not the world they come from. The same applies to an awful lot of the AI scraping that you are starting to see, which affects our members' performances. Recently a series of auditions uploaded to YouTube were scraped. These are people's performances, essentially. These are areas that do not have any culture of unionisation. They need a statutory framework that does not say, “This is what people should get,” in our case, but does say, “These performers have rights and these rights should be enforced and proportionately remunerated.” That is a platform from which to build. That is the big worry that we have.

Q340       Kevin Brennan: Okay. Jamie, what is the difference between Ed Sheeran and Amy Wadge ingesting the music library of the past and writing a song that is a bit similar sounding but not necessarily infringing, as we know, and an AI being trained on the music catalogue of the past and producing some kind of musical facsimile of a previous song that is not identical? Soundalikes are nothing new in the music industry, are they?

Jamie Njoku-Goodwin: The former have souls. I say that flippantly, but there is a real human difference. There is a human element to it. I keep pulling people up whenever they talk about AI-created works, because my position is that AI does not create. AI can generate, but it has to take work that has already been created—created by humans. It simply generates from that. Essentially, it is a very clever mash-up. Rather than a mash-up of four or five songs we might dance to at a nightclub, it is sometimes a mash-up of 10,000, but it is always based on what has gone into it.

You can argue about whether it is exactly the same as my going into a room, being taught five songs by someone and then taking that, but that is not how the creative process goes. You have influences. You are looking at your human choices. It may be your cultural influences. It may be not just music you listen to, but your worldview. You are piling a whole load of stuff into the song—the artistic output—that you are creating. That is the difference between a human-created song and an AI-generated piece of music.

Q341       Kevin Brennan: Your organisation represents not only creators but the big corporates in the industry and the collection management agencies, which collect royalties and so on. Is there a universal, agreed view across your membership, under the umbrella of UK Music, on how to approach the subject of AI and creative output?

Jamie Njoku-Goodwin: Yes. There is huge concern about what happens if we get it wrong. Again, I am not coming to this with a negative view of AI. I see the challenges that we are facing with AI as very similar to the challenges that we have had with the internet over the last 30 years. Some things about it are fantastic and transformational. They have been fantastic for society. Others are going to be incredibly bad if we get the regulatory framework wrong. Right now, in this place, we are going through a whole series of legislative discussions about online safety. A lot of that is coming from whether we got it right 20 or 30 years ago. We probably did not, and there are things that we have to do.

We are coming at it from the point of view of how we can embrace AI for good, but the key principles of copyright and IP, which are foundational for our industry, are critical. When the text and data-mining proposals came through last year, the industry—whether it be song writers, labels or collection management organisations—was universally opposed to them, partly because the basic principle that the success of the industry is based on rights that are afforded to performers and creators is the foundation of the industry. There has been opposition from across the industry as a whole to a number of proposals that we have seen that have sought to undermine that basic framework.

Q342       Kevin Brennan: What is your view on the role of the online platforms? In the United States, there has been an attempt to use the fair use defence—which is not available, as I understand it, in the UK—around using AI to generate music, upload it on to platforms and make it available. What is your view on the role of the platforms themselves in all of this?

Jamie Njoku-Goodwin: There is fair use in the US, but not here. As regards working with the platforms to try to make sure that you are dealing with it in terms of infringement and enforcement, there is a view that the problem is that once the genie is out of the can, it is out of the can. It is one of the reasons that we are very focused on the ingestion and input side. If you get to a point where you are allowing or enabling people to take other peoples work, generate new things from that and then seek ways of monetising it, once it is out, it is out. It was described to me yesterday as chasing butterflies in a field.

Working with platforms on the enforcement and infringement side is absolutely key. We will continue to do that. It is why there is so much focus on the input side. What is ingested by an AI? Are they seeking the appropriate permissions for that? If you can get that right, it becomes a lot easier down the track in terms of enforcement.

Q343       Chair: There is an important distinction, isnt there, as to where you regulate? Your view is that the restrictions and governance should be around the ability of AI to draw on material that is out there, rather than around what it produces. Is that fair?

Jamie Njoku-Goodwin: There is an element of both. For example, we have no issues with an AI company or producer generating work that has been based on the work of creators who have given their permission. If I am a performer and someone asks me whether I would like to have my work ingested by an AI, I may say yes and I may say no. That is completely my choice. If you as a performer are given that right and you say, “Yes, I am happy for my work to be used in that way,” and you grant a licence and have a commercial agreement, that is fine. We are not against that.

The problem is that, if you are not regulating properly at the input stage, you will not really know whether a whole load of things at the output stage are legal or whether the proper permissions have been sought. How do you know? Making sure that there is a very clear framework on the input side makes questions about things like labelling and databases a lot easier to answer, as well as the question: “Is this work legitimate, and does it observe the right copyright and IP frameworks?”

Q344       Graham Stringer: Thanks for coming this morning. I come to most AI discussions with the prejudice that it is going to be excruciatingly boring and then am confounded by the excellent witnesses to this inquiry.

Jamie, you said something interesting—that all that AI could do was take a mash-up of previous works. Is that true?

Jamie Njoku-Goodwin: That is my understanding.

Q345       Graham Stringer: Let me put something to you that occurred to me while you were speaking. If you take all the instruments that you can, and play all the notes—it is all recorded by fairly basic performers, who are paid—and then tell AI, “We know what goes into making a Beatles song: produce new Beatles songs,” what is the difference? You might be creating a brand-new Oasis; is that really just a mash-up, because it is in a sense derivative of the Beatles?

Jamie Njoku-Goodwin: I think that the technological process is looking at dozens, hundreds or thousands of existing works, and the patterns that have gone into it. This is not just about music; it is what you do with science and research tech—data and text mining across the piece, looking at the patterns and establishing what patterns someone has used, and then taking the patterns and reproducing them.

That is why I say I do not think of AI as genuinely creative. I do not think of that as part of the creative process. I think of it as a machine that essentially just looks at patterns and reproduces them. I have no problem with that, but that essentially is what it is.

Q346       Graham Stringer: But it comes to Pauls point about Luddites, doesnt it? You are not stealing from artists at that stage. You are looking—it could be Bach, rather than the Beatles—at something with very clear patterns and creating something brand new, which still might put your members out of work, but would not be stealing from them. Do you take the point?

Jamie Njoku-Goodwin: Yes. You then probably get into a legal question about what constitutes copying. Is copying taking a minute-long fragment of something or the basic structure of something? In copyright law—I am sure you will hear this in the next session, where there will be lawyers with far more knowledge and expertise about this than I have—for something to have been copied, it is necessary to show intention. If you did not intend to copy something, you cannot be sued for copyright infringement.

We are talking about a process of actively taking works and putting them in, so it is clear that there is a process of wanting to take or copy. The question I suppose I would ask is: if there was not an intention to copy or to take something from the work, why was it being ingested or used to train an AI in the first place?

Q347       Graham Stringer: I think I have pursued that as much as I can.

How do you engage with the AI industry? What are your communications with them?

Jamie Njoku-Goodwin: Nowhere near as much as we would like. One of the things that we are looking forward to over the next couple of weeks and months is the series of roundtables that the Government have been talking to us about, that they want to have with the AI sector. When text and data mining came through, we were quite frustrated, because we were not really clear that it was what the AI sector actually wanted. We want to work with the AI sector as much as possible and be in as positive—

Q348       Graham Stringer: Did you go and see Google and the others, and talk to them?

Jamie Njoku-Goodwin: Organisations will be having conversations with them, but before this session I asked my members if there were instances of anyone in the tech world approaching us for a licence for ingesting using AI, and I have not had any evidence of companies coming to us to say, “We would like to seek a licence to use works to train AI.” There is a question there about whether companies just do not want to seek permission. It may just be that they are not used to working in this world—that they do not really understand how the licensing framework works in the creative, and particularly music, industries.

We want much more engagement with the AI sector, to work out what they need as a sector and how we could work with them on developing licensing solutions that work for our sector and respect copyright, but that also work for the AI sector and what they want as they move forwards.

Q349       Graham Stringer: Paul, I am aware of the campaign, “Stop AI Stealing the Show”. What else are you doing to represent your members and try to shape the future in their interests?

Paul Fleming: The “Stop AI Stealing the Show” campaign encapsulates everything we are attempting to do. There are three strands to it—to educate, to enforce, and to expand. I am here in the “expand” capacity, because we do not believe that the operating framework is strong enough, but part of it is about educating our members on their rights. Well over two thirds of them say that they do not understand the contracts that they have signed. In the last month or so, we have dealt with dozens of people who have engaged in a recording session—normally an audio recording session—and have discovered that they have signed away rights. Agents have not understood the contracts, and so on, and they have signed them away to be used in AI scraping systems without their knowledge, or without real understanding of what it is about. There are ways of stopping that, such as not signing those contracts; and we can enforce contracts that have been pushed a bit too far.

In terms of the successes of the campaign thus far, I am not sure that it is particularly good for the career of a Conservative Minister for me to say as a trade unionist that I am very grateful to Minister Lopez. She met us and we had a conversation about the data-mining exemption. She was very engaged with it and, essentially, that appears to have been dropped and gone away. It was a serious existential risk to Britain's competitiveness as a place to make art and have creative industries. I am really pleased that she did that, and engaged with it.

We are taking cases now on a weekly basis where contracts have been pushed a little too far, or where people have become confused and frustrated when they are not used in an appropriate way.

Q350       Chair: Following on Graham's point, Mr Njoku-Goodwin, is it your preference, in representing the industry, that there should be a contractual agreement to allow the scraping of data and analysis of content; or do you not think it should be done at all?

Jamie Njoku-Goodwin: A system whereby someone who wants their data to be used in that way has a conversation with whoever is doing the scraping, and comes forward with some sort of licensing solution or commercial agreement, is completely reasonable. If that is how someone wants their works to be used, it is completely up to them.

The big problem that we have had is that scraping has happened without people's consent or permission.

Q351       Chair: So it is a contractual thing. The products—the AI-generated tracks or musical pieces, or whatever—are not a problem: it is the lack of contractual agreement to make use of the training data.

Jamie Njoku-Goodwin: Yes, it's a lack of respect and engagement with the artist, and not seeking permission. The basis of it is a private property argument, essentially—in copyright. As long as the copyright has been observed and the creator and rights holder is given the choice and freedom to decide what is to be done with their work, it is up to them. I think there may be people across the industry happy for their work to be used in this way. At the moment, that is not really being respected and deferred to.

Q352       Tracey Crouch: Do you consider the existing IP framework and copyright legislation to be too analogue, in a digital world?

Jamie Njoku-Goodwin: That is a very good question. It is definitely the case that when you look at the current overall copyright framework and ask questions about the advent of AI, and the current debates, there are a number of challenges.

It is interesting to look at proposals that have come from China in the last week, for example. They put up proposals on generative AI and have talked about a framework where the producers of AI products respect personal data and keep a database of what has been ingested—which is incredibly important in knowing what has gone into something in the first place.

At the moment a lot of work is being created. To take ChatGPT as an example, we are all having lots of fun playing on it and getting answers from it, but we have no idea what it has been trained on or the inputs that have gone into it. The same happens for music and across the creative sector. There are no real enforcement procedures and frameworks in place to enable us to know what has gone into an AI and properly ensure enforcement that can take down things like deepfakes.

There is definitely a conversation about trying to make sure that the current IP and copyright framework is fit for the digital age. The basic principles that it is based on, in terms of respect for rights holders and creators, are key and fundamental.

Q353       Tracey Crouch: Paul, you displeased me immensely by saying you were not born when the Copyright, Designs and Patents Act 1988 was originally passed, although you then amused me immensely by saying the phrase “new kids on the block” in the same breath. Ironically, New Kids on the Block had their great breakthrough album in 1988. I remember because I was at secondary school and wore it to death on cassette tape.

You have said that you would like the Act to be updated. What, specifically, would you like to see in it?

Paul Fleming: I have referenced the Beijing treaty on a couple of occasions. Ratifying it would by its nature require the updating of the Copyright, Designs and Patents Act. When we talk about an analogue framework for a digital age, we are very much imagining a world in which someone's performance is directed and the director owns the copyright as a freelancer, and the performance possesses some sort of performance rights, which we kind of acknowledge but do not consider as intellectual property. That is a gap that needs to be filled. That is what the Beijing treaty that we have signed commits us to—but for the failure to ratify it. It is a huge gap.

The creation of some system for image rights is really important. The Act falls down again on that, because it cannot imagine a world in which a series of stills is turned into a moving image that is a convincing impersonation of a person.

There are also questions about the proportionality of remuneration for the use of people's copyright. People may well consent to the sale of their data at one point, without really realising the ramifications and whether their remuneration at that point is proportionate. There has to be some sort of challenge. That supports the collective bargaining framework that the union has, very strongly, across the areas where we represent our members—although it is not a requirement. When we are dealing with people from the highly non-unionised world of games and new tech, using performance, that sort of framework is needed, to jostle them into the same place as Netflix, Apple and other new kids on the block who are dealing with the creative industries in a responsible way because of the culture from which they spring.

Jamie Njoku-Goodwin: There are things that could be done that could make the enforcement of the current copyright regime even easier. I mentioned the point about databases. At the moment, in a copyright infringement case there will be a slightly strange process where lawyers will argue about whether someone heard a song, whether they intended to copy it, and how many times they had heard that sort of song. It is actually quite a hard, subjective process to go through.

With AI, it can be so much easier, with the right frameworks in place. We know that AIs have to be trained on content. A database of what AIs are trained on would mean that there would not in the future be those situations where someone says, “I think you have stolen this from me,” and there is a long, complex case. You would just be able to look at the database and work out what it was trained on. That would almost make things easier than they are at the moment. So there are things that you could be doing on that front. It is important, and it is one of the reasons we are keen to make sure there is adequate record keeping about what has been ingested, and what AIs have been trained on. That is definitely true in the music industry, but it is also true probably for a number of other spaces in the generative AI space in general.

Paul Fleming: It is perhaps worth saying that the Copyright, Designs and Patents Act does not engage with the concept of synthesising of performance at all. It just imagines that something is created by an individual. In the world of synthesisation, which is a mashing-up, a generation, the use of scraping of data, and so on, there is nothing new or unique. This is an untested area of law, but the Act is not particularly helpful because it does not make reference to such a concept. A straightforward amending of the Act would simply bring it a bit more up to date with respect to our industry.

Q354       Tracey Crouch: This is a global issue. Where are we in comparison to other countries? If it is a 100-metre sprint race and the finish line is best practice, where are we in the race?

Jamie Njoku-Goodwin: I mentioned China just now: the code of practice that China put out last week talks about companies having a liability and responsibility to respect copyright, to remove and not use any works that are the product of infringement, and to have adequate databases and record keeping.

We are going to be working with Government, who have been having conversations about what codes of practice should look like. I hope that it will be a similarly robust code that protects creative rights and ensures that copyright is respected. It would be quite strange if we got to a situation where the People's Republic of China was being more transparent and gave more respect to personal rights and freedoms, copyright and private property than the UK.

Q355       Tracey Crouch: I was going to say I don't know whether to be admiring or surprised at China leading the race on this.

You have already mentioned the data-mining exception. Are you confident that it will not be revived?

Paul Fleming: No.

Q356       Tracey Crouch: For what reason?

Paul Fleming: I think it would be foolish to be confident that anything will not be revived. The European Union is currently looking at a data-mining exemption, although much narrower than the one that was proposed here. Nevertheless, the Committee has to acknowledge that there are different sorts of races to be the best. Is there a race to be the most liberal or the most regulated—or best regulated? The best regulated environment means that people will want to create stuff here. That is what removing the data-mining exemption does, because your rights potentially have better protection here than from whatever comes out of the European framework.

If we are in a race to say, “You can develop your AI best here in the most liberal way—you can have a data-mining exemption that covers absolutely anything,” there is a real risk that it might come back. I do not think that the Government intend it to come back, but it would be foolish to think one can have great confidence in that.

It is worth pointing out how confused the proposal was. I keep making reference to performance rights, which are complex and ambiguous; but people have performance rights, which the law would expect to be protected, and it treated them as if everything is data. As we were saying before, there is a kind of conflation of performance and data, and the law does not really understand it working in that way; so you end up with a risk that all data becomes performance, or, indeed, all performance becomes data. That is an internal inconsistency in the proposal as it stood.

We want people to come to the UK to produce creative content because, yes, we are world leaders in technology, with the best facilities, and are ahead of the game with the tech, but also because they know that their rights will be best protected here. There is a phenomenal opportunity—because nowhere is quite getting it right, at the minute—to say, “We are the part of the world that protects your intellectual property and copyright, and your ability to earn, better than anywhere else.”

Jamie Njoku-Goodwin: There is a worry that we have a bit of a one-size-fits-all approach on text and data mining. I think a lot of this has come from the direction from which many of the proposals and the various reviews of the past couple of months have come. People may be coming at it from the point of view of health data, and I can completely understand why it would be made as easy as possible for AI companies to carry out text and data mining of things like personal health data—with the required anonymity and personal data protection regulation in place. I completely understand the reason for wanting a process for scraping huge amounts of data for scientific research. The challenge is that the creative industries have been caught up in that wider effort; and it would have a catastrophic impact on the industry.

Also, slightly farcically, that was not even what the AI sector necessarily wanted in the first place. If you look over the consultation responses, the AI and tech sector did not as a whole say, “The one thing stopping us basing ourselves in the UK is the fact that we cannot text and data-mine creative content.” In terms of a framework protecting rights holders across the creative industries, while there might be ways, places and areas where you would want to give people as much data access as possible, I do not think people from the creative industries want that. It would also cause the creative industries irreparable harm.

We dubbed it “music laundering”, when it first came through—essentially creating a situation where you would literally be able to take what is still, for want of a better word, other peoples music, run it through an AI, generate something new, and monetise it. Again, that is anathema to the principles of the creative sector, and the whole basis of the present copyright principles and IP framework. We hope that it will not come about. Again, the Government have said that they recognised that it was not good for the creative industries, and have gone away; but we are going to continue to engage to make sure it does not come back in any form.

Q357       Tracey Crouch: You will know, of course, that Sir Patrick Vallance, in his report on emerging technologies, said that the Government “should enable mining of available data, text, and images and utilise existing protections of copyright and IP law on the output of AI”. Presumably you guys have responded to that recommendation, which the Government, I believe, have accepted.

Jamie Njoku-Goodwin: The Government have come back saying that there will be a series of roundtables and discussions about what that actually looks like. We want to make sure that the relevant copyrights and IP frameworks are completely respected.

It comes back to this point that the phrase “data access” keeps coming up in reviews. I do not think of music as data. They are two different things. I can completely understand why, in many spaces, text and data mining is going to be necessary—if it is for public data or health data. There are spaces where I can understand why that is required, but those are completely different from what the creative industries are doing. There is a real danger of taking a one-size-fits-all approach that ends up creating not huge incentives for AI companies to base themselves in the UK, but a loophole in the law enabling bad actors to take content whose copyright they do not want to respect, profit from it, and cause huge harm and damage to the creative industries.

You would end up almost stuck between two things—hurting your world-leading creative sector but not doing the things that the tech sector and AI sector say they really need if they are to base themselves in the UK, so we can be that country at the forefront of the AI revolution.

Paul Fleming: Presumably when Jamie says “bad actors” he is not referring to my members, but there is an important point here about types of creative industries and what we want in this country. One of our greatest exports is our creative industries. AI by its nature is intrinsically quite regressive. It can exist only on the data it is provided with. That data comes from a disproportionately white and male group of creators throughout history, which means that essentially you find yourself stuck.

AI has the opportunity to create new income streams for an increasingly diverse creative workforce, and support working-class people, black artists and disabled artists into spaces where they have never been before, by providing work such as in video games, audio and so on.

Essentially, with a phenomenally liberal framework that simply deprives people of jobs in the first place, and of proportionate remuneration from the intellectual property they have created from work they have done, you end up with a shrinking group of people creating art.

In the midst of the pandemic, 46% of our members were excluded from furlough or self-employed income support and 90% of our black women members were saying they were going to leave the industry. If you think about the industry that has been created over millennia—the sorts of data that we have about art, creation and performance—how do we both advance as a society and create the framework for the workforce to make us a world-leader in creative industries, as we are now?

That should genuinely worry people. It is a serious sector of our economy, but it is also a serious sector of our society, and the way we understand ourselves and develop morally and culturally as people.

Q358       Aaron Bell: Thank you both. It has been fascinating. If I can just follow up on Tracey's question about Sir Patrick Vallance's recent review, Sir Patrick recommended that the creative and AI industries try to come together to agree a code of conduct, and that the Government should legislate only if they cannot. Do you think that is realistically possible with the AI industries and yourselves?

Paul Fleming: I think it oversimplifies the problem, because the issues that we are talking about are not intrinsically technological issues, yet often the Government, the White Paper and the responses treat it as a technology issue, and not as a copyright and creative issue.

The Government have signed the Beijing treaty, but—I know I keep on saying it—ratifying it is the step that brings it along the line and creates new rights and a new framework. That is something that we have committed to, but we are silent on it at the moment.

As to the idea that this is a matter between rights creators and the tech companies running the technology—that is not the way we think about any other area of the creative industries. We think about the engagers—the people who are using the technology—reaching reasonable arrangements with the people who create the art and the entertainment. That framework, because of the new challenges that AI brings into focus, simply is not adequate. It does not take a lot to put us in a world-beating position.

Q359       Aaron Bell: You need a framework, but do you think you can come to one with the industry yourself, or do you think there will have to be legislation?

Paul Fleming: I believe, with respect to our members, that if copyright protections are strong, the gaming industry, for example, will come to us and reach collectively bargained agreements, and that is what we will do; but that is not a question about our relationship with the people developing AI. That is a question about our relationship with the people who engage our members, who use AI.

Jamie Njoku-Goodwin: This is not necessarily new. We have always been very good as an industry at embracing new technologies, innovating, and working out licensing solutions. There would have been conversations decades ago with the film and TV sector about the licensing framework. There were conversations with the gaming sector, working out the licensing framework for game soundtracks, and there will be conversations with the tech sector about what the licensing framework will look like, for generative AI and access to content and data.

I go back to the point that to the best of my knowledge we have not had hundreds of companies banging on the industry's doors saying, “It is really frustrating; we are desperately trying to get access to data to train our AIs on, but we just cannot get access. The licensing framework is not working; it is completely broken.” We have not had people coming to us saying that, or anyone approaching us on that. We want those conversations. We want to go through the process to work out what our framework should be, but at the moment we have not had those approaches. That is the conversation that will be going on in the roundtables and conversations with CMS, DSIT and the IPO over the next couple of months. I am sure we are going to come to some sort of solution. That is how the industry has worked over decades.

Q360       Aaron Bell: You don't think we might already be too far down the road to get on top of it? It could be similar to what happened with Napster in the first place. The industry never really recovered, because Napster got there. The technology was out there and the old model was then broken. You never went back and eventually you built a new model on top of it, which involves a lot more live performances, for example, and more tours than albums and the things that used to make money for your artists. Are we not already too far along with AI, given what we have seen in the last few months, realistically to try to restore the status quo ante?

Jamie Njoku-Goodwin: We are definitely trying to keep up as much as possible. It is amazing, just looking at the first sessions of this Committee, a few months ago, to see how far things have come, even in six months. We are not at the end yet. Lots of people are acting and talking as if what we have at the moment in generative AI is where we are going to end up, and actually we are only at the starting point. It is a nascent technology and we will probably be seeing a whole lot of other things from it. Yes, as an industry we want to move as quickly as we can and have the conversations, but it is key that we base those conversations and that framework on the strong copyright and IP principles that are so important for our industry.

Q361       Aaron Bell: I want to revisit Mr Brennan's excellent question about what essentially is the difference between what AI does and what humans have already done for years—and of course your members rely on the works of Pachelbel and Shakespeare, and all sorts of people whose works are out of copyright and patent law. You said it is because people have souls and so on, which is fine, but it sounds like special pleading, a little bit, from one particular industry.

Are you seriously suggesting that if AI has been trained on 10,000 or 100,000 pieces of data—and it is not a case of passing off, like your Bette Midler example, or the Drake soundalike example of a couple of weeks back—all 10,000 of those people are entitled to a share; or is that work therefore illegal if they do not all get a licence, if they have been scraped in some way? What would your outlook be on something that is released, which does not sound like anything in particular, but sounds like a pop song—essentially what Stock Aitken Waterman were doing in the 80s, generating a hit factory—without actually sounding like anything in particular? Are you seriously suggesting that any of your members who have been anywhere in the training data should be getting a cut from that?

Jamie Njoku-Goodwin: There would be a commercial agreement for it. Someone might actually say, “I am very happy for my work to be used in this way. This is what I would expect.” You would have a commercial agreement and it is up to them as individuals as to how their work is going to be used. People will have different views about it.

Q362       Aaron Bell:  Are there not enough people out there who are either out of copyright themselves, because it is so old, or because they signed it off in some contract that maybe they should not have signed, as Mr Fleming alluded to earlier, to mean that there is just going to be enough training data out there that is acceptable, and the fact that you can exclude one very minor artist from that training data does not do them any good?

Jamie Njoku-Goodwin: In which case, why is there even an exception to copyright?

The question that it is quite helpful to ask is: why? What is the actual benefit we are getting for this? People may be different, but I do not normally listen to songs and think, “This is good, but it's just a bit too human. I would love to have less humanity in it.” Paul said it is regressive. There is nothing that pushes forward the genre. The whole nature of this is not about looking at the human condition and putting your artistic stamp on it; it is essentially looking at what has gone before and basing things purely on what has gone before.

Generative AI would not come up with the “Rite of Spring”; it would not come up with “Bohemian Rhapsody”; it would not look at all the patterns of pop music and say, “You know what, putting an operatic section in the middle of a song is exactly the sort of thing you would want,” but some of those art works that go against established trends and patterns have been some of the most profound moments of human artistry. The question is: how can we use this technology in a way that enhances and enables human artistry but does not replace it?

The really valid question, when it comes to AI and the creative sector, is: why? I think that is a very good argument for why we want to use AI in the health world and in public data and transport. What are we achieving with that? I am not saying that is an argument against using it.

Q363       Aaron Bell: Personally, I would not be so sure that it could not write one of those things. You can ask it to write something completely derivative, but you can ask it to write something original, potentially.

Mr Fleming, you talked about the audio people not having the windscreen wiper adverts and video games. Are you suggesting that we should try to regulate audio in video games or radio advertisements to preserve jobs, or do you accept that that loss is going to happen?

Paul Fleming: Our members talk about having portfolio careers. The way in which the union bargains and the way in which our members expect their working lives to roll is to get different bits of income from different sectors of the industry. AI presents the opportunity for them to obtain different streams of income from different areas of the industry.

Is that work gone? There may well be windscreen manufacturers who will go with the trend of real people advertising their windscreens in the future. That may be the next thing that happens, but these things ebb and flow.

The problem is when we see our members' performance reduced to data—mashed up and used to create things that would otherwise have been work—without returning to them a proportionate amount of money. That means they are unable to go and work in a theatre production, because they do not have enough money to carry on. That is the big risk.

Essentially, we need a copyright framework that ensures that rights are attached to these things. These are new areas, and it is not just traditional TV, film and audio producers who occupy this space. When a new tech platform starts to occupy the cultural and entertainment space, it should be brought into that proportionate system.

Q364       Aaron Bell: I do not think the general population massively values windscreen wiper adverts. They do value theatre. Is it not the case that we should be paying more for theatre, if that is the answer? Rather than trying to find a way to shore up the secondary source of income that people are using to balance their own careers, people should be paying more for what they actually enjoy in their leisure time. If AI is to do anything for the world as a whole, it will give people more leisure time, which was alluded to earlier. Rather than trying to shore up a fairly unfulfilling part of peoples careers, is not the answer that we need to be rewarding the bit that is fulfilling and rewarding live entertainment more? Obviously, that will come about naturally through ticket prices and all the rest of it if that other bit is not there?

Paul Fleming: It is great to have your support for our campaign for increased arts funding against the closure of the Oldham Coliseum, the English National Opera and those existential threats—the fact we have lost the best part of £1 billion from public subsidised art in the past 10 years and so on. This is a multiple assault.

The reality is that what the union is saying is that our members have always worked in different fields and it is not unreasonable for that to continue. Yes, they should absolutely be paid more in theatre. Currently, we have a very ambitious set of pay claims in. I am sure there are Members of the House of Lords from the commercial theatre sector who will be able to discuss the merits of those in great detail with you.

The truth is that there is a moral question here. When people create work, that is their intellectual input; that is their lived experience and who they are. It is years of training and craft and their empathy and solidarity with the very nature of the human experience. Do we believe that they have the right to make income from that work? Do we believe that they have the strength individually to stand up to the ways in which that might be exploited proportionately?

Through their union they can do that, but we are struggling to have the level of engagement with these new areas without a proper framework that forces these new producers to talk to us in that way.

Q365       Aaron Bell: I am a huge supporter of the New Vic in my constituency and all the money it received from the Government during covid.

A final question to both of you: what would be the single recommendation that you would like us to put in our report regarding AI and the creative industries?

Paul Fleming: There are lots of things that we would like, but as a pragmatic bureaucrat I go for the things that are perhaps the most achievable. The ratification of the Beijing treaty is absolutely critical, with the amendments to the Copyright Act that flow from that. It is around synthesisation, which I have mentioned, but also around image rights and so on. I think those are very straightforward changes that can be made.

Jamie Njoku-Goodwin: Overall, embed the basic standards and principles of copyright and IP. I mentioned database and record keeping specifically. It is absolutely essential that there is an obligation to keep data and records on what has been ingested into AI. There is a whole load of things we will want to do, probably from a regulatory point of view, down the track, which will be possible only if we have that database and record keeping from the very start.

Record keeping is absolutely critical, but as a broad principle we must recognise the principle of copyright and IP and make sure we embed it in all the decisions relating to the creative industries around regulation of this new space.

Q366       Tracey Crouch: Do you think that amendments to the Copyright Act would require primary or secondary legislation?

Paul Fleming: That is a technical question.

Q367       Tracey Crouch: I do not know whether Jamie can help.

Paul Fleming: He will be better able to answer that. I do not have a clue.

Jamie Njoku-Goodwin: In terms of technical amendments?

Q368       Tracey Crouch: It is obviously much easier if things can be amended through secondary legislation and, therefore, there should not be any delay, whereas primary legislation would require time within a legislative programme. I did not know whether you understood it to be primary or secondary legislation. Maybe the lawyers in the second panel can assist.

Jamie Njoku-Goodwin: I will double-check and write to the Committee.

Q369       Chair: We will store that for our next panel.

Mr Njoku-Goodwin, you talked about the need for a commercial agreement to allow the scraping of data to be used. Why do you not propose one? Why do you not write one? You represent various parts of the industry, including labels. If you are frustrated about what is not there, why do you not put it on the table?

Jamie Njoku-Goodwin: To be clear, I am not frustrated that a commercial agreement is not there. It is not for us as an industry body to be proposing an overall commercial agreement.

Q370       Chair: Why not? Given that no one else is, why do you not step into the breach?

Jamie Njoku-Goodwin: Because it is a matter of individual rights. If you came to me now and said you would like to ingest my content into your AI, and asked Paul exactly the same, we might have very different views, prices and wants. I might be ardently against it; Paul might be quite happy to do so. You would have a commercial agreement. In the same way that you would not propose an overall commercial agreement at industry level for many other areas, I do not think it is for that commercial agreement to be set at industry level; that commercial agreement needs to be between the specific company or individual organisation that wants to be using your content and the individual rights holder, rather than it being set at Government level. I know there has been talk about things like compulsory licensing, which the industry is absolutely set against because it should not be for Government or for us as an organisation to say what those rates should be. It should be a commercial agreement between the individual rights holder or their representative or representatives and the person or organisation seeking access to that content.

Kevin Brennan: It might be through collective management rights organisations, not each individual rights holder.

Q371       Chair: You have talked about the pace of change. It is just over three months since the Committee began taking oral evidence, and a lot has been developing during that time. You are aware that Elon Musk and various others have called for a pause specifically on the training of AI. That has caused much debate. What is your view? Do you think the signatories to that letter are right, Mr Fleming?

Paul Fleming: I am not going to overstretch myself and say whether they are or are not. I am not on the tech side. I do not know what the potential of this is.

What I do know is that the real impact on our members in terms of job losses is there and is happening now. I also know that the game sector—I keep picking on the game sector because economically it is a very obvious sector that uses AI at its very heart—is expanding rapidly without a collective bargaining framework and a strong industrial relations framework to protect people's rights and their income. That is what I know is happening, and those are things that I think need quite urgent attention.

Q372       Chair: Mr Njoku-Goodwin, what do you think about Musk's proposal?

Jamie Njoku-Goodwin: Transparency is absolutely key. One of the things that scares me most about AI is what I do not know.

Q373       Chair: Do we press the pause button?

Jamie Njoku-Goodwin: I do not think it is as simple as just pressing the pause button. I go back to the example of the internet. Looking back 20 years, if people had said we should press pause on the internet, we would say that was absolutely mad. Look at all the fantastic societal developments and contributions that have come from the internet.

I do not think it is a case of just pressing the pause button; it is a case of being very clear about where the benefits and potential opportunities lie and what we can do to utilise them, but also being quite honest and up front about the risks and challenges. There are some spaces where we probably need to take things slower. I do not think it is one size fits all: that we should stop using this technology or throw it out altogether, or go hell for leather and give everyone the opportunity to do whatever they want with it. It should be quite carefully considered, but we should also be respectful of creators' rights and seize the opportunities, as well as deal with the challenges that this new technology poses.

Paul Fleming: As a final point, it is perhaps naive to think that you can push a pause button on the development of technology in a globalised world. What you can do is ask: what is the outcome of this? The outcome of the impact of AI will come down to where people want to create art and entertainment. We can ensure that the UK is the place where people want to create art and entertainment by ensuring that rights holders are well protected. We want a light-touch but clear regulatory regime, focusing not on the technology itself but on rights holders and on the moral rights that we have understood to be the core of the way this country has operated for centuries, so that those things are respected. That will make this the place where people want to come and use AI developed elsewhere, because they will have confidence that their rights are respected. That is a phenomenal opportunity, and that is when the pause button should be pushed.

Chair: You have both been very clear in advancing that case. We are very grateful for your evidence this morning.

Examination of witnesses

Witnesses: Coran Darling and Dr Bosher.

Q374       Chair: Our next witnesses joining us at the table are Dr Hayleigh Bosher, a senior lecturer in intellectual property law at Brunel University in London and a visiting research fellow at the Centre for Intellectual Property Policy and Management, who has written extensively in this area, and Coran Darling, a technology lawyer with the law firm DLA Piper and a member of the data ethics group of the Alan Turing Institute.

You have been good enough to listen to the first session so you have heard what has been on the minds of the representatives of some of the creative industries.

Dr Bosher, will you briefly describe the current intellectual property protections that pertain to AI? In so doing, perhaps you would say what you think the debate should focus on and whether they should be extended or added to.

Dr Bosher: That is an enormous question. I will try to give some useful information rather than maybe all the information.

We have already touched on some areas of the law that relate to AI activity, but copyright and intellectual property rights are the two things that apply to the creative industries and creative works. Copyright is a principle that gives rights to creators and rights holders to enable them to be remunerated for their work and to control the use of their work. I think that came across in the previous panel.

The AI sector will also benefit from copyright and intellectual property rights. That is something to bear in mind. Reference was made during the previous panel to the monetisation of AI-generated works. The way they would monetise those is that the output is protected as an intellectual property right, and the database they build could attract database rights as well.

The main thing to remember is the principle of copyright to protect the works of creators. What copyright is supposed to do is balance the interests of the people involved. That means the creators, disseminators, the platforms, the public and libraries; it is everybody with an interest in both the creation and dissemination of knowledge and culture, and that includes both the tech industry and content industry.

Earlier reference was made to the age of the CDPA. I am the same age as the Copyright, Designs and Patents Act, although much of it was copied and pasted over from the 1952 Act, so parts are much older than me. I think that some of it applies directly (for example, to AI-assisted works under the computer-generated works provision), but for much of it we are trying to shoehorn AI into law that was written with humans in mind.

Q375       Chair: Mr Darling, you pointed out that there are two areas of the law that seem to be particularly engaged by AI: the right to make copies of a work, which is sometimes referred to as the reproduction right; and the right to prepare derivative works, the transformation right. Would you brief the Committee on why those two principles are particularly pertinent to our discussions?

Coran Darling: Thank you for inviting me.

Those are particularly important because they are rights afforded to creators by virtue of copyright provisions.

Q376       Chair: Existing copyright provisions?

Coran Darling: Exactly. Those rights are engaged by the fact that when you use a copyright work, it forms the data on which the model is trained or which the model uses, and it shapes the output from the AI. Those two rights are infringed by virtue of what comes out the other side.

If the output is identical, reproducing it was the right of the copyright holder in the first place, whereas a derivative work, while materially similar, is distinguishable enough. For example, if you have an image-generative AI programme and it takes a copyrighted image, churns it through the system and outputs something that is materially similar, that would be a derivative, at least according to my understanding, of the original copyright work. That would be one of the rights infringed there. That is why, when you are putting in copyrighted works under the current scheme, those are the two rights most obviously subject to infringement.

Q377       Chair: Would that not potentially imply that existing copyright law is adequate? It includes specifically a right that allows content owners to protect their works from being reproduced, and indeed from being transformed. Why is that not adequate for the purpose?

Coran Darling: It is an interesting question. As Dr Bosher said, the copyright regime currently functions to an extent. What would be most beneficial is clarification of the positions, especially when it comes to the new tools like generative AI and similar models.

Q378       Chair: Who should clarify that? Is it not the courts that clarify the law?

Coran Darling: It would depend on the approach taken by the UK jurisdiction. There are some instances where the courts will make a decision and it will go via the jurisprudence route, or regulators could take it upon themselves, especially with the openness of the AI Act, which presumably we will be coming to later. They have the opportunity to create policy and guidance to help streamline the process in the way we want it to go. We can use the existing materials, with the hardened legislation currently in existence, or we can use that and supplement it with additional policy guidance and measures.

Q379       Chair: Do we know whether there is a problem? Has there been any litigation in which a rights holder has sought to litigate against the use of their material in an AI setting?

Coran Darling: A few cases are currently going through the courts both within the UK and various other jurisdictions. They focus on very specific and technical aspects of IP law. I can provide further information after this session.

One of the recurring themes is infringement of copyright by people training their models with databases that, it is said, were not properly authorised under the copyright regime. That could be because they did not clear the use with the original copyright holders, or because they have some form of agreement but have not complied with the obligations within it.

There are cases going through the courts in various jurisdictions. We are yet to see specific outcomes, but they are now reaching the higher courts. I suspect that, even if those decisions do not directly influence policy in the jurisdictions involved, they will be a secondary consideration when policy is developed.

Q380       Kevin Brennan: Dr Bosher, you wrote a very interesting book, “Copyright in the Music Industry”, which has a chapter on AI. I hate to say that now it is almost out of date. What is your assessment of the gap that might exist in intellectual property and copyright protections as a result of the rapid development of generative AI?

Dr Bosher: That is a good question. I agree that that chapter is very much out of date. I hope the publisher is listening and will give me a new contract.

We have been thinking about this from an intellectual property perspective for some time now. It seems that there has been a leap in the progress of the technology in the past few years. Copyright is always supposed to evolve with new technology. Although we talked about the age of the Copyright Act, secondary legislation has been updated every single time we have had a new invention: the photocopier, the camera or the internet. We must always re-evaluate copyright to see whether we need to reassess the rules in the new context.

In the context of AI, the gap between what copyright law says and what AI activity is doing is a growing one. To refer back to the last question, part of the gap is also a lack of clarification. If it were not the case that you need a licence to reproduce or use a database in order to train your AI, the Government would not have suggested that we needed a copyright exception to do it; you only need an exception from a rule if that is the rule.

From a legal perspective, you could definitely argue that, but there is some lack of clarity in the technical details of how it applies. Some of the gap between what the law says and how we apply it to AI is a lack of clarification, and we need to update the law so that it clearly applies.

Q381       Kevin Brennan: If we take the analogy of the early days of downloading, as it was then (The Pirate Bay, Napster and so on), what happened in practice was that the technology was there, so players came along and developed it, put it out there and ignored the law for many years. It was very difficult to put that genie back in the bottle. It was only when the industry got its act together that it was able to produce a commercial form of it that was more attractive to customers. They were at least prepared to pay for it with some advertising around it, and it did not involve them in activities that funded criminal activity. Many years later, the industry caught up, and streaming came along and disrupted all of that once again.

What would be your observation on this? Are we at a similar moment, or is this a much more challenging moment even for people who want to maintain the copyright framework for our creators?

Dr Bosher: I think this is a more urgent moment than that, in that it has progressed further. I do not think that new technology always necessarily needs new regulation. If you take 3D printing, for example, everyone was slightly worried at the time 3D printers were progressing that we might be at home printing our own brand of trainers. We are not doing that, so we did not need to regulate there.

What we are seeing with AI is that it is already impacting and undermining the ability of creators, specifically in the creative industries, to be fairly remunerated and to have their work acknowledged. We are already seeing the displacement of creative workers. It is very pressing that something be done now, rather than, as with 3D printing, waiting to see how the technology plays out before deciding how to regulate. We are past that point already. The tech industry is going full steam ahead.

Q382       Kevin Brennan: I think that Google quite recently did music AI, which, if you like, is the equivalent of ChatGPT, except that it did not release all the ability for you to use it. I have heard examples of that. You give it an instruction to produce a reggae track of a certain type and you get a strangely synthesised Bob Marley. Clearly, it has been trained on Bob Marley. I suppose that if you train AI on music and give reggae as the genre there will be a lot of Bob Marley in the end product. In your view, how is Google legally doing that if there is no agreement in existence across the industry? What are the legal implications of all that down the line?

Dr Bosher: It is an excellent question that perhaps you should put to Google. I do not know their licensing practices. When I see these examples out in the world of music that it created in the style of other artists, as a lawyer I am thinking that there must be some agreement, because I cannot see how otherwise that would be a viable thing for a company like Google to be doing, but I do not know.

Some of the grey areas in the law that I mentioned are that copyright does not protect a genre, it does not currently protect the sound of your voice, and it does not protect ideas. The technical way we explain it in copyright is that the individual expression of an idea is protected, but the idea itself is not. That goes back to the balance you mentioned earlier with Ed Sheeran: between taking inspiration and actually copying. Certain AI-generated music may currently be taking advantage of those grey areas. I would assume that companies like Google are probably licensing, but I could not say for sure.

There are other rights involved. Drake and The Weeknd are trademarks. Someone on the previous panel mentioned potential passing off if you are in the UK. Passing off can be a little problematic; it is not easily enforced and it does not always apply, and it would not always help the individual. Before AI, when we had cases like the one mentioned in the previous panel, there was misrepresentation. Therefore, if Drake and The Weeknd were in the UK, they might make a successful claim for passing off in that example, if there was not a licence in place. I assume there is, but I cannot be sure.

Q383       Kevin Brennan: In terms of passing off, that would be a substitute for the fact that there are no actual image rights in the same way as there are in other jurisdictions. Is that correct?

Dr Bosher: Exactly. I would not say that it is an adequate supplement, but it is grasping at any other option. If I was advising them as a lawyer, I would say, “We could try this.” As I said, it has specific criteria that might not apply. It is a very long and complicated common-law process. It would not be available as an option necessarily for an individual creator. Someone like Drake or The Weeknd might have the legal and financial capacity to pursue that, but it would also be quite high-risk litigation.

Q384       Kevin Brennan: No help to the little guys is what you are saying.

Dr Bosher: Definitely not.

Q385       Kevin Brennan: Can I just ask you one other thing? I am interested in your observations on an experiment that TikTok conducted recently in Australia. As I understand it, TikTok currently has a blanket licence with the music industry for the uploading of short videos with music attached to them, like the famous Fleetwood Mac guy on the skateboard and so on. In the Australian experiment, it removed the licensed music on those videos and replaced it either with silence or with AI-generated music, in order to see what impact that would have on engagement, and therefore on advertising revenue for TikTok, possibly with a long-term plan to pay no money to license music at all and to replace it entirely with AI-generated music or something similar. What are your observations on that, from the broader perspective and from your legal perspective?

Dr Bosher: My understanding is that it was not very successful, which is encouraging, I suppose, if we talk about the value of human creation. It goes back to the question of why we would want to protect and encourage AI-generated works when creativity in the creative industries is about human connection and human creativity at its core. Going back to what I said earlier about the balancing act of copyright, it is not that you cannot do those things; it is that we should protect and value the human creator and human creativity more highly. Clearly, that experiment has shown this to be true not only from a copyright perspective but from a public, marketing and social media perspective as well.

Kevin Brennan: I could continue that conversation for a long time, but I will not hog the time.

Q386       Tracey Crouch: Do you think the fair use principle is enough to protect AI-generated content against copyright infringement claims? Mr Darling, I will start with you.

Coran Darling: Fair use is beyond the UK remit, so I am not best placed to discuss it. The best way of protecting such content is to see AI and generative AI models as a tool, an extension of products we are already using. To give you an example, in the production of TV, film and the like, we are very used to producers and various other rights holders having their rights protected through the copyright regime as it exists right now, even though part of the process of creating those final creative works involves feeding them through technology: AI, sound and so on.

If we look at generative AI models and similar tools in that way, they would be covered to a better extent under the copyright regime as it exists now. The big issue, stemming from the question asked earlier, is understanding the rights involved in those circumstances.

To give you an example, we are seeing issues come up in the market of AI companies, whether deliberately or accidentally, using copyrighted content to train their models. I believe that when these things have happened, it is normally due to not knowing or misunderstanding rather than malice. That may very well be the case; I like to look at it optimistically, and that seems to be the case here.

People at both ends of the spectrum, the rights holders and the people developing these technologies, do not understand well enough the environment in which they are working, particularly within the creative industries, as we heard before. That is why I am such a big proponent of developing understanding through sandboxes, stakeholder outreach and other forms of education: giving the developers and the rights holders at both ends of the transaction the ability to understand what they need to do to comply with regulation, policy and guidance, as well as the rights they hold as parties coming in in the first place. That is a better way of dealing with the situation we have now in protecting AI-generated work.

Q387       Tracey Crouch: Dr Bosher, do you have anything to add to that?

Dr Bosher: As mentioned, fair use is a concept of law that we do not have in the UK, but there is a load of academic research comparing whether it is better or worse. I do not think the grass is necessarily greener. It is also not one thing. The US version is the one we refer to most often, but the US is not the only country that has it, and it takes different shapes in different places, in the same way that our fair dealing exceptions look different from those elsewhere in Europe. We have our own copyright exception framework. We could talk about the text and data-mining exception in a minute, I am sure.

I am not an advocate for fair use as a solution to anything. Given the research I have seen comparing the two systems, we should work within our own copyright exception framework. It goes back to the balancing act. Our exceptions sit within our framework of copyright: you have a bit of this here and a bit of that there. Fair use may allow for other types of activity in certain countries, whereas they may have a levy that compensates somewhere else, so it is all about the careful balance of copyright in that particular regime.

Perhaps the reason AI tech start-ups are not seeking licences is that they do not know copyright is a thing and are unaware of the systems. I definitely think that is a possibility for at least some. I want to reiterate that that is why we need clarity on the law and the licensing regimes. Not knowing is not a defence to copyright infringement.

Q388       Tracey Crouch: That is an interesting point. I think the consumer also probably does not know that copyright is a thing. However, the streaming services and platforms certainly do, or certainly should. What role do you think that they should play in addressing the training of AI models?

Dr Bosher: The streaming services definitely know what copyright is; they are also copyright industries. They play a part in the enforcement aspect. If we clarify and enhance some of the legal protection around creators' rights, the notice and takedown procedures need to line up, and there need to be adequate procedures for taking down infringing content. In Europe, for example, the copyright directive puts more of an onus on platforms to moderate content before it is uploaded, which could be an option to explore. My view is that we would need to look into where the onus should fall. Basically, there needs to be a working relationship with the rights holders, clear communication about what the rights are and how they can be enforced, and collaboration with the streaming platforms in enforcing those rights.

Tracey Crouch: Thank you.

Q389       Kevin Brennan: Dr Bosher, do you think that the Government and the IPO will come back with some updated proposals on the previous exemptions that were put forward last year? How might they differ from what was originally proposed and withdrawn in February?

Dr Bosher: I do not know exactly what the IPO and the Government are going to do. They have explicitly said that they will take on board the feedback they received about the proposed text and data-mining exception. At one point, it was explicitly said that it would not be implemented, but since then we have had the AI White Paper, which put the emphasis on pro-innovation and pro-tech, and that may have raised concerns in the creative industries that it means a re-emergence of the text and data-mining exception. If it comes back, there is no evidence to suggest that they would come back with the same proposal as before, given the responses they received.

There were 88 responses. Only a handful of those opted for option 4, which was the broadest option, and those that did were not from the tech industry; they were research libraries and archives. They have specific intentions around what they need that exception to do, and I do not think that there is any pushback on that.

Q390       Kevin Brennan: Is it a fundamental misunderstanding and myth to think that tech companies should be against this kind of regulation, and, if so, why?

Dr Bosher: That is a great question. Yes, I do think it is a myth, for many reasons. The first that comes to mind is that the tech industry is also a copyright industry; that is why, in their consultation responses, IBM and Siemens are not asking for a broad-scope copyright exception, because it would also apply to their data and their copyright assets. They understand the value of copyright. They also want working relationships with the creative industries, or whichever other industries they are working with, to problem-solve together. I do not think that it is very productive for the tech industry to be against copyright. As we mentioned earlier, the misunderstanding probably comes more from the individual start-ups and entrepreneurs who do not yet understand that they are a copyright industry.

Q391       Kevin Brennan: Regarding the race that Tracey Crouch talked about earlier, there is a view out there, is there not, that we need to be laissez-faire in order to allow innovation to occur? The race probably ought to be in the direction of fair and good-quality regulation, in order to allow that race to be run and not become—sorry to mix the metaphors—a wild west where nobody ultimately benefits.

Dr Bosher: Absolutely. The tech industries, if they are serious and sensible about what they are trying to achieve, will adhere to copyright regulation and other relevant regulation around data privacy, because, ultimately, if they want to trade and market their business, they will need to be able to do that in jurisdictions that have legal rules. If I were a tech entrepreneur, I would be thinking about collaborating and working together with the creative industries, rather than branding copyright as a barrier, which I think is an extremely outdated view of what copyright is supposed to do.

Q392       Kevin Brennan: Mr Darling, did you see Sir Patrick Vallance's recommendation on generative AI in his emerging technologies review, and what was your view of it?

Coran Darling: Sir Patrick Vallance made a lot of very interesting points. One of the things we keep bringing up, and one that I am a big proponent of, is working with stakeholders to create a code or practical framework in which people can see clearly what applies to them and what does not, and how they should proceed. Building on that, regulators have a bit of flexibility within the UK in comparison with the EU approach, and the White Paper has left a lot to regulator control. To use your metaphor, it is certainly a race, but we are far enough ahead that we can set out the course and direct it. His recommendation of some form of clarification and a code would definitely be a helpful one.

Within his recommendations, he also brings up the data-mining exception. Given the conversations that we have had, especially with Dr Bosher, it is an interesting concept, and I can see why it would be beneficial, especially in instances of things like medical development. There are regimes that currently exist in terms of licensing—this is what was said in the previous panel—that will allow us to at least manage this situation without a data exception because we have the rules set in place.

If it is the case that you want to have this form of exception, bringing control back to the rights holders in the first place will always be one of the best positions, mainly because people come in on a playing field in which they are willing to work. The tech companies will come in thinking, “Okay, there is definitely something for me to gain from having this work.” The creators and the rights holders will find a way of having value as opposed to just having that taken away from them.

There is the right to choose in some instances within free, open-source software. I suspect you could do something similar with creative works in this context. There are definitely recommendations that align with my opinions. Some of them feel more directed towards one particular industry as opposed to all.

Q393       Kevin Brennan: Finally, Dr Bosher, what might a code look like, and what should happen if there is no agreement on a code? Would legislation be necessary and desirable in that case?

Dr Bosher: What might a code look like? A code of practice is a good idea. It also enables the management of all the risks that AI poses, because within that code of practice we could agree the shapes of certain licences being able to protect against other risks. In that sense, copyright can be a catalyst for managing some of the other issues. It can also ensure that it is not only the rights holders who are considered in this but also the creators, who are not always the same person. That is something that we should always bear in mind, especially if we think about having new rights around image rights and things that we have mentioned. We would have to think about who gets those rights, how those are managed, and whether they are able to be waived.

Q394       Kevin Brennan: You may be aware that something I proposed in my private Member's Bill was an unwaivable right in certain instances. Is that what you are hinting at around image rights and so on?

Dr Bosher: I am saying that we should definitely consider that and look carefully rather than just assume that the image rights go to the rights holder of the copyright. We need to be more careful in that decision rather than just make that presumption. A code of practice is very useful, but it needs to be based on regulation. For example, we currently do not have image rights. It is a missed opportunity to only grant it through a code of practice for the creative industries because it can also apply elsewhere across social media. To grant an image right could be beneficial in general, and then from that legislative foundation build a code of conduct on top.

Kevin Brennan: You might even say it is a fundamental human right in some ways.

Q395       Aaron Bell: Dr Bosher, we talked about what might happen next, but could we go back to the proposal the IPO put forward last summer? What would have been the consequences if we had adopted that, and why do you think it proposed it in the first place?

Dr Bosher: Just to clarify, it put five options on the table.

Q396       Aaron Bell: Sorry, the most extreme option and the most supportive of AI, as it describes it—the full text and data mining.

Dr Bosher: Exactly. When it did the consultation, just for the benefit of everyone, it put all the options like “do nothing” all the way up to “it is a free-for-all”, the free-for-all being that you cannot opt out and that you can use text and data mining for any purpose. The first consequence of introducing an exception like that would, quite frankly, be that it is against international trade agreement requirements, so the Government might have found themselves—

Aaron Bell: In front of the WTO.

Dr Bosher: Yes, defending against a claim of a breach of the Berne convention. It would also potentially have shown a short-term increase in the use of copyright databases for the use of ingesting and AI processing. But it is short-sighted given all the things I have mentioned already about the collaboration between the tech and creative industries and the things that were mentioned in the previous panel about the fact that AI-generated works are only able to do that off the back of human creative works; and if it then undermines human creative works, eventually it would just be a narrowing echo chamber of robot-created stuff, which is not very attractive.

Q397       Aaron Bell: Have any other countries gone down a similar laissez-faire route other than by not doing anything, which is obviously potentially laissez-faire, and, if so, what consequences have we seen in any other countries?

Dr Bosher: That kind of consultation is going on around the world at national and international level. The World Intellectual Property Organisation is also looking into this. Australia and Canada are also considering proposals. The one that is probably closest to the broadest option of the IPO's proposal is Japan. It is too early to see the full impact of that.

Q398       Aaron Bell: What have they said in Japan?

Dr Bosher: They expanded their text and data-mining exception to include “for any commercial purposes”. We already have one that says you can use it for non-commercial purposes. They had the same, and then they expanded it to say it can include commercial purposes as well.

Q399       Aaron Bell: With data, did they include music and art within that?

Dr Bosher: That is something that is not specifically clarified.

Q400       Aaron Bell: Right, okay. I will turn to you, Mr Darling, about something we did not really discuss in the first session because of the witnesses we had. It was about actual images. Do you know why Getty Images has initiated legal proceedings against Stability AI? Are there other legal proceedings in this area that could set important precedents?

Coran Darling: I can check that later after the session. One of the main things that I have been able to pull out in my reading has been the element of inadvertent use of unauthorised content. From the sounds of things, the claim is that copyrighted media has been used in the development, training and outputs of the work that comes out of Stability AI. If that is the case—and we will see as it progresses both in the UK and in the US—and an image is taken from a company such as Getty, goes through and then comes out, it goes back to the discussion that we had earlier on the right to redistribute and the right to create derivative works.

The full extent of what this is going through remains to be seen, and we can look into this a bit more, but that would be my understanding at least initially on that basis. I know from looking outside the field that there are similar cases going through the courts. Whether or not it is the main claim, there is a claim that copyrighted material has found its way into databases that have been used to train AI and has then infringed various rights of rights holders based on the outputs that are coming from it.

Q401       Aaron Bell: The CEO of Getty Images basically analogised the situation with Napster and Spotify, which was the discussion we had towards the end of the last session as well. Do you think that is a reasonable analogy—that at the moment you have some Napster-like companies out there that are just moving fast, breaking things and not looking at the law or worrying too much about the fact that they may be breaking the law, and eventually we are going to come up with something that is more regulated?

Coran Darling: I would not be able to say anything specific on the Getty one just because I have not looked into it enough to give you an accurate breakdown. Whenever it comes to these instances, I go back to the thing I said before: in most cases, I like to think of the fact that something has not been done because someone did not know about something or they have forgotten to do something as opposed to someone doing it maliciously. It is undeniable that there will definitely be companies that are like that.

There is the idea that tech companies move fast and break things, but, as we have already heard, especially from the likes of Dr Bosher, there is an incentive for the tech companies and the rights holders to work together. It is in everyone's interest for them to work and collaborate and create this model, so I suspect in a lot of instances it may be a case of doing it accidentally, but I would not be able to say.

Q402       Aaron Bell: Project forward a bit. Do you think we are heading towards a negotiated settlement as a way of resolving these sorts of disputes rather than necessarily needing new laws, or is it a bit of both perhaps?

Coran Darling: I would not be able to say based on the current one. In future, having clarity on rights and procedures and understandings of how to go through this, whether or not they become laws or whether or not they are best practices, will make things a lot easier, especially if we use the example of the streaming platforms having that protocol of, “I have been issued a rights infringement notice. What is my protocol for addressing it?” Those sorts of things will definitely make it a lot more straightforward and easier to address. Once we have the initial insight from the court, we may be able to understand it a lot better and address it based on that.

Q403       Aaron Bell: Dr Bosher, do you think negotiated settlements are a likely route through the current gaps that we have, or is it going to require updates to the law as well?

Dr Bosher: Going back to what you just said, there will always be people who move fast and break things. There are also people and organisations who do not agree with copyright as a principle. I genuinely believe that they do not understand sometimes, as with the example of Grimes recently announcing on social media to kill copyright, use her voice for AI, and give her 50% of the royalties. That is a misunderstanding of copyright because you cannot get 50% royalties if you do not have copyright. There is an element there. That is why we need legislation and to be able to enforce that legislation where we cannot necessarily convince everybody that copyright is a good idea.

At the same time, we will see settlements, but that might largely be because of the expense and arduous process of going through the courts, especially in America, which is where one of these cases is taking place. It is extremely expensive, it takes years, and people are encouraged to settle out of court in many cases around copyright and intellectual property rights. It is not necessarily because the law is inadequate; it just might be a financial decision. If they can agree a licence, that would be a better long-term solution, but it depends on the intent of the person bringing the case. My view of the Getty Images case is that it is a test case. It is trying to get the courts to confirm what we have been saying, because there is a lack of clarity in the law.

Q404       Aaron Bell: Any settlement is likely to be among the big players, essentially, which is roughly what happened with Spotify, and speaks to some of the concerns we heard from Mr Fleming and Mr Njoku-Goodwin in the first session that it is the smaller producers that are ultimately going to lose out regardless of the moral rights and wrongs of it. Is that a realistic outcome of the current flux that we are going through—that it will be more awards for the labels and the big tech companies and the really high-profile artists, and less for those who are further down the curve?

Dr Bosher: Absolutely. That would be my concern. Although the settlements might be great for the big players, the people who are not privy to those agreements obviously do not benefit from it. That is where the law can come in and scoop them up by providing rights for those people.

Q405       Aaron Bell: Do you want to add anything, Mr Darling?

Coran Darling: I think that is the correct way of looking at it. Having clarity as much as possible will help the people who are either less learned or do not have the same capital to go and push bigger negotiations or cases.

Aaron Bell: Thank you.

Q406       Stephen Metcalfe: Good morning. I just want to talk a little bit to wrap up about the Government White Paper, if I may, on AI. It said they want to keep the right balance between protecting rights holders and our thriving creative industries while supporting AI developers to access the data they need. Do you think that is a realistic aim? Can that be achieved?

Dr Bosher: Yes.

Q407       Stephen Metcalfe: Yes?

Dr Bosher: Yes, really.

Q408       Stephen Metcalfe: Okay. How?

Dr Bosher: This goes back to what I said earlier: copyright always does that. It does not necessarily always do it perfectly. There are other areas we could discuss where it could be improved to do that. Copyright is exactly the mechanism that you use in order to balance stakeholder interests. I think this is a perfect example of a situation where we are balancing the interests of the tech industry and the creative industry, and copyright can be a tool for that.

Q409       Stephen Metcalfe: Do you think that model can work across other sectors as well?

Dr Bosher: Applicable sectors?

Stephen Metcalfe: Yes.

Dr Bosher: Yes. Copyright does not apply to everything. Yes, for literary, artistic, musical and dramatic works, sound recordings and films. Other intellectual property rights have a different foundational justification, so the balance would look different.

Stephen Metcalfe: Okay.

Coran Darling: On the same point, from a practical perspective, if you have regimes that give some form of compensation, remuneration or attribution for the creative works and data that go into the models that tech companies are using, people will be more inclined to help, because even if it is not revenue sharing, something like attribution means they are getting the credit. You will find that people will be more inclined to work together, collaborate and help out even if it does not mean that they will be remunerated in every circumstance.

Q410       Stephen Metcalfe: Thank you. The use of AI across all creative industries is not going to diminish; it is going to increase. We need the right framework in place to get that balance right, as you have talked about, between creativity and copyright. Would you care to speculate about what the law might look like in 10 years? We will not hold you to it. Is there a particular recommendation that you would like to make to the Government in our report that might act as a stepping-stone to get to a vision that we could share in 10 years?

Dr Bosher: That is a tough question. I can answer what I would like to see rather than what I think will always happen, because, sadly, I will not necessarily be the person writing the law in 10 years. I would like to see the law in 10 years evolve to the point of fairly balancing these interests to enable the creative industries to continue to thrive while also supporting the tech and AI industry in a way that upholds and values human creativity and protects the individual creators. We should seriously consider expanding copyright to encapsulate image rights.

I also agree about implementing the Beijing Treaty on Audiovisual Performances and extending performance rights not only because of AI but also because of the general society that we live in now. Going back to our points earlier about how old copyright law is and how analogue it is, and the fact that we have law from 1952 that applies to machines that are not used any more, there is a strong argument for an overhaul of copyright altogether, although I take the point made earlier about secondary legislation being probably the immediate result because it is much quicker. In 10 years, we might have had time to do more of a thorough review.

Q411       Stephen Metcalfe: Thank you. Do you have anything to add to that, Mr Darling?

Coran Darling: In a similar vein, I would not be able to speculate on what it will look like. What we have now is a unique opportunity: we have not gone down the same route as the EU and other jurisdictions. We have given the power and flexibility to regulators to distinguish their best practices and approach it on that basis. That gives us the opportunity to develop the regulation of AI and copyright in the creative industries from the ground up, in whatever it touches. That could mean relying on regulators' internal interpretation of the principles in the White Paper. It could also mean utilising what we already have in existence, such as ISO and BSI, which deal with very specific technical standards and could bring technical measures and protections to the models, while the policy and guidance, and the regulators' light touch or heavy touch—whichever way you look at it—deal specifically with the companies that are using these tools and these rights.

I would definitely like to see a lot more collaboration and interaction, and making use of the tools that we have in existence with policy and with standards. Given that we have taken the non-linear approach to the regulation of AI, we have an opportunity to go in with a scalpel rather than a hammer and make things as we need to. That is what I would like to see moving forward. It will require a lot of collaboration and uniformity to make sure we address grey areas. Having something like the Digital Regulation Cooperation Forum (DRCF), but maybe across all the sectors it touches on, will be very helpful. On top of that, it may be the case that having a regulator or overseer of those distinct sector regulators would be helpful.

Again, I would not be able to speculate on things like budget and whatnot—that would obviously be a consideration—but having that would be a step in the right direction because it would bring together technical and savvy people with their own expertise, which then, going back to what we said before, will set the course for the race in terms of AI and development.

Stephen Metcalfe: Thank you very much.

Chair: Thank you, Stephen. Thank you to both witnesses, Mr Darling and Dr Bosher. For those watching who are interested in the legal insights, Dr Bosher has a podcast called "Whose Song is It Anyway?", which is available on all good streaming platforms. Can I thank both of our witnesses and our two witnesses earlier for a very enlightening session on AI and the creative industries in particular? That concludes this meeting of the Committee.