Culture, Media and Sport Committee
Oral evidence: British film and high-end television 2, HC 328
Wednesday 11 December 2024
Ordered by the House of Commons to be published on 11 December 2024.
Members present: Dame Caroline Dinenage (Chair); Mr Bayo Alaba, Mr James Frith, Dr Rupa Huq, Natasha Irons, Tom Rutland, Paul Waugh.
Questions 1 - 33
Witnesses
I: Martin Adams, Co-Founder, Metaphysic; Benjamin Field, Executive Producer, Deep Fusion Films; and Nick Lynes, Co-Chief Executive, Flawless.
II: Liam Budd, Industrial Official for Recorded Media, Equity UK; Dr Mathilde Pavis, Consultant; and Ed Newton-Rex, CEO, Fairly Trained.
Witnesses: Martin Adams, Benjamin Field and Nick Lynes.
Q1 Chair: Welcome to this meeting of the Culture, Media and Sport Committee. Anyone who has been to see “Wicked” at the cinema will know the frustration of ending on a cliffhanger and the excitement and the promise of part two. That is exactly what we have given viewers through our Select Committee’s inquiry into British film and high-end TV, because we started before the general election and there was a slight interruption, but I am delighted that we are now back with the sequel.
We will start by looking at one of the recurring themes not just for film and high-end telly but across the Committee’s work: artificial intelligence. Yesterday, the Secretary of State tried to reassure us that the Government are united on harnessing the potential of AI while equally supporting our fast-growing creative industries. Of course, it is important not to overstate the distinction between these two sectors. The truth is that AI developers need creative content to train their models, and that creative industries can use AI to innovate and to grow.
We are exploring that with our first panel today. All use generative AI technologies to make film and TV. Today, we are hearing from Benjamin Field, executive producer at Deep Fusion Films; from Nick Lynes, co-CEO and co-founder of Flawless; and joining us virtually, Martin Adams, co-founder of Metaphysic.
Before we begin, I remind members of the Committee to declare interests at the point of asking their questions. As a starting point, could I ask you all to briefly introduce yourselves and tell us a bit about how your business is using AI?
Benjamin Field: We set Deep Fusion up to sit at the intersection between technology—specifically AI—and traditional filmmaking techniques. We then used the policy information that I have been co-writing alongside industry bodies like PACT, Equity and the Writers’ Guild of Great Britain to inform the choices that we made to make sure that the technology we offered as part of our creative output was ethical, legal and responsible.
Nick Lynes: We are providing some of the products, I imagine, that will be used from the AI side. We have developed three products that are, again, responsible, using ethical data and so on.
The first product does dialogue changing, so that you can change the dialogue without having to go back to set. That means big cost reductions and more creative options. We also have another product that is killing dubbing and subtitling, which allows for an authentic translation where we match the lips to the foreign language. We also have a product that protects artistic rights.
Martin Adams: I am Martin Adams from Metaphysic.ai. We specialise in the use of AI to power photo-realistic content. We are always working with consent and permission datasets. We can scan anything in the real world—it could be an item or an environment or a room, or a voice or a face—and then power that up into a digital feed. That is of a lot of value to busy actors who may not be able to be on set for every location shoot or for filmmakers who try to push the boundaries of innovation of what you can do with a normal traditional camera or production setup. We work across big movies, branded content, live events and so on.
I want to say thank you for having all of us here today. Hopefully, we can close a bit of a knowledge gap about an area that is oft talked about but often misunderstood as well.
Q2 Chair: Thank you for joining us. Martin, you may be able to hear in the background that you are competing with a tractor demonstration that is happening outside our window. I will have to ask you to speak up as loudly as you can. We will play with the sound here as well, but you are competing with the “William Tell Overture” as performed by a passing tractor—
Martin Adams: Excellent. Good to know.
Chair: That is not a phrase I ever thought I would hear myself say. Welcome to you all. Thank you for joining us.
Ben, can I start with you? What do you think are the biggest opportunities for artificial intelligence to stimulate growth in film and high-end TV?
Benjamin Field: Do you mean technology-wise or more broadly?
Chair: More broadly.
Benjamin Field: I went to the USA at the beginning of May to try to sell formats and show ideas to US networks and studios. I discovered that the US studios were nervous to the point of not doing anything with AI. When we put forward how we were working, and said that we were working with PACT and the industry bodies and that there was interest pan-network, if you like, we discovered that the US industries suddenly went, “If you make it in the UK, you can export it to us. A step is removed and therefore, it is okay.” The idea of being able to legislate effectively to make what we do in the UK defined as legal means that the UK creative sector has a huge export opportunity at the moment. That is a broad answer to something that I could be quite specific about.
Q3 Chair: That is helpful and useful for setting the scene. Martin, your company worked on “Here”, a movie that was shot in the UK. How important do you think that AI will be, following Ben’s point, to maintaining the UK’s competitive advantage as a destination for production?
Martin Adams: It is absolutely crucial. We know that we have the production facilities, and that they are well regarded and impressive. We know also that the UK has done a decent job of attracting technology talent and the innovation talent that can work on building innovative and valuable AI algorithms. For a filmmaker like Robert Zemeckis and the cast—Tom Hanks, Robin Wright and so on—the combination of those two things made “Here” a film that was possible to shoot in the first place, and the UK an exciting and maybe the only valid place that they could come and do it on the budget that they had. This was a Hollywood movie, but it was not at Hollywood’s huge budgets. That combination of innovation and the more traditional reputation that the UK has for production and post-production is essential.
Q4 Chair: Nick, what do you think is needed? Where is the competition coming from? What is needed for the industry to grow further and maintain that competitive advantage? What do you need the Government and, more broadly, the sector to do?
Nick Lynes: The UK film rebate is a very successful initiative. We will be looking at quite a serious augmentation of the filmmaking process as we go through the next few years. It will happen very quickly. We will start to see parallelisation of processes—it has been a linear process traditionally. We will start to see a lot of different tools coming in. There is confusion between post-production and production—it is already a completely blurred line, to be honest.
The opportunity is for us to fully start to understand what that new filmmaking process is and be able to start to capture more elements of this new way of making film within that rebate. That is a great opportunity, because that will draw more filmmakers to the UK. There are a lot of opportunities. I could keep talking, but that is one good example.
Q5 Chair: Will the VFX tax reliefs that were recently announced be enough to maximise the potential?
Nick Lynes: It is basically what you wrap within that tax relief. Ultimately, the wording was designed around a linear filmmaking process with a traditional understanding of filmmaking. Certainly from my side, I am willing to work with you to help to understand the direction that things have already moved to, which maybe are not even anticipated. What will happen in the next couple of years? The outcome of how filmmaking will be done in the next few years is almost certain. Wrapping that around the new processes will draw a massive amount of people here.
I will give you a simple example. Using localisation is an opportunity. It is not dubbing and subtitling; it is visual dubbing. It gives audiences the ability to see a movie in their language in a digestible way. It means that you can now sell that movie in multiple languages for the price of the original language movie. If you can get that wrapped into your UK rebate, for example, you will be drawing a load more film production companies over here to take advantage of that. You will get the whole production just because you have bolted on a couple of extra things.
It sounds self-serving that I am delivering that piece of information, but I believe there is an opportunity and around maybe 10 or 15 other opportunities like that. It boils down to making sure that you understand how films will be made in the future, with the augmentation of some virtual done responsibly and some physical done in the same way that it is traditionally done.
Benjamin Field: The tools that are made and can be sold are absolutely an opportunity. The roadblock to selling those tools is their training data: clarifying where the training data that went into building them actually came from, and whether it is licensed or unlicensed. That is it. Out of all the conversations I have with networks, studios, panels wherever, filmmakers and production companies, the central conversation always comes back to, “Can I use it without screwing over our industry, by using software that is trained legally and licensed from the filmmakers who created the data in the first place?” That is the primary thing.
Chair: The ethical considerations. Okay.
Nick Lynes: Yes, I endorse that. My co-founder is a writer, producer and director. From day one, I knew from the commercial side that you were not going to be able to steal data and then try to sell it back to the same people. Under artistic rights protection, you cannot steal people’s performances or other forms of art and then sell it back to them.
Doing things responsibly is simple but it is not easy. It takes a lot more money and it takes a lot more time, but it is perfectly possible to do it. All our products sit on clean data.
With the final product that we developed by doing that, we realised that there was a commercial opportunity in building products that show audit trails and consent flows to make sure that there is an ability to prove that there is clean data within your products. We have had great successes. We are speaking to the same people that you are talking about in the US and in the UK because you can go in and say, “These are all the huge benefits. You are bringing costs down, which will increase the amount of production overall, but we can do it with this clean data. Here is the proof.” If we did something synthetic, we got consent for doing that—there were consent flows within these products and audit trails to show where the data came from for the training.
Chair: That is interesting.
Martin Adams: We are maybe all saying on some level that good ethics is good business in this space. It is important when we are talking about AI, and thinking about how we can both stimulate its use and protect against nefarious uses, to distinguish the business models and applications that are covered in the news—OpenAI and so on, which are open platforms where it is difficult to exert any control over the usage that people might have. That control is not part of their business model.
However, when we talk about film and high-end television, we are much closer to that end use. We are much closer to the datasets. We are closer to the actors and the storytellers. Often, there is still a lot of human in the loop in these processes. That helps to prevent powerful technologies being used for pornography or for non-consensual uses. That is often missed in the debate.
Chair: That is helpful; thank you. I will move to James now, please.
Q6 Mr James Frith: Thank you, Chair. I should declare an interest: my wife is a jobbing actor and has done voiceover work as well, which she will be pleased to have had an advert for here, I am sure.
It is impressive to see you wrestling with the real issues and—by your word, Nick—not screwing over the sector that you are working within. I do not know if you caught any of the conversation we had with the Secretary of State yesterday, but I want to pick up on a point Ben made about the export opportunity helping to navigate the laws that exist in the US. Exporting work that you do here helps them to get around laws that they presumably cannot navigate without importing work that we do here. Is that a fair take of the point you made? Is that the opportunity?
Benjamin Field: It is not necessarily about circumventing laws; it is about working creatively to get around a problem. The problem in the US is that the reaction to the SAG-AFTRA strikes, the writers’ strikes and all the rest of it means that any time a studio suggests using AI, there is an immediate backlash. However, if it is made in the UK, with the guidelines we have set up through PACT and Equity and any number of the others that you would care to mention, those are aligned with how to make everything work. That primary problem of whether you are using software that is ethical and legal still comes back to being the main bugbear.
As it stands, we were able to convince one US network to enter into an AI development deal, which we believe is the first and only AI development TV deal inside the US, because of the work that has been done by industry bodies in the UK. That is not about circumventing laws; it is about changing perceptions.
Q7 Mr James Frith: Within the conversation that we are having about AI, we sense that the US and Europe have been faster in getting their laws together. The Secretary of State’s view was that it was too blunt an instrument in some instances in Europe. Are we slightly behind the curve on this?
Benjamin Field: We are behind the curve.
Q8 Mr James Frith: Do you hope that we will keep the divergence between our laws and the US state of play, and that if we seek to protect the workforce that the US laws do, we need to have different guidance when—
Benjamin Field: No, we can protect the workforce and protect copyright at the same time. They are not mutually exclusive, and we are not looking to take advantage of our workforce at all.
The EU AI Act classifies the risk from AI to the creative industries as low and, frankly, that is ridiculous. The creative industries are the second largest export business in the UK; the threat from AI to our workforce is huge, and it could decimate our industry. We need to legislate to ensure that the practices that we can employ are legal and ethical. That is our point of differentiation, so that we can say, “We have laws that set out exactly what is legal. Therefore, we can export that and you can buy it from us knowing that it is legal, safe and responsible”, and it enhances our workforce and does not trample over everybody’s rights.
The industry is crying out for us to unlock that. To put this in perspective, I probably do one to two panels a week—they are fairly significant. This afternoon, I am speaking in Marrakesh to the world science congress. I am not going—I am doing it online.
Mr James Frith: You had better get a move on if you are going.
Benjamin Field: Yes, especially after I missed my opportunity to get the right train this morning. Anyway, we look at those as our main points of differentiation. A massive opportunity lies there because, as soon as we can unlock this and say, “This is what we say is legal. Therefore, everything we export is safe”, people can buy it from us. A massive market is waiting to do that.
Q9 Mr James Frith: That is great. Nick, you talked about unintendeds and wanting to work with us. As we develop the conversation, it feels very much like you are trying to be the honest broker, but also slightly reporting from the frontline on the developments—both the good stuff that we should seize and the opportunity that we will come to around growth, and the losers and the winners.
What are the other unintendeds? You talked about the synching of the actors’ mouth movements with the sound of the language. Is that the matching of the visual, or is it also the replacement—hence I declared an interest—of the narrator as well?
Nick Lynes: Yes. I knew what you were probably getting at when you declared your interest there. We use voice actors.
The truth of synthetic audio right now for the full performance range is that it is not there yet, and a question should remain over whether it should ever be implemented. It should not come from the Government. Copyright has legal precedent, to reference what was previously said. A number of court cases are going on at the moment all over the world where copyright decisions will be made. Ben is having conversations with those studios over in the US or in the UK. Those distributors are cautious right now because no precedent says what the situation is around copyright and LLMs or large datasets—or any dataset, to be honest with you—that generates something using AI.
Once that starts to become real and understood in legal precedent, it gives us an opportunity to be able to navigate the world a bit better. Until we can get to that point, which is a matter of time, unless you can prove that you have clean data in your system, people will not be willing to use your systems and so on. It will have to be a self-serve, ethical, clean data approach for a while.
Chair: On the issue of voice dubbing, we will go over to Rupa.
Dr Rupa Huq: Yes, I was going to ask about the use of AI in voice dubbing. How do you see post-production changing with AI voice dubbing? There was always dubbing of foreign language films, for example.
Nick Lynes: Currently, we will use the same actors. They will do the same performances and that audio will be driven through our system, which then changes the lip sync so that the audiences will be able to enjoy the content in what looks like a native language experience. That is currently how things are done.
Of course, from the fringes, there are a lot of synthetic audio companies out there. The question that we are not necessarily answering ourselves, because we are developing the software—and if you like, you can put synthetic content or synthetic audio into our system—will be: if you use synthetic, how are the people who created the datasets that were used to create that synthetic voice being compensated? Did they give consent for that use?
Various territories around the world have quite specific laws coming into effect that say that you cannot do a blanket recording of somebody without them knowing the specifics of what it will be used for. I suspect that those laws will start to leach in as well—where you cannot take one performance, or even go back into a library and take people’s historical performances, and use those to generate performances in perpetuity without giving the right credit and the right remuneration to the people who created those performances.
Chair: Ben, do you have something to add?
Benjamin Field: Yes. When we look at what the future might look like, legislation that works around informed consent is particularly important. Companies are set up that take people’s voices from historical data and will use those datasets to drive future performances from current voice actors, and all the rest of it, through AI. That argument about whether informed consent should be built into any digital likeness needs to be clarified and legislated for.
It is not right that any one of us could take a phone call, say hello and try to work out who is on the other end, while the caller is scraping our voice data off that call and selling it to a third party, with no chain of accountability through that. The act of scraping somebody’s vocal likeness is not legislated for at the moment.
There is lots of talk about whether David Attenborough will have his voice on natural history programmes forever. For “Virtually Parkinson”, did we steal Michael Parkinson’s voice? No, we did not. We licensed it—for clarity. The point is that I have licensed it because I am an ethical producer. That is not to say, at the moment, that there is any reason why anybody else should do it the same way as I did it. You would like to think that we all do things responsibly, but of course nothing says that we have to do it that way.
In the States, you have the publicity likeness laws, or whatever they are called, and that works quite well—if it were not quite so disparate. For instance, I could do a digital likeness at the moment of John Lennon, because he died in 1980 in New York and the digital likeness laws last 40 years posthumously in New York. However, in California, which is where Marilyn Monroe died, they last 80 years, so I cannot do a digital likeness of Marilyn Monroe. In the UK, you can do anything. That does not necessarily feel right. Sorry—I will shut up now.
Chair: Martin was going to come in.
Martin Adams: Yes; thank you. We have talked about audio. I want to emphasise as well that we can do this with face likeness. On some level we do that when we allow a 60-something-year-old Tom Hanks to be aged down to a 19-year-old man and then aged up to an 85-year-old man throughout the course of a film, in the case of “Here”.
The trend here is that less and less data is required to be able to control and manipulate that important visual data. Our likeness is inherently important and sacred to us. In the long term, that provides exciting innovation opportunities. I could be a huge fan of “Star Trek”, and you can scan my data off what is publicly available of me online and then I can be an extra in a film about “Star Trek”, and I am excited. Wow—amazing.
However, the question is who owns that data. We have established this morning that data is what powers the creative opportunities and the innovation, and other opportunities here. Who owns that data all the way down to an individual level is an important question that has not been clearly resolved. I am sure that the social media companies will want to claim that they inherently own it, because maybe you are pulling data usage of their platforms to create those creative and immersive opportunities.
That area is related to film and high-end television or certainly will be, and we should look to take a leading position and regulate with some aggression.
Q10 Dr Rupa Huq: On the foreign language point, there was always dubbing. Actors always did Sylvester Stallone’s voice or whatever. At the same time, when you go to the cinema in France, you can see it in version française or version originale. To what extent could the immersive way you are talking about, which captures emotion and the whole lot, threaten the English language as the lingua franca and as a big global export, if it is so easy to switch on a thing?
Nick Lynes: I have had the privilege of working all over the world, and I know a lot of people who say that they learnt English as a result of watching English language movies with subtitles. There will not be a huge impact on that side.
In reality, the impact will be hugely positive, because imagine that until now, in many countries, dubbing and subtitling has been a barrier for people enjoying stories from all over the world. We have an opportunity here. Story is a human thing, but language is a regional thing. If we can have ways that we can now consume other people’s stories from all around the world, we will get to educate ourselves on other people’s culture. The proxy is that we are watching interesting and exciting stories, but ultimately we are being educated in other cultures. At the moment, we mostly watch only stories from our own cultures. This is an opportunity, because in reality, we will be exposed to a lot more exciting stories from all around the world and will probably be better for it.
Benjamin Field: I have some real-world examples for you. We have been having conversations recently about what technology could do to create regional languages, if you like, of pre-existing content. The tests that have been done show that they are an uplift to the economy and to businesses in the UK, and are not a downturn. We have been able to identify that that is an increase in opportunity for the UK economy, and not a bad thing.
Chair: Rupa, we will have to move on to the next question.
Q11 Mr Bayo Alaba: Thank you, Chair. Good morning, Martin, Ben and Nick. I want to talk to you about skills. Specifically, Benjamin, how easy or hard is it to recruit people into the industry with regards to AI tools in your field?
Benjamin Field: This area is waking up. The National Film and TV School has been in touch with us to set up a new course that will give training and opportunities to the next generation of filmmakers. You can see that it is quite early doors on all of this.
We have hired two individuals to help our creative output. Their sole purpose is to understand software, AI and Python. They are building tech that allows us to produce new programmes that we could not have done 12 months or even six months ago. There is this blend, and a merging of the IT sector and the media sector, in a way that is advancing more than it had done perhaps in the last five to 10 years.
I suspect that over the next one to five years the next generation of filmmakers will grow up with the knowledge of, at the base level, how to use the AI tools, which will aid in the production process, but they will also have specialist skills within prompting per piece of software. Prompting is that idea of writing a text piece that says, “Can you make this scene brighter and look like it was lit from behind rather than from the front?” Each of these tools uses its own language to get the best out of it. Broadly, editors all understand how to construct a story, whether it is unscripted or scripted, but they have a set of tools that they prefer to use, whether it is Adobe Premiere, Final Cut or Avid—the list goes on, but it is something that they work with better. You will end up with a division of prompt engineers whom you can bring in in order to escalate. We are not quite there yet, but we can certainly see that that trend is beginning to come in. That will become an addition to the sector.
Q12 Mr Bayo Alaba: Thank you. I will quickly quote someone. Neil Hatton, the UK Screen Alliance CEO, said that “the education system was not teaching the right blend of skills that we needed in not just visual effects, but games, animation and the whole 3D-visualisation sector. Those skills were maths, physics, art and coding. It is a strange mix, but it is the right mix for us.” Is that your opinion? Do you concur? Do you disagree?
Benjamin Field: I will be annoying and ask you to read the quote again. Also, when was that said?
Mr Bayo Alaba: Of course. I do not have a date, I am afraid, but I will repeat it.
Benjamin Field: Are we talking about something quite recent, rather than something that was said 10 years ago?
Mr Bayo Alaba: It is quite recent, yes. He said that “the education system was not teaching the right blend of skills that we needed in not just visual effects, but games, animation and the whole 3D-visualisation sector. Those skills were maths, physics, art and coding. It is a strange mix, but it is the right mix for us.”
Benjamin Field: That has a lot of logic to it because it seems as if he is talking specifically about 3D animation and CGI techniques in post-production. I understand where he is coming from. With every iteration of technology, you need to update the education programme that goes around that. I can see where he is coming from, but I suspect that that is isolated in his experience and does not necessarily capture the broader appeal of everything.
Martin Adams: We have about 150 people all spread across the world who definitely reflect that quote in terms of being a deliberate, intentional cross-section of people—from actors, artists, creatives, storytellers to machine learning engineers, AI algorithm builders and so on. Our chief innovation officer is a Belgian national but came to Bournemouth University and studied at its National Centre for Computer Animation. We are in a decent place in terms of being able to have some of the skills that are necessary to thrive in this new economy. Britain traditionally might not have the political might or the economic might, but it has had the cultural might. It has had a good brand as a place to come if you want to combine technology and traditional culture.
However, the elephant in the room is that the salaries and the general packages that are offered to bright people like that by OpenAI, Google Gemini and so on are ludicrous. They are hard to compete with. I am British. I have lived all around the world. I have studied all around the world. We chose to build Metaphysic in Britain for the reasons that I mentioned.
We need help. We need to be able to continue to point to the cultural footprint that the UK has. I welcome all of the tax breaks around independent films and around VFX because they definitely help to continue that, but we also need to make it an easy place for the talent to come and to work and live here. Visas need to be easy, especially for those working in creative and ethical AI.
Nick Lynes: There is a distinction in this room between the people who are building these tools and the human resources they need and the people who are using the tools, but both of them represent the merging of technology, science and creative arts.
On the product development side, as Martin mentioned, at Flawless, there was this crazy journey of merging different cultures and different skillsets. We had AI scientists, ML engineers and pure researchers, mixing those cultures and that knowledge with engineering and product development, and then putting filmmakers in the mix—quite different groups, and those people were put together. As you say, it is extremely difficult to find people—particularly on the AI side and the science side—and we have people all over the world. We have offices in the US and offices in the UK. On the science side, it is difficult. On the engineering side, the UK is in an exceptional position. On the filmmaking side, it is in an exceptional position. We have a real opportunity there.
One thing for sure is that on the things you mentioned—you mentioned maths and physics, and maths and AI—scientists have had to learn a lot about how the creative world works to be able to make these products. On the production side, for the people who are using these tools, it is certain that the creators are having to become more technical.
It has been discussed that mixing the traditional filmmaking processes with the AI and synthetic filmmaking processes is happening, but we have to be careful about thinking that somehow this new community of filmmakers will emerge and will be able to make a film using gen AI like the traditional filmmaking industry and those incredible artists who have honed those skills over decades. I can say with absolute certainty that that new community will not be able to do that. It has to be the existing people from the existing industry with all those amazing skillsets who are using the tools. We believe in putting those powerful tools in the hands of those same creators and letting them augment themselves. It has to come from an existing industry augmenting itself and moving forward.
Chair: We need to speed up a bit.
Q13 Mr Bayo Alaba: My next question is for Nick and Martin. This is more around regional disparities and recruitment as well. Do you sense that it is harder to recruit from certain regions over and above others?
I will add another element to that question around career choice and career pathways, and trying to bring people into the industry to keep them and show them a pathway through. Do you feel you can predict and promote a long-term career path within the industry?
Nick Lynes: The reality is that, in our case, developing these products is so complicated that we need people in the same place a lot. London is our UK base, so we only recruit from around London because people cannot commute. I know there is a lot of fully remote work, but we cannot do what we do fully remotely.
The frame of reference I have is that, in the UK, we have been able to recruit well on the product engineering side and we have hired some scientists, but we also have an office in LA. The truth of the matter is that, in terms of the conversation we have been having about these brilliant people from all over the world, they generally gravitate towards California. We have been successful in recruiting top AI scientists and people on that scientific side of things in California but less so in the UK.
Mr Bayo Alaba: Martin, do you want to add anything there?
Martin Adams: Yes. To distinguish, we are remote first and that means we hire essentially for talent rather than proximity to a location. That has allowed us to compete with OpenAI and Google, because people can come from an area with a lower cost of living. There is less friction for them to get into work and so on. We have not found a problem hiring from diverse areas and we actively encourage that.
On the question about a dedicated career path, that is a challenge. The speed at which things are moving and the cross-sectional, cross-discipline nature of film and AI combining that we talked about means that it is hard for someone to tread a linear career path. That reality will change the way we prepare people, from school up to university and beyond. We have to embrace that reality. It is more like a scatter graph of roughly relevant experience, through which you draw a line of best fit, rather than charting it proactively.
Generally, if you are an individual—if you are a young student—but also, if you are a filmmaker or an actor, these are exponential technologies. We do not try to win the future by accurately predicting it now. We pick a direction. We pick a belief that AI will be part of creativity. We generally try to get experience across the field, given that.
Chair: I want to try to bring this session to a conclusion shortly after 11 o’clock, so I ask you all to be as concise with your answers as possible.
Q14 Paul Waugh: Thank you, Chair. I want to talk a bit about the ethical use of AI. We have all touched on this in different ways. Ben, I want to say that “Virtually Parkinson” is uncanny. It is extraordinary for anyone who listens to it. It is interesting that you said you have the licence, so you have permission. You have full buy-in from a national treasure and the use of that voice.
Given that we are talking about how we can best help British high-end film and TV, you have all touched on the fact that the legislation is not exactly in the right place, if at all, at the moment, and also whether Britain has any voluntary or industry standards. It is a balance. Ben said that good ethics is good business. Martin said that we need to legislate with some aggression.
Can I ask each of you in turn for your view of what we need to do in terms of both global industry standards and British legislation?
Benjamin Field: The industry standards are being laid out. PACT, Equity, the Writers’ Guild of Great Britain and others are bringing forward and publishing guidance. It sits around the central pillars of respecting human creativity in all its forms, be that future, present or past. That covers copyright, and it covers informed consent—those sit very strongly.
We then look at bias, and at making sure that producers feel a responsibility not simply to use tools uncritically. Within this area we are looking predominantly at generative AI—generative images, generative video, generative voice. Because AI software is built on historical training data, we know that bias will probably be inherent within those systems somewhere, and there is a personal responsibility not to perpetuate it.
I am currently engaged in early discussions with the heads of BAFTA to generate what we hope will be a certification scheme similar to the albert calculator. There is no legislation. That becomes a broader issue because it encapsulates social media use and all the rest of it, but within the broader commercial broadcast sector, guidelines around a certification scheme mean that if you want a programme to land on one of our UK channels, it cannot use X, Y and Z, or it must have informed consent. The industry is doing those things. We know that the albert calculator took a little while to get off the ground. For two or three years it was voluntary, and then it became compulsory for the BBC and others: if you want your programme on there, you must have that at the end of it.
The worry with AI is the amount of damage that could be done by setting a precedent if we roll it out at the same scale, which means that we need to legislate fast. The media sector, or the broadcast sector, has clearly agreed within itself what those guidelines need to be. I am happy to share those with you separately at another time, but that then needs to be legislated for.
Paul Waugh: That is helpful.
Nick Lynes: The main topics are around copyright, consent and compensation. Distributors are currently, to some degree, regulating the industry through their fear of not understanding exactly where copyright law will land. Things are being throttled appropriately, which is the same objective you will probably be looking for with legislation. The guilds and unions in the creative industries are appropriately powerful and also in some way govern how people operate.
At the moment, we need to be careful about jumping in too quickly, bearing in mind that this is coming from the people who proactively wanted to define ethics and the correct protections for artistic rights. I am not trying to push back. I am saying that it is dangerous if we jump in too quickly to something that is moving so fast at the moment and we do not do it appropriately. At the moment, at least from what I am seeing, the industry is well protected from the mechanics that I have described. We are going through it gradually. We should not do it aggressively. We should not rush into it. We should have a regular dialogue that basically gets both sides of the table to understand the situation. If we can do that, the timing will become clear when we need to do something.
Benjamin Field: I have a natural response to this. The industry bodies that have been engaged in this conversation have not been doing this as a kneejerk reaction. It has taken 18 months to two years to get to the point where we all agree on the central pillars. To suggest that this is an aggressive kneejerk is not accurate. We are therefore looking for legislation to back up what the sector is saying. That is a natural progression of things. It is not kneejerk—I thought that was important to say.
Paul Waugh: We will let you two take that up later. Can I bring in Martin?
Nick Lynes: For clarity, I did not say kneejerk. I said that we do not need to move too quickly on it.
Paul Waugh: I get that. Martin?
Martin Adams: Yes, thank you. It was me who talked about aggression; I stirred the pot there. The aggression comes on the obvious things that we know we should legislate on and that maybe people do not have their eyes on, which is the individual data for people like you and me—not necessarily the big Hollywood or British film stars, but the individuals whose data could be used to power up in the way that those film stars currently are. That is where we should have aggression.
The areas where we should have specificity are the broader areas that we are talking about around film and high-end television. The EU approach is driven by abstract concepts of AI and harms. That is the wrong approach. The US approach that we are likely to see with the incoming Administration is the opposite—totally hands off, like an ostrich with its head in the sand.
The UK has an opportunity with what we have talked about today. It has the filmmakers and the creatives who use and build day by day this technology. We have an opportunity to be specific in legislation.
On the basic principles, my background is as an intellectual property and data privacy lawyer, and I met my co-founder at Metaphysic at Harvard Law School—we are both lawyers. We started with some clear guide rails, which were no politics and always using informed consent. Those are some pretty fundamental and uncontroversial pillars that we could go with.
Economic rights-wise, if we are dealing with an actor whose face or voice has been scanned, and we are now going to use that in a production context, I think what SAG-AFTRA established was pretty decent and pretty reasonable, which was that we would pay them for a reshoot, for example, on the basis of them having been there. This technology is not all about driving the economics of performance down. It is about overcoming the real challenges of logistics, scheduling and getting people on shoot. It enables things, rather than automating and driving the price down. We should also look at that key principle.
Paul Waugh: That is fantastic. Thank you.
Q15 Tom Rutland: Good morning. I will ask a little bit about audiences. I should disclose an interest that I am a former official for Prospect and Bectu trade unions, which represent workers in the creative arts. Could you be brief with your answers, if possible?
Do audiences have a problem with the use of generative AI in programme making and should there be a legal requirement of disclosure?
Nick Lynes: I do not know. In our case, audiences do not have an issue from what we can see, because of the localisation—they are enjoying content. No one likes dubbing and subtitling, or at least it is not the ideal form of localisation. People are quite pleased to be able to consume things differently.
I am pausing on whether there should be a need to disclose. I lean generally towards, yes, it should be disclosed, but generally also the industry will work that out for itself. The people who do things ethically and those brands will start to represent trust marks and people will want to disclose it.
It comes back to this point, and again, I am genuinely not implying a kneejerk reaction or that we are jumping into things. Take the SAG-AFTRA agreement that was made at the end of last year, which we had some conversations about behind the scenes—I want to understand the difference between legislation and these powerful guilds and unions that are saying, “This is the way that we want everyone to operate.” We have to work within those guidelines anyway and we do not need to rush into things.
Yes, there probably is some benefit in it being disclosed, but I believe the industry will work that out for itself.
Benjamin Field: Yes, my gut instinct is similar. You could look at the initial reaction to “Gerry Anderson: A Life Uncharted”, which used deepfake. When that was announced, the reaction was pretty horrific. I was called every name under the sun. I was engaging in “digital meat puppetry”, which I thought was a wonderful phrase and I wanted it on a t-shirt. I did not get one. But it was particularly useful that when it came out, the audience who saw it said, “You announced that it was AI and we see the purpose of it and we accept it.” Conversely, if you look at the Coca-Cola advert that used AI, the audience did not really care. The industry went, “Oh my God, you have all used AI”, but the audience did not care. There are levels of nuance and the industry ought to work that out. It should not be legislated for.
Q16 Tom Rutland: Martin, do you have a view? You talked about having a no politics rule. Would certain types of content such as current affairs or political broadcasts benefit?
Martin Adams: Sorry, to clarify the question—would benefit in which way?
Tom Rutland: From having disclosure. Can you talk a little about your no politics rule, as well as your view on the former question, if that is all right?
Martin Adams: Yes, absolutely. For example, there was the Anthony Bourdain documentary about his life where they used AI, there was no disclosure and they had him essentially saying things that maybe he did not say or were vague. Those instances are short-term wins for the filmmaker. They get their film out there and maybe even get a press cycle, but they very much undermine the sustainability of the contribution of AI in filmmaking that we are talking about here.
Generally, I agree with the other panellists. I welcome any watermarking or any disclosure standard. When it comes to film, it is a transaction where someone pays money and/or gives their attention to be in a state of disbelief and to lose themselves in a story. It is certainly more difficult for some art forms and entertainment forms than it is others.
However, when it comes to—this bridges to your second question—anywhere where we are essentially trying to represent the current state of a situation, through a documentary, coverage of politics, a “Panorama” show or whatever it might be, in all those instances, frankly, we should have pre, during and post disclosure. That will give audiences the trust and transparency that means AI can be used for the right things and we do not get whipped up and talk about misleading people.
Nick Lynes: Can I make a quick point? AI has been used in many creative industries for decades. We need to acknowledge that there have been huge breakthroughs in AI recently—that is the reason for this panel—but the reality is that, in some form or another, AI has been used for a long time already. Understand that this is not entirely new.
Watermarking is interesting, because you can watermark indelibly and invisibly. It is perfectly possible to do that. Adobe is making real progress on that with the C2PA standard and being able to do that indelible watermarking. It is all starting to happen, and it is being provided by all the different technology companies. Everyone seems to want to endorse that as well—at least the people who are doing it legitimately.
Benjamin Field: I have a quick point of reference for you on that.
Chair: Quickly.
Benjamin Field: If you look at “Crimewatch” as a perfect example, we have been through this issue before with dramatic reconstruction. Initially, during “Crimewatch” episodes you would always see “dramatic reconstruction” emblazoned on the screen. However, in recent years that has been dropped, on the basis that everybody now knows what a dramatic reconstruction looks like. That is an industry thing that has been accepted. It has not come from the outside in.
Chair: You are allowed off; that was a good comment. Natasha, bring us in, please.
Q17 Natasha Irons: Quickly because we are running out of time, I declare that my husband is a voiceover artist.
It is comforting to hear discussions around consent, that the industry is talking about that and that it is front and centre. Do some of the solutions for ensuring that consent is enforced lie in the technology itself? Can technology provide that comfort in terms of monitoring?
Nick Lynes: Yes. What a coincidence to have two people on the panel with a connection to voiceover, but I am glad that we made some decisions a few years ago around the ethical use of data. We have literally built consent flows into our technology. In the example where a new line of dialogue has been provided, it will get put through our system. It will then get sent out through an app and the talent on the other side will be able to check how it looks. They can press a button that says, “I consent to this.” If they want to, they can record a line of dialogue and send it back. That is one example of how there will be myriad different consent flows.
This goes back to the parallelisation of production and post-production, because you are back on the set, but you are doing it through technology. We at least—and hopefully other people as well—will build all sorts of versions like that to enable the continued use of these technologies.
Q18 Natasha Irons: I have a quick follow-up question. It is interesting that the TV and film industry is trying to find solutions around this. Social media was mentioned earlier. Once you have set these standards and you are working ethically, will the challenge be how to ensure that other providers in the media space do the same? Otherwise, on the one hand you will have content that is ethically made, while on the other hand content is produced that is not.
Benjamin Field: That is why I welcome this conversation so much. As filmmakers and industry professionals, we can influence only so much. We can influence only our own sector. The social media sphere is completely different.
Any programmes that are made commercially and then find their way online—if they are commercial first, as in somebody has paid for them and they are then distributed online—will have been made to the standards that you would make commercial television to. There is then a discrepancy. If somebody makes a programme at their own computer using all the AI tools at their disposal, will they follow the same guidelines? The answer is no, absolutely they will not.
The internet has always been there for people to break the mould and a lot of innovation has come out of that, but that is why we as an industry uphold standards. We are here saying that this has wider implications, not least of all because the TV sector is in crisis. We are now in this position of a perfect storm. The whole expression of “digital first” has come in, which means that we are all exploring how to monetise social media content. Actually, we are sliding into this place where our programmes will go out on socials first, so who will uphold the standards that we have set ourselves? Ofcom is not in a position to look after social media at the moment, despite the fact that that is now under its remit. It is not in a position to do it.
We need to look at how we then influence and how we can stop these things from sliding. The answer is sat with you guys here asking the questions you are asking.
Martin Adams: There is a piece here around regulation and trying to have common industry standards, but we are also seeing the start of a maturing product space. We have said that if you want to put Tom Hanks in a shoot and he cannot be there, we know that that needs to use his data. There is an emerging product space—Metaphysic is in this space with our pro product—where anyone can be scanned and can then manage and provide specific instances in which a production company or a studio can use that data to power him up into a shoot or to manipulate his appearance. That can be used to police non-permissioned uses and non-permissioned deepfakes. It is not simply about regulation but also about having that space for the product and the data protection.
Chair: Thank you. I am sorry that we are a bit pushed for time. We are staring down the barrel of Prime Minister’s questions. If we did not get to anything, or there is any further evidence you want to share with us, please pass it over, because we want to make sure we have captured all your thoughts in our report. Thank you very much, Ben, Nick and Martin, for joining us today.
Witnesses: Liam Budd, Ed Newton-Rex and Dr Mathilde Pavis.
Q19 Chair: Thank you to our second panel for joining us. Many are concerned about the impact of irresponsible AI development on copyright and creators’ livelihoods. Our second panel will talk about how the Government can protect the rights of our creators and copyright holders.
We are delighted to hear from Dr Mathilde Pavis, who is a legal consultant; Liam Budd, who is the industrial officer for recorded media at Equity; and joining us remotely, Ed Newton-Rex, who is the CEO of Fairly Trained. Thank you all so much for joining us today. We have a hard stop of 11.50 am in time for Prime Minister’s questions, so I encourage you all to be pithy and entertaining in your answers. I will go first to Paul for the questions.
Paul Waugh: Thanks very much for coming. Ed, first to you, you released the statement on AI training data in October that was supported by many leading figures in the creative industry, from Björn from ABBA to Thom Yorke and many others. Why did you feel that statement was needed, and has it achieved what you hoped it would?
Ed Newton-Rex: Generative AI can be a powerful tool for creativity but sadly right now, as is commonly known, the majority of gen AI companies unfairly exploit the life’s work of the world’s creators. They use that to train models that compete with those creators. This is clearly illegal in the UK. We can talk about that. It will probably be judged to be illegal in the US as well.
AI companies tell us regularly that the scraping they do is good for creators. This statement was intended to show that, actually, creators do not think it is good for them. It has more than 36,000 signatures at this stage. It is a simple statement rejecting unlicensed training as a major, unjust threat. We had a bunch of new signatories in the last week or so, including Stephen Fry, Hugh Bonneville, Miriam Margolyes and other actors.
I hope it shows the Government that changing the law to allow training on copyrighted work without a licence, which the FT has reported insiders are saying is the Government’s preferred outcome—I do not know whether it is—is totally unacceptable to Britain’s incredibly important and rightly respected creators.
Q20 Paul Waugh: In your opinion, what legislative routes would be better than that?
Ed Newton-Rex: The alternative to scraping training data is licensing training data, and licensing is absolutely the right approach. It is already done by many companies, but lots of AI companies also scrape training data.
As I say, in the UK it is currently the law that you have to license training data for generative AI. Some in Government, probably unintentionally, have recently been saying that copyright law has some uncertainty. That is not the case. We can maybe talk about this further, but copyright law has no uncertainty in the UK. It is clear that there is no text and data mining exception for AI training. Lawyers do not think there is. Even AI companies do not think there is. Google recently made a statement saying that the UK should enable TDM for both commercial and research purposes, which is obviously a tacit admission that that is not currently the case. I do not know where the idea of uncertainty in UK copyright law is coming from.
Q21 Tom Rutland: Perhaps to Mathilde and Liam: how are the concerns of the statement’s signatories about the impact of generative AI on their livelihoods playing out in practice?
Liam Budd: We represent a broad range of members. They are actors, singers, dancers, supporting artists and variety performers. Many of our members have signed that statement.
It is important to distinguish between authors and copyright owners and our members, who have performers’ rights and will also be rights holders. Typically, they would assign or license their rights to the producer or the copyright owner. They have shared interests. They have the same goal here in terms of protecting copyright, but I suppose the impact on our members is even greater. We are talking about their image, voice or likeness, and that makes them vulnerable because this data is sensitive. They share their concerns, and they are aware that, basically, large-scale intellectual property theft has happened. Nobody in the creative industries is happy about this.
In terms of the impacts, various things are at play. The first is the displacement effect. Members are aware that this technology is replacing them; effectively, they are competing in a marketplace with generative AI systems that have been trained on their creative works without their consent. Our members are at the coalface of this, particularly audio artists or supporting artists.
This also directly impacts their income. Their income is derived from licences or assignments that they create project by project. This is cutting off a potential income stream for a workforce that is incredibly insecure and often relies on those secondary payments. This also has big moral and ethical issues. Their privacy is being hijacked and a lot of moral issues are at play.
Dr Pavis: To go back to Ed’s point, UK law is pretty clear on mining. If it is done for anything other than non-commercial purposes, it is not allowed; you require consent. We do have less clarity, however, at the later stage in the AI process, when we have the generation of performances or works that look and sound like pre-existing protected work. What happens? Is that an infringement? Is it not? If it is the result of the tool having been trained on what we sometimes call dirty data—unlicensed and unlawfully accessed data—it is likely to have carried that infringement through.
For performers, the real vulnerability comes from the fact that we have no protection in the UK at the moment against unauthorised digital imitations of people, which is essentially what AI generates when it synthesises a voice, face, a whole body or all of it and integrates it into an output. The reason for that is that our intellectual property framework, which is not copyright but performers’ rights for this particular point, did not have that application in mind for the technology.
Legislators did think about imitations in the past, but at the time those laws were introduced—in the 1960s, then internationally in the 1980s and in 1988 for the UK—we thought that the most a performer could face was a tribute act or a soundalike or lookalike: not a big economic threat, nor a big moral threat; if anything, maybe a compliment. Now the technology has changed that because, when you can be imitated at scale, it is a different game. Those are different economic and moral threats.
We expect the UK to have a system where your digital self and your physical self are equally protected, especially now that our digital lives are such a big part of our personal and professional lives. Performers happen to be the canaries in the coalmine on that point.
Q22 Tom Rutland: Liam, how responsive do you find employers within the film and TV industry are to your efforts to negotiate new AI protections into contracts?
Liam Budd: We have collective agreements across film and TV with the major broadcasters, with PACT for film and TV, and also with distributing platforms. We are currently in negotiation to establish AI protections for film and TV contracts, which will eventually set the industry standard. We have been in rounds of negotiations for the last six months on this issue.
Both sides accept that we all want to achieve an ethical framework based on consent and compensation, but we are not there yet. We hope that in the new year we will have a new set of frameworks that effectively govern how the industry operates.
Q23 Tom Rutland: Brilliant. Mathilde, in cases where performance contracts were signed before AI tools had been thought about, what can be done to prevent an earlier work being used to train AI models that might put the performer out of future work?
Dr Pavis: That is a good question. Contracts are the vulnerability point or the most strategic area where you can intervene because, if you think of rights as a big door that can protect you if you close it, the contracts are the hinges that can swing it wide open, if the person who has those rights does not enjoy the bargaining power to keep them.
The answer is that it depends. Some contracts will have been written only to allow a rights transfer from the performer, author, artist or creator to the producer or distributor for the purpose of the project. They are narrow and sharp, and they do that—they transfer the rights for that purpose only. If you are in that situation and the contract is narrowly written and interpreted, you are fairly well protected. It means that the people you have collaborated with did not acquire the rights to do more than that. That seems sensible and reasonable. We all believe in and want to work on that basis. This is what industrial agreements have in place.
The problem is that those contracts suffer from scope creep and language creep, meaning that often the intention of the parties is, “Let’s do a film”, “Let’s do a series”, or, “Let’s collaborate on this album”, if you are in the music industry, but the text reads something like, as you will have seen if you have a social media account, “By virtue of this contract, you are transferring all rights in perpetuity for all applicable terms, technologies and uses known today or coming in the future.” Those broad buy-out clauses, as we call them, are becoming standard. They may have opened a back door to that statutory protection, which we want to see closed, or those clauses made unenforceable.
Q24 Tom Rutland: Interesting. Ed and Liam briefly, how far are creators likely to go to protect their rights? Will we see more strikes in the screen industries?
Ed Newton-Rex: Creators are organising a large and growing backlash to the widescale intellectual property theft that is happening in the generative AI industry. A large, vocal group of creators honestly hate generative AI, and we know that a key reason for this is that they consider it to be stealing their work to compete with them. We have seen the SAG-AFTRA strike in the US. We have seen British artists say that they would be willing to strike over this. We have seen online protests leading to AI film screenings being cancelled and experiments in generative AI from the BBC being walked back. Honestly, if AI companies keep doing what they are doing, I expect the backlash to continue to grow.
Liam Budd: To echo that, the vast majority of performers are pessimistic about AI at the moment, given all of the intellectual property theft that is going on. Our negotiations could not be at a more important time. We are building on what SAG-AFTRA achieved in the US. When we survey our members, AI comes in as the top priority for these negotiations. There is a huge appetite for real change to modernise their contracts.
We have also put together an AI toolkit with a suite of advice and guidance that our members can use, and they have been using that to assert their rights. When it comes to GDPR or intellectual property, they have been using our advice to stand up for themselves and make sure their rights are protected.
Q25 Dr Rupa Huq: In a similar vein to our last panel, with specific reference to voiceover and image people, I wondered, Liam, in terms of Equity: if this is such a precarious industry—we keep hearing, even with Gregg Wallace, that you have your presenters at the top and your short-term contract people—how unionised is it? How easy is it to recruit members if the work is so short-term rather than a waged industry—even among the existing voiceover and dubbing actors and translators who will be put out of work by all this?
Liam Budd: We benefit from incredibly high density in Equity, which means that we can be a strong union to protect our members there. Particularly in areas outside film and TV—for example, audio or video games—it is an even more precarious sector because we do not have collectively bargained agreements. That is where there is an even greater risk.
Yes, we have been doing a lot of work to raise awareness around these issues. For example, we launched our campaign almost a year and a half or two years ago on this issue, because we know that audio artists have been incredibly engaged on this matter. We have been campaigning for strengthened intellectual property rights for performers, which has gained a lot of traction with audio artists.
Q26 Dr Rupa Huq: Mathilde, is the legal framework in the UK sufficiently robust to protect—
Dr Pavis: For performers specifically and voiceovers, the answer is no, primarily because it relies on a patchwork of different legislation. For your intellectual property rights, you go to performers’ rights and to the Copyright, Designs and Patents Act. To protect your voice or your face, you would have to default to the general data protection regulation, which was not designed for that. The two do not speak to each other well. Your third option is a contract, which is largely unregulated.
This is where the Government can have real intervention. You have two options. The first is to go in like surgeons and plug the holes in those different areas of law: you update or introduce a new Bill to amend the intellectual property framework, you make it speak to GDPR, and you look at contracts, making sure that unfair terms are unenforceable. We have done that for consumers. We have done that for employees. We have even done that for commercial contracts in business-to-business transactions. It is possible. The next group that needs it are artists and creators, because of the situation they find themselves in.
The other option is to go for a wholesale, more comprehensive framework and introduce likeness rights. They exist in other countries. You can learn from lessons there. You have the US and European doctrines that you can learn from. Importantly, whatever new rights you introduce, do not make them transferable. Keep them to the person they are designed to protect. Otherwise the distributor, the media companies and the platforms will acquire those rights, and they will be running the show and will not protect those people’s dignity, privacy and autonomy, which is why we are here.
Liam Budd: At that point, we need strong image rights. The island of Guernsey has an interesting framework where you can register your image, which we could look towards as inspiration.
Dr Rupa Huq: New laws. Bollywood will die if all this carries on to its logical conclusion. Anyway, thank you.
Q27 Natasha Irons: Thanks for coming in today. Following on from the previous panel, it sounds as if at least some in the industry are keen and want consent and copyright and to make sure people are compensated for their skills. The Government are about to consult on the interaction between copyright and training data. What could that consultation look like, and what should be included within it?
Liam Budd: Inevitably, there will be a conversation around the opt-out. We say absolutely abandon that. That would be catastrophic for the industry and is not feasible. We can learn that from the EU.
Hopefully, that consultation will see specific questions around performers’ rights. That often gets lost in the conversation. People are focused on copyright owners and there has not been a lot of focus on performers’ rights, which are the neighbouring rights, and they are even more vulnerable. Even greater legislative change is needed there. I hope that there will be clear questions around the performer rights framework and how it can be strengthened to give performers the protections that they need.
Ed Newton-Rex: I want to echo that; I totally agree. I want to dive into the opt-out point a little more, because there are rumours that our Government might consider an EU-style opt-out scheme.
Opt-out schemes for generative AI training quite literally do not work. There is no way of successfully opting your work out of training, given that legal copies of your work are all over the internet and you have no control over these legal copies. No opt-out scheme in existence can successfully opt out these downstream copies of your work. Opt-out in this sense is an illusion for rights holders. It gives rights holders no real control. This is probably the main reason that AI companies currently favour opt-out as a solution, as evidenced by the fact that techUK recently came out in support of opt-out.
Opt-out is unfair and unworkable for many more reasons, including the administrative burden and the fact that, as good data shows, most people miss the chance to opt out when you give it to them. I thoroughly agree with Liam that opt-out should not be considered; putting it into the EU AI Act will come to be seen as a mistake.
Dr Pavis: A strong reason, in my opinion, to exclude even the EU opt-out regime from the consultation is that it is likely in breach of the UK’s international obligations under intellectual property treaties. You have several conventions: the Berne convention, the TRIPS agreement, the Rome convention, the Beijing treaty and so on. They are trade agreements, part of the WTO package, that clearly outline that any exception to or carve-out of creators’ copyright, including performers’ rights, cannot place an undue burden on the rights holder: it has to be limited to certain special cases and cannot conflict with their normal commercial exploitation. That is called the three-step test.
That opt-out regime, if introduced here, is likely to be contrary to that test. Those treaties are binding on the UK. If there is one silver lining of Brexit—I am surprised that I am going to say this—this might be it. The UK did not have to introduce the regime, and that is a blessing in disguise for the creative sector.
Q28 Natasha Irons: I have one follow-up question. It sounds even from the previous panel that everybody is keen for legislation to sort this out. Can you outline the downsides of an industry-led, more informal approach versus legislation, so that we understand where our intervention is necessary rather than letting the industry sort this out for itself?
Liam Budd: The film and TV industry, in particular, has a framework for collective bargaining, which means that the industry can sort it out itself between stakeholders, which is great, but that is not the case across the entertainment industry at all. In vast parts of the sector it is not in place, industry collaboration has no framework, and you often see much more exploitative practices.
We also need the legislative framework to govern how the industry can operate going forward. We need the Government to incentivise generative AI models that source ethically and legally compliant data. If you get that bit right, we can crack on and get the industry running, and get things operating smoothly and effectively, but we need to ensure that the data that is sourced is legally compliant and ethical. Policymakers are vital there.
Ed Newton-Rex: Yes, I agree. I also wanted to mention what happens when the law is not clear. There is an interesting example in the US, where the legislation itself is pretty clear, but the fair use legal defence to copying perhaps leaves some room for debate in a given instance. When that is the case, you tend to see that AI companies are disincentivised from respecting copyright. They are disincentivised from licensing training data, because they are told by investors and potential VCs in their companies, “You cannot waste time licensing. The winners in this space will be the people who forget about the regulation and just go and train on whatever they can gather.” Legislation is key here because, without it, AI companies will inevitably be disincentivised from licensing.
Q29 Mr Bayo Alaba: I want to explore a bit the intellectual property around generative AI. My question is to you, Mathilde: what should the UK Government learn from the legal models on policy alternatives being developed in other territories?
Dr Pavis: There are two types of models. You have the US, which has likeness rights combined with industrial representation that provides strong protection for actors and also copyright.
Other countries like France have the same intellectual property framework that we have. The only difference for them is that they have done two things: they have introduced strong and broad moral rights, and they have also regulated their contracts. In licences, they have a list of terms and requirements for an intellectual property contract to be implemented and complied with for that contract to be enforceable. That has injected best practice even in areas with no union representation. They have done that part quite well.
Both those jurisdictions, although they have different legal cultures, also have personality rights, image rights and so on. They had that principle-based approach from the get-go. The UK, however, has a more pragmatic take—and always has had—of intervening only when there is a problem. We are at the crossroads. We have a problem. It is not a big one. It can be fixed pretty effectively, and then you can pass the baton to industrial measures quite effectively.
Q30 Mr Bayo Alaba: Brilliant. I know we have touched on rights holders in the EU, but I want to make sure that I get this right. How are rights holders in the EU responding to the opt-out clause?
Dr Pavis: Very negatively. Studies that perhaps we could share with the Committee show that online scraping does not respect the opt-outs put in place and communicated by rights holders, or by unions on their behalf. We are starting to see that empirical evidence coming through. After the year and a bit that the regime has been in force, we know that it is already not working in practice.
They are also raising the point that I mentioned earlier: this regime, which was conceived not in the context of AI and only later linked to AI, is now in breach of those international treaties, because it basically drives a train through core fundamental rights of copyright and performers’ rights.
Chair: I do not know if Ed wanted to come in.
Ed Newton-Rex: Thank you. I totally concur with that, and I would add that our statement on AI training is against unlicensed training, whereas the EU AI Act essentially permits unlicensed training with an opt-out. The Federation of Screenwriters in Europe, the European Publishers Council, the European Writers’ Council, the European Composer & Songwriter Alliance and News Media Europe, as well as individual rights holder organisations from Austria, Belgium, Denmark, Finland and Germany, have all signed our statement. The list goes on. Clearly, a lot of rights holders are not happy with the way that has landed.
Q31 Chair: Thanks. Can I conclude with you, then, Ed? If the UK wants to be a global leader in AI without damaging our creative industries, is that possible, and what do we need to do?
Ed Newton-Rex: It is absolutely possible. Some AI companies like to elide all of AI together and suggest that you need to deregulate all of it if you want any progress at all, but this simply is not true.
The major economic opportunity from AI does not come from exploiting the life’s work of the world’s creators without permission. If you look at the AI work that Sir Demis Hassabis won the Nobel Prize in Chemistry for, AlphaFold, it was not trained on creative work. Not a single important scientific discovery has come from AI trained on creative work. You train on creative work you have not licensed if you want to replace the creative industries with AI without paying them, not if you want to cure cancer with AI.
We should invest in data centres. We should provide AI companies with access to these. We should invest in AI education and training. We should give grants and tax breaks to AI companies. We should encourage the best researchers to come here through visa schemes. We can be world leaders in AI for healthcare, defence, logistics and science.
As regards AI in the creative industries, finally, we can be the home of responsible AI development and responsible AI companies. We can do all of that. We can be a global leader in all of that without destroying our creative industries by upending copyright law.
Chair: Fantastic. Thank you. Do Liam or Mathilde have anything to add to that?
Liam Budd: Yes. We have not touched on a few things today. First, the Government should fulfil their obligation to implement the Beijing treaty, to extend moral rights to audiovisual performances, and to ensure that those moral rights are unwaivable so that they cannot be signed away in contracts.
We need enforcement of the GDPR. There is helpful legislation around the GDPR and the use of performers’ personal data, but we need clear enforcement of those rules and a clear understanding of how they relate to intellectual property.
We need legal certainty on the copyright framework and on existing contracts, ensuring that people are not using old contracts to claim permission for data training or data scraping. We need legal certainty and the extension of rights.
Q32 Chair: You have quite a Christmas list there, Liam. Mathilde, what is on yours?
Dr Pavis: On mine would be to also be mindful of the cross-sectoral impact of AI. We are here from film and TV. This industry is cohesive. They have strong commercial relationships. They have mutual interests. If you just focus on bringing that industry clean models, you are good. They will do the rest. They will use it responsibly. They will do what is good for the ecosystem.
Other sectors, such as video games and advertising, are not in that position. A lot of independent filmmaking is funded because people in this sector take paying jobs in commercials and in video games. Those jobs are likely to be impacted potentially more severely because of the nature of the task and the absence of industry guidelines. That may have a knock-on impact on the talent available in film and high-end TV unless you come up with a system to track funding and incentivise the right behaviour.
Chair: Liam, you wanted to add something?
Liam Budd: My final point is that a similar inquiry into the video games industry would be welcome. In the film and TV industry, we have a framework that we are looking to improve upon, but other areas need a deep dive. A video games inquiry would be particularly welcome.
Q33 Chair: Thanks. Ed, did you have anything final to add to the Committee before we let you go?
Ed Newton-Rex: I suppose the only thing I would add is that it is not just people like us who think about this. The general public, when you ask them these questions, agree at least on the training point that we have been talking about. A poll from the AI Policy Institute in April this year asked people about the common policy among AI companies of training on what they call publicly available data, which is people’s copyrighted life’s work that they have been told to put online because it is good for exposure. Some 60% of respondents said AI companies should not be allowed to train on this and only 19% said they should. When the same poll asked if AI companies should pay for their training data, 74% said yes and only 9% said no. It is important to remember that this is a commonly held view.
Chair: That was helpful. Thank you all for navigating us so deftly through this particularly tricky area. I hope that both the Department for Culture, Media and Sport and DSIT will reflect on the evidence from your panel and the one we had before when they finish their consultation on this issue, because you have given us quite a nice road map for how we move forward. We will continue to monitor this situation closely. We want to see how the Government intend to support AI companies to grow while also upholding creators’ rights and ensuring that they are fully rewarded. We are grateful for your contribution to our inquiry today.