
Business, Energy and Industrial Strategy Committee 

Oral evidence: Post-pandemic economic growth: UK labour markets, HC 306

Tuesday 15 November 2022

Ordered by the House of Commons to be published on 15 November 2022.


Members present: Darren Jones (Chair); Tonia Antoniazzi; Alan Brown; Mark Jenkinson; Andy McDonald; Charlotte Nichols; Mark Pawsey; Alexander Stafford.

Questions 117 - 136

Witnesses

II: Laurence Turner, Head of Research and Policy, GMB; Anna Thomas, Co-Founder and Director, Institute for the Future of Work; Tania Bowers, Global Public Policy Director, Association of Professional Staffing Companies (APSCO); Carly Kind, Director, Ada Lovelace Institute.


Examination of witnesses

Witnesses: Laurence Turner, Anna Thomas, Tania Bowers and Carly Kind.

Chair: We are now going to move on to panel two, so we will be welcoming to the top table Laurence Turner, head of research and policy from the trade union GMB; Anna Thomas, co-founder and director of the Institute for the Future of Work; Tania Bowers, the global public policy director at the Association of Professional Staffing Companies; and Carly Kind, the director of the Ada Lovelace Institute. Good morning to all of you.

Tania Bowers, can I come to you first? Could you just give us an overview of the trends in how AI, algorithms and technology are being used in the recruitment process for people trying to get jobs in the first place?

Tania Bowers: Thank you for giving APSCO the opportunity to give evidence today. As we have already heard this morning, the labour market must always be dynamic. It already is dynamic. Automation has been involved in the recruitment and hiring industry for decades. It is not a matter of years.

As we have also heard, the UK in particular has a very savvy consumer base, from both a business and an individual perspective. Individuals increasingly want to control their ability to look for, find and choose work. The way the labour market is evolving is allowing them that control and flexibility. That is at all levels of the labour market to a certain extent, although APSCO and APSCO Outsource focus on the highly skilled roles in the labour market. Individuals want to be able to look and search for roles in the same way that they may look or search for holidays or goods. However, we recognise that this is a much more significant issue of people’s wellbeing and jobs, and of people filling roles in businesses.

The professional recruitment sectors have, as I said, used automation and AI in recruitment and outsourcing to support a faster, more effective hiring process. It is used in compliance and vetting checks, as well as in minimising human subjectivity and bias when pooling candidates from various online sources or from their own databases. This assists recruiters in finding and screening often scarce candidates more accurately and without subjective bias, to ensure businesses are offered the best talent pools.

Our members use technology for systematic, time-consuming tasks such as processing payroll, processing timesheets, helping to collate onboarding and compliance checks, and so on. Increasingly our members use things such as AI-powered chatbots to ensure strong communication and feedback with candidates, heightening their experience, answering standard questions and notifying them of progress.

Machine-learning tools can expand beyond an individual’s existing skills and look for skills that can be scaled or augmented, such as recognising that a mathematician could be a data technician or a teacher could be a good HR director, when these systems work to match candidates to jobs and jobs to candidates.
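To illustrate the kind of skill-adjacency matching described here, a minimal sketch follows. The skill map and scoring rule are hypothetical and invented for illustration only; they do not represent any APSCO member's or vendor's actual system.

```python
# Illustrative sketch only: a toy version of skill-adjacency matching.
# The skill map and scoring rule below are hypothetical.

# Map of declared skills to adjacent skills that could plausibly be scaled or augmented.
SKILL_ADJACENCY = {
    "mathematics": {"data analysis", "statistics"},
    "teaching": {"training", "people management"},
    "statistics": {"data analysis"},
}

def expand_skills(skills):
    """Return the candidate's declared skills plus adjacent skills."""
    expanded = set(skills)
    for skill in skills:
        expanded |= SKILL_ADJACENCY.get(skill, set())
    return expanded

def match_score(candidate_skills, job_requirements):
    """Fraction of job requirements covered by the expanded skill set."""
    if not job_requirements:
        return 0.0
    covered = expand_skills(candidate_skills) & set(job_requirements)
    return len(covered) / len(job_requirements)

# A mathematician scores against a data-technician role despite no exact skill match.
print(match_score({"mathematics"}, {"data analysis", "statistics"}))  # 1.0
```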

To a lesser extent it is used in the interview process, which facilitates access to potentially hundreds more candidates for roles rather than a traditional, very intensive, person-heavy interviewing structure. To a certain extent, it has democratised recruitment and is continuing to do so. Fundamentally, we do not support gig platforms, etc. We support traditional professional staffing companies, but also outsource companies, which are increasingly a huge part of the labour market, that manage hiring and retention for their end clients, either through their own name or white labelled, and manage the contract process of agency workers and highly skilled contractors. People are still fundamental to our industry, to ensure strong communication, to help individuals find the right role for them, and to persuade candidates and businesses in terms of making that right match.

Q117       Chair: I can understand from an employer perspective why all these technologies are helpful. Recruiting is a time-consuming process, and it is quite hard to find the right people for the right job sometimes, but from the employee perspective—or the prospective employee perspective—what is the evidence about the experience at the employee end of going through the recruitment process on these technology platforms? It might be good for the employer, but is it good for the employee?

Anna Thomas: It can be, but it depends what the tool is, how it is used and how it is audited and assessed. Just to add a little bit to the evidence we have just heard, we have done research on the use of both AI in hiring, which is ongoing, and auditing tools. It can be used at all stages of hiring, so at sourcing stage, screening stage and selection stage.

Just to add a bit of colour to the examples we have already heard, it can be used to match workers to job advertisements, targeting the people who are most likely to engage with the advert based on the profile they already have. It can be used to write job descriptions, headhunt people or compose messages. At the screening stage, games-based cognitive, behavioural and personality tests can be used. There is generally a move from assessing actual performance and capabilities to the prediction of behaviours.

At the selection stage, at a later stage, background checks can be done, which can include the use of social media. That is a bit of background to it. The review we did of auditing tools did show up some worrying features; in particular, that the auditing tools were rarely explicit about the purpose of the audit or key definitions, including equality and fairness. Assumptions from the US were brought in.

The tools were generally not designed or equipped to address problems that had been found, which points very strongly to the need for this approach in auditing to be embedded in a much wider sociotechnical auditing of impact. From the employer perspective, those are the things that I would pull out.

Q118       Chair: Carly Kind, in addition to auditing these systems, is there anything else from a UK law or regulation perspective that is not keeping up with the use of technology for recruitment?

Carly Kind: Although the Ada Lovelace Institute has not done research on this, Robin Allen KC and Dee Masters released a report in which they concluded quite clearly that UK law did not adequately cover the discrimination and inequality risks that might arise in the workplace generally when it comes to the use of AI, so it does seem to be very much a gap.

There are some tools that, it is fair to say, sit at very much the outer edge, not only of legality but also of scientific veracity: things like emotion recognition or classification, which is when interviewees are asked to interview either with an automated interviewer or otherwise on screen. A form of image recognition is applied to them and tries to distil from their facial movements whether they are a reliable or trustworthy employee.

This type of recruitment software is being rolled out, certainly in the US, and we are hearing about instances here in the UK. This very much stands at the edge in terms of veracity, but also legality. For example, under GDPR, emotion classification is not captured under the definition of sensitive data, because biometrics has only really been thought about in terms of identification, not this murkier form of classification or categorisation. That type of recruitment software falls in a legal gap and definitely raises concerns.

Chair: From my perspective that is horrifying.

Q119       Charlotte Nichols: Just to come back on that, I am very interested in these sorts of algorithms, particularly when it comes to things like behaviour prediction. For example, I assume that someone with a neurodiverse condition, including autism, ADHD and so on, might not perform as strongly on some of these behaviour and personality tests as people who are more readily able to predict the answer that the employer wants them to give, which is not necessarily what they think or feel about that question. Does a specific job of work need to be done, whether by the GEO or others, on the equality impact of these tools being spread out more widely across the labour market?

Carly Kind: It is a real concern with AI generally. Because AI works by using existing datasets to build predictions about the future, it tends to optimise for homogeneity. It optimises for the status quo. It is not very good at optimising for difference or diversity, for example. There is a very real risk that, if we integrate it into recruitment processes, it preferences candidates who have previously been successful.

In fact, Amazon in 2018 quite notably walked away from a recruitment AI it had started deploying because that AI was actively discriminating against women, for example downgrading CVs that included things like “women’s chess club champion”, because previous Amazon employees had primarily been male and the AI derived from that that successful employees look like men. That is a general problem with AI across the board.
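A toy sketch can show the failure mode just described, in which a model trained on historical hiring decisions encodes past patterns as a penalty on terms associated with an under-represented group. The CVs and labels below are invented for illustration and have nothing to do with Amazon's actual system.

```python
# Toy illustration of learned bias from historical hiring data (invented data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer rugby club captain",
    "software engineer chess club member",
    "software engineer women's chess club captain",
    "software engineer women's rugby club member",
]
# Historical outcome labels: past hires were predominantly from one group.
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight on the token "women": it comes out negative,
# i.e. the model has encoded the historical pattern as a penalty.
idx = vec.vocabulary_["women"]
print(model.coef_[0][idx])
```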

Certainly in the context of recruitment, it puts at risk real efforts that exist now around diversifying the workplace and providing for neurodiverse people and other applicants. When you add the layer of biometric technology on top of it, people with disabilities, people with non-conforming faces and people of colour are less likely to be recognised by those systems. The EHRC is starting to look at equality in the context of AI and algorithmic fairness, but there is a lot more that can be done.

Charlotte Nichols: You mentioned Amazon there. I know that the Motherboard leaks suggested that the union tracking they were doing was based out of India. I am interested in cases where, as you said, these are off-the-shelf programs bought in from other countries and jurisdictions that often have different requirements around things like GDPR. Is there a risk that these technologies are not compliant with UK law, and are holding or processing data about the UK workforce in countries that do not have the level of protection that we have now and might want to build in future, to protect from the potentially discriminatory impacts of AI deployment? Anna, you might want to come in here; you were nodding.

Anna Thomas: I thought it was a very good question. That is a risk, and we have found it very specifically in the context of the auditing tools with regard to US assumptions in particular. Unpacking that a bit, there is a real risk that the assessments and goals are based on an assumption of what a good employee is, based on current patterns. Similarly, the AI tools themselves are going to be trained on data that reflects past patterns of choice and allocation of resource, which can be problematic.

Going back to your earlier comment, it is also right that discrimination law aims to protect historically disadvantaged traits, but the way these systems shift bias means the range of traits affected is far greater in number and complexity than a traditional list of protected characteristics. The power of AI to discover novel correlations and make predictions that we otherwise could not is increasing, and that has significant implications.

I very much agree with what Carly said. It is great that the Equality and Human Rights Commission has AI in its strategy and is looking at hiring. It probably needs more resource to do that and connection with the other regulators, including the Digital Regulation Cooperation Forum.

Tania Bowers: The tech partners and members I have spoken to who have developed or are developing AI machine-learning programs all focus on the huge importance of stripping out personal data, with a much-heightened dependence on candidate skills, category of role and industry. This could mean stripping out everything from geographical location to school to employer. It is incredibly important to have very high-quality, clean data that you understand and control, stripped of personal data, so that the system can learn patterns and create rules.
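A minimal sketch of that kind of stripping step, assuming a simple dict-based candidate record, is shown below. The field names are hypothetical; the point is that identifiers and likely proxies (location, school, employer) are removed before the record is used for pattern learning, while skills, role category and industry are kept.

```python
# Minimal sketch with hypothetical field names: remove personal and proxy
# fields from a candidate record before it is used for pattern learning.

PROXY_OR_PERSONAL_FIELDS = {
    "name", "email", "date_of_birth", "postcode", "school", "employer",
}

def strip_personal_data(record):
    """Return a copy of the record with personal and proxy fields removed."""
    return {k: v for k, v in record.items() if k not in PROXY_OR_PERSONAL_FIELDS}

candidate = {
    "name": "A. Candidate",
    "postcode": "SW1A 0AA",
    "school": "Example School",
    "employer": "Example Ltd",
    "skills": ["payroll", "compliance"],
    "role_category": "finance",
    "industry": "professional services",
}

# Only skills, role category and industry survive the stripping step.
print(strip_personal_data(candidate))
```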

Coming back to your point about our members having to make decisions about which technology to invest in or to use if they are not in the position—which only a few companies are—of developing their own, it is very important to look to have something in the UK, but to also work in alignment with international movements in this area.

For example, one of our tech member partners is joining a programme in Singapore, where APSCO operates, called AI Verify. Other pilot members include Google, Meta and Microsoft. That programme is an initiative of the Singapore Government. It intends to produce a standard playbook and to evolve into a governing body with an accreditation scheme.

Elsewhere, our members will have to be very aware of initiatives in the EU. Others have referred to New York City Council’s recent law on AI, passed at the end of last year. What is critical for trust and confidence is this general framework, so that our members can use these tools.

Charlotte Nichols: I am very concerned, given how much time it took to fight for the blacklisting regulations and so on that we have in the UK, that we do not end up with essentially a new AI blacklist, which is even more difficult for workers to fight against or get themselves off.

Q120       Mark Pawsey: I wanted to follow up on the discussion about the kind of jobs that are going to be affected by this transition. Somebody listening in at the previous session might have been excused for thinking that the only area where it has an impact is in warehousing and distribution. We have heard from Tania about the role in recruitment. There are repetitive tasks that can be taken away from workers to improve the quality of their work, but is it purely those lower-skilled activities that AI can help with or is there a way in which this could be a tool to assist managers in their decision-making, for example? Anna, which jobs are going to be affected?

Anna Thomas: It is right that AI and automation do not only affect low-skilled workers. We also need to remember that they do not only displace jobs and tasks; they also create new jobs, change demand, and have a range of other impacts on the nature, conditions and, importantly, quality of work.

We are in the course of the Pissarides Review on the future of work and wellbeing, supported by the Nuffield Foundation, which is looking at this in great detail, not just at the macroeconomic level but at the NUTS regional level too, and perhaps more importantly combining it with local, firm and individual perspectives, to get a more holistic picture of how it is affecting groups of workers, occupations and sectors.

At a very high level I can say now that the impacts, both positive and negative, are very uneven across skill levels, occupations, sectors and areas. As a general rule, lower-skilled and poorer-paid occupations are more vulnerable.

Q121       Mark Pawsey: When you say vulnerable, what do you mean?

Anna Thomas: Vulnerable to automation.

Q122       Mark Pawsey: Why are they vulnerable? Vulnerable indicates a bad thing. Why is it a bad thing?

Anna Thomas: It may not be. There are potential positive impacts.

Q123       Mark Pawsey: So why use the word vulnerable?

Anna Thomas: I meant it in the context of risk from automation. It is right that there is a potential to create good jobs and to have positive impacts on conditions and quality of work for a number of people, but at the moment we are not seeing that happening. There is no trend towards the creation of good-quality jobs across the country or across sectors.

Q124       Mark Pawsey: Laurence, is AI always to the benefit of the employer and with no benefits to the worker?

Laurence Turner: First of all, thank you for inviting us to give evidence on this important subject. In terms of risk, if I might go back briefly to the previous question, I am not sure that lower-skilled to higher-skilled is the most helpful way of looking at the impacts of AI. We represent close to 500,000 workers across a very diverse range of industries, but a large share of those workers are in occupations that have been traditionally classed as lower-skilled. Those classifications change very slowly. They often represent quite old assumptions about the type of jobs that are being done.

When we look at occupations represented by GMB such as care workers and teaching assistants, where there has been hope expressed from some quarters that these roles might be replaced by AI and other forms of automation, these are predominantly low-paid women workers. We would challenge anyone to describe what they do as being low-skilled.

When we look at some of the other sectors such as finance and medicine, AI automation is here now in those roles, whereas in occupations such as drivers it is not so long since there were dramatic predictions being made that those roles were going to be rendered obsolete by driverless vehicle technology. Now, five years on, we understand more of the limitations of AI in those realworld deployment contexts. I personally think it is helpful to look at those roles that involve repetitive tasks and pattern recognition. That will change. It is a hazardous occupation to make predictions about where technology will be in 10 years’ time, but it is going to have an effect.

Q125       Mark Pawsey: Is the impact of AI being felt equally across all workforces and sectors, or are there some areas where there is increasing dominance? To what extent does it make life easier? You just spoke about distribution. We are some distance away from driverless vehicles, but every driver has a sat nav in their vehicle in a way that did not happen when I was involved in a distribution business 20 years ago, when we still had to rely on map reading. Sat nav and the ability to link between one drop and another make life a lot easier for the workers. Are there not areas where adoption of these technologies is making for a more pleasant workplace?

Laurence Turner: We argue that technology is usually neutral. This point was made by the witnesses from Prospect and Zoom in the previous panel. It is how it is implemented that matters. We see examples of new technology being introduced such as heavy lifting devices.

Q126       Mark Pawsey: You said neutral, but surely the elimination of repetitive tasks in a warehouse is beneficial to the workforce.

Laurence Turner: I am not sure I necessarily agree.

Q127       Mark Pawsey: You want to retain old, repetitive tasks; is that what you are saying?

Laurence Turner: If I might respond to a point made by the witness from Amazon, it was claimed there that the introduction of automation was reducing musculoskeletal injuries and other injuries associated with high intensity of work, but, at the same time as one form of automation has been introduced for those heavy lifting duties, our members report that there has been an increase in productivity targets and in the intensity of work that is algorithmically led.

Employers, I would suggest, have a choice on how this is introduced, but the growing disparity is also around knowledge and understanding of these systems. Good implementation of AI involves real expertise and a specific context. A lot of anxiety has been expressed today about off-the-shelf systems being misapplied. If workers do not have the ability to challenge the outcomes of models, particularly those that have what is often referred to as a black box effect—

Q128       Mark Pawsey: Are you not making a case for employers to forget about all of this and go back to the bad old ways of the Industrial Revolution?

Laurence Turner: We would never make that case. What we would say is that implementation of technology, whether that is AI or other forms of technological innovation, will lead to the best outcomes when it is based on negotiation and feedback between workers and management. In the case of AI, there is often not a good level of understanding of these models.

Q129       Mark Pawsey: Would you prefer not to have the introduction of AI in those circumstances?

Laurence Turner: We would not want to see AI being implemented badly in a context that is not fitted for that particular workplace. If I could give a specific example from some of our members in the utilities sector, at one particular employer they were fitted with a new driving system—the sort of system you referred to earlier—which provided the engineers with a series of instructions on how to most efficiently complete their rounds.

The instructions coming out of this system, our members reported, were perverse. When they did not use those instructions they were threatened with disciplinary action. When they then made a point of following it to the letter it did serious damage to a vehicle. That was just bad AI, so there just needs to be a bit of scepticism.

Mark Pawsey: We agree on the need for implementation of good AI.

Laurence Turner: Yes.

Q130       Andy McDonald: If I could stay with Mr Turner, you talked about repetitive strain injuries and the like in that particular instance, but you also did a report not that long ago about the perception of workers, saying that they felt surveillance technologies had a detrimental impact on their mental health and wellbeing. Could you say a little more about the findings of your report?

Laurence Turner: This report was based on a randomised sample of GMB members. It got just over 1,500 responses and 32% of members reported that surveillance at work was having a negative impact on their mental health and sense of wellbeing while they were at work.

It is important to say that this was a combination of many forms of surveillance, including traditional CCTV and the use of telematic data. There were some settings where workers, particularly in occupations where there is a risk of assault or sometimes difficult interactions with members of the public, said that there was some value in having a record of what was happening. These were usually cases where the implementation had been negotiated with the union.

When we did a breakdown for those workers who knew that they were subject to algorithmic working or processes in their workplace, the levels of negative mental health effects were much higher. It was close to 50%. This comes back to the point around a lack of understanding of how people’s data has been used. Only 28% of our members say that they have a clear understanding of how the data generated and collected on them is used by their employer. There are also some very specific cases of surveillance that will have negative mental health impacts.

It was interesting to hear the earlier discussion about Amazon. It was reported two years ago that Amazon had developed specific tools to monitor the metadata from closed Facebook groups, which were used by workers, including Flex drivers, both to discuss issues about work at Amazon amongst themselves and to organise. Those tools allowed Amazon to monitor activity within those groups. Of course, when that became public there was a feeling of real anxiety amongst many of our members, because they had placed trust in that platform, and that trust then felt violated by the actions of their employer.

There is good research on the sense of betrayal that workers can feel when it becomes apparent that the employer has been monitoring them covertly. This is what our members too often report; they will be called into a disciplinary meeting and presented with a set of numbers or metrics that they were not aware were being collected on them and that they do not feel confident in challenging.

Q131       Andy McDonald: That is an interesting development in that level of surveillance, where people are not aware of it happening. Are there other instances that you have come across, not just about people’s activity on Facebook? Are people always aware of the surveillance that is going on in the workplace? You have done a survey. How do you deal with the fact that they may not even have been aware that they were subject to surveillance?

Laurence Turner: It is a good point. Only 4% of our members said that they knew for a fact that their employer was using algorithmic processes in the context of their workplace. You might feel that is an improbably low number, but it reflects that lack of a culture of consultation around the implementation of these new systems. Perhaps it is not always realised that some of the data that has been collected through more traditional forms, such as the recording of calls in call centres, could be used by these systems.

I would suggest there is a big job of work for employers, trade unions and trusted third-sector organisations to make the development of these systems explicable and understood. Through that, we will get better satisfaction at work and all the good employment outcomes we associate with that, but we will also get the development of better systems.

Q132       Andy McDonald: Yes, in a more transparent environment. We want to see people continue to innovate and grow, but should there be a statutory duty to consult with trade unions in the deployment of AI in the workplace? How might that look? How would you answer the charge that that may cause detrimental impacts in terms of stifling innovation?

Laurence Turner: We are working as a trade union to break down those barriers through non-statutory recognition agreements, including through our groundbreaking agreement with technology companies such as Uber. There is not a level playing field at the moment. What we hear from employers, particularly smaller ones, is a lack of confidence in the regulatory environment.

Clarity would be welcomed by many in terms of deployment of these technologies, so we do support that statutory right to consultation. There would be interesting challenges about how that would be defined and where the threshold should be for consultation. It probably would not be sensible to have it when a routine software update was rolled out.

Andy McDonald: Presumably if there was a greater penetration of sectoral collective bargaining, that would be the framework within which you could pursue those concerns, with or without a specific statutory duty.

Laurence Turner: We strongly agree with that. A point we would also make is that a right to consultation on its own would probably be insufficient. It needs to be accompanied by some of the more traditional trade union policy demands, such as the right to access to workplaces. For some platform employers we work with, the line between the traditional physical workplace and an app or platform as a workplace has become blurred. We would suggest that the two dovetail.

Q133       Andy McDonald: The right to switch off, presumably, is part of that.

Laurence Turner: Yes, quite so.

Carly Kind: Last year, Germany adopted the Works Council Modernisation Act, which gave works councils the right not only to be consulted but also to have a technical expert advise them on the technical implications of a system, which seems like it might be an important aspect in rebalancing the power there.

Can I just make a related comment? It is very important in this conversation to break apart the term “AI”, because we are using it as a catch-all to refer to a great many systems. Some of those are much more benign than others. Even in this conversation we have spoken about recruitment systems, about worker surveillance and performance management, and about job augmentation or replacement. Those three buckets of AI systems have different implications, and perhaps at some level the obligation should be around transparency, whereas at other levels it is around consultation or empowerment. Distinguishing between those might help the conversation.

Q134       Charlotte Nichols: Something that we have spoken a bit about in the previous panel has also been touched on today, particularly in Mr Turner’s answers to Mark referencing the computing systems that some drivers have used. AI advocates promise increased productivity, but how confident can we be that AI actually delivers this? In the previous panel, they were talking about the human leadership element, but we are potentially missing the part about a human sense check on the shop floor. I am concerned about ending up in a situation where you have robots acting as gangmasters, essentially, within traditional workplaces.

Mr Turner, how can we have AI delivering the sorts of productivity gains that employers want and that our economy wants while still making sure that they are realistic, sustainable and based on human interaction?

Laurence Turner: We draw a distinction between different AI models, and we completely endorse the comments made before about the need for clarity about exactly what kinds of models we are talking about. Some are designed to deliver goods or services and some are sold to employers as a management tool. We know that employers across all sectors are looking for ways out of the UK’s productivity puzzle. AI has been presented as a possible solution to that, but there are hard limits to some of its applications. We talked about the aspirations around driverless cars that have not actually come to pass.

In terms of its use as an HR tool, we know that current machine-learning models are particularly weak when it comes to predicting social outcomes, including in an employment context, such as training outcomes or the potential for job layoffs. In some of the models recently reviewed the predictive power is under 5%. These are not models that should be treated as having an inherent value.

Quite a lot of work has been done on the development of pseudoscientific AI, sometimes called snake oil AI. As we have said before, there is a danger of AI being deployed that is not trained on a localised dataset or on a good-quality dataset, and then is not necessarily even understood by the managers who are using it. We only need to look to the Post Office Horizon scandal for an example of what can happen when a computer system is seen as unchallengeable or too big to fail. There is a danger of AI becoming a new form of modern Taylorism, but one where there is a greater understanding gap, and also of algorithms being used for short-term productivity gains to the detriment of the employer in the long run.
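One way to read the "predictive power under 5%" point is that a model's out-of-sample fit should be measured before it is relied on for decisions about people. The sketch below is illustrative only, using synthetic random data rather than any real workforce dataset, and shows the kind of check that would rule such a model out.

```python
# Illustrative check of out-of-sample predictive power on synthetic data.
# The data is random noise, so the R^2 score will be near or below zero,
# the kind of result that should rule a model out as a management tool.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))   # candidate features (synthetic)
y = rng.normal(size=500)         # "social outcome" with no real signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))

print(f"Out-of-sample R^2: {score:.3f}")
if score < 0.05:
    print("Predictive power under 5%: not fit to drive decisions about workers.")
```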

It was interesting to hear the previous witness from Amazon. It was quite an extraordinary set of evidence that does not reflect what our members are telling us about what happens in those warehouses. In that particular employer we have seen a short-term focus on productivity targets—the so-called pick rate—that has then contributed to the long-term recruitment, retention and worker dissatisfaction problems in that company, which are really coming home to bite now in a very tight labour market.

Anna Thomas: It is worth remembering that good work and wellbeing are associated with higher levels of engagement, motivation, innovation and productivity. Work is ongoing, but we think that, if the investment, design, procurement and deployment of AI systems have these desired outcomes in mind, the outcomes for everybody across the board are likely to be better.

Leaping back to a previous example and the intensification in the Amazon warehouse, if the aim was not solely to increase the number of bags that somebody had to pack within a minute, but the system was also designed with a more holistic understanding of the impacts on people, their wellbeing, dignity, autonomy, participation and all those factors that make up good work, then successful outcomes and sustainable productivity are more likely.

Although our work is ongoing, initial results suggest that a higher level of information sharing, collaboration and a partnership approach are likely to lead to better outcomes. We know, for example—and this is new from our review—that AI investments have been shooting up in the period between 2015 and 2022. Funding rounds have increased dramatically since 2019, from £2.5 million to £12.6 million, but how that plays out in terms of dissemination, use and impact will depend on the decisions that are made and what the goals are.

Carly Kind: The research is inconclusive about whether AI does lead to more productivity in the market. It is not there yet. In fact, there is a paradox—it is Solow’s paradox from the 1980s—which said that the computer age is visible everywhere except in the productivity statistics. That still holds around AI, so it is worth keeping that scepticism in mind.

Q135       Charlotte Nichols: On scepticism, to come back to the comments you made earlier, Ms Kind, about the need to perhaps have greater specificity when we talk about AI in Parliament around the different buckets that these technologies fall into and the different ways that we need to approach them, I just wanted to drill down a little further into that with you.

When we are looking at our legislative framework and how we deal with AI at the moment, where do you think the gaps are, both in where the law is and in our approaches? For example, I often think that we have things siloed within certain Government Departments, and perhaps it is the wrong regulator looking at a certain issue. If you were going to map out a framework that allowed this stuff to be legislated for properly, to have the right safeguards in place and to have the right people and Departments scrutinising it, how would you have those different pots mapped?

Carly Kind: It is a great question. We do a lot of work with members of the public, long-form public deliberation and engagement. What we hear from everybody when we ask about levels of trust and comfort in AI and other technologies is that everything is context dependent. Whereas people feel relatively comfortable with facial recognition in airports to process their passport checks, they feel much less comfortable with facial recognition in supermarkets monitoring potential shoplifters, or in schools monitoring children’s attendance. There are vast levels of difference, so context is everything. Building trust in different contexts is very important.

When it comes to AI governance, which is the overarching question, we have to think about an ecosystem approach that brings together self-regulatory behaviour, regulation and the implementation by regulators and other independent bodies. The proposal on the table from Government at the moment, as enshrined in the policy statement that was released over the summer, imagines six high-level principles—as someone in the previous panel mentioned—that are then entirely devolved to regulators to implement.

That is a challenging proposal, given that we have more than 100 regulators, some of which have overlapping domains and some of whose domains leave big gaps. Some of those big gaps will be felt first and foremost by workers, in fact. Part of the problem there is that the principles articulated included fairness, and that enabled regulators to each develop their own interpretation of fairness and apply it in their own particular context. Whereas context is important, as are sectoral definitions and interpretations, there needs to be an overarching framework. There needs to be consistency, and much more capacity and guidance provided to regulators for implementing any ultimate regulatory framework.

We hear a lot from people about their desire to have independent oversight and independent mechanisms for checking compliance with regulators. It is fair to say that people generally feel a real lack of agency and power when it comes to the digital domain across the board. They want to see more regulation, but regulation that is implemented and overseen by independent regulators.

The question has come up in this place and others many times about whether we need some kind of independent digital authority. There is still a question mark about that, but it is clear that existing regulatory functions like the ICO simply do not have the capacity to cope with the many instances of AI implementation across every domain and sector. I am sure others have other things to say, so I will stop there.

Charlotte Nichols: No, by all means carry on. This is what we are here for.

Carly Kind: I will add one more thing, then. We must not forget the importance of data protection legislation, as it underpins and cuts across AI entirely. We do not have AI unless we have data. Now is the wrong time to be trying to water down data protection legislation, which is indeed what is on the table. There are really important provisions in the current Data Protection Act that relate very clearly to AI—article 22, for example, which is around automated decision-making. Recruitment by algorithm would fall within that definition.

The Data Protection and Digital Information Bill on the table at the moment changes that provision slightly. It does not eradicate the meaningful benefits that are there, but it could be stronger. I would encourage looking across data protection legislation to understand how it fits with AI and how it is going to create a protective environment, albeit one that makes data available for use, particularly in serving public benefit and social value. A really holistic look at data protection as it relates to AI is quite necessary. Trying to confine it to a small compliance issue is not the way to think about data at this stage.

Q136       Charlotte Nichols: Just because we obviously have a lot of discussion about it this week in Parliament, where do you think the Online Safety Bill fits into that, or do you think that they are very separate and distinct forms of digital regulation?

Carly Kind: I do not feel I have the expertise in online safety to speak to that. I do not know if others do. Perhaps that is an indication that we do consider it quite a separate domain of regulation.

Tania Bowers: In terms of job boards and platforms such as LinkedIn, there is clearly an overlap. Online recruitment is here and it is only going to grow, so there is certainly an overlap.

Laurence Turner: Just going back to a point about legal gaps, the TUC AI working group, which GMB is represented on, has done some really good work on this, which we would endorse. The point that was made around the need for the existing legislation and body of law to be brought up to date is really important. We have a concern about the 2010 blacklisting regulations, which assume the existence of a digital or physical list. When The Consulting Association, which ran the blacklisting operation in the construction industry, was raided in 2009, its system was based on fax machines and index cards. It seems almost impossibly old fashioned today.

I would suggest the blacklisting of the future will not look like that, but it is not clear at all to us that the existing regulations would be sufficient in the case of a predictive model that was used for blacklisting purposes. We need to think about these models. It is sometimes assumed that AI might inadvertently discriminate. We need also to think about cases where AI might be designed explicitly to discriminate.

Some important points were made earlier about equalities. There is case law in relation to use of psychometric testing, for example, in the Civil Service, and its potential to discriminate against neurodiverse workers. Again, we should not be waiting for cases to be brought so that existing case law from a different context can be brought up to date. There should be regulations in place to anticipate the problems that are going to arise in the labour market as a result of wider use of AI.

Anna Thomas: We very strongly agree with what Carly said about the need for a pre-emptive and overarching framework for accountability. You see that really sharply in the workplace. In our gap analysis of how the GDPR sits, and where it does not sit, alongside equality law in the workplace, there are gaps in between.

Picking up on the example of the GDPR, that is one way you can think about impacts and rights in the workplace, but it is not all about privacy. There are limits to the extent that data protection can be a window into thinking about other fundamental rights. Workers are not only data subjects. AI also looks at future employees who are not even data subjects. These are really not things that can be dealt with solely through a sector or regulator approach, although there is work to be done on sharpening the regulatory remits.

Chair: I feel like we have only just scratched the surface today, and unfortunately we have timed out. If there is anything that you want to add in further detail that you have not already submitted in written evidence, please do so, because we are going to have to translate what we have heard today into thinking about priority recommendations for Ministers. The same offer goes out to anyone watching. If you have not submitted written evidence on these issues but you would like to do so, please do write to us. For the purposes of today I am afraid we have timed out. Thank you to all four of you for your contributions. We will now bring the session to an end.