Written Evidence Submitted by Dr Philippa Collins and Dr Joe Atkinson

(GAI0074)

 

 

 

Summary

 

This submission provides evidence on the current options for challenging the use of AI to manage people at work, and on how AI should be regulated in this context.

 

We set out four legal mechanisms that can be used to scrutinise, challenge or influence the deployment of AI tools in this context: data rights, discrimination law, collective bargaining via unions, and health and safety law (see [7]-[20]). We propose several ways to improve these existing frameworks: strengthening the transparency obligations owed to individuals and worker representatives, enhancing the scope of consultation and negotiation obligations, and making specific improvements to the requirement to conduct impact assessments (see [24]-[33]).

 

In the final section we outline five proposals that would lay the foundations for ensuring fair treatment for working people who are subject to management by AI:

 

  1. We recommend detaching rights related to AI usage from the current categories of employment status and providing these safeguards to everyone subject to management by AI (see [34]-[36]).
  2. A specialist regulatory body should be created, with powers to monitor, audit and inspect the use of AI in workplaces, along with responsibility for a licensing/certification regime for AI-based management packages (see [37]-[38]).
  3. Certain AI-based tools should be prohibited as automatically unfair uses of technology in the context of work, either entirely or when used for specific purposes (see [39]-[42]).
  4. We suggest the imposition of joint liability for any rights infringements between companies developing AI tools and employing organisations that implement them (see [43]-[45]). Developers would thereby be incentivised to build compliance with the UK’s legal frameworks into their AI systems.
  5. An AI and Workplace Technology Representative should be mandatory where AI is in use (see [46]-[48]). The Representative should be involved in the creation and audit of impact assessments, monitor the use of AI systems, be consulted about new systems or system changes, and be entitled to training and the assistance of an expert.

 

We hope the Committee finds this evidence helpful for its inquiry, and we are happy to answer any questions or provide further information if this would be of assistance.


Scope of Evidence

 

  1. This submission will address, in the context of AI managing people at work, the following questions formulated by the House of Commons Science and Technology Select Committee:

 

        Are current options for challenging the use of AI adequate and, if not, how can they be improved?

        How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

As specialists in the field of labour law and human rights, we focus on the legal framework that regulates the use of AI by employers (broadly defined). We refer to other jurisdictions where appropriate. In this submission, we suggest ways in which the current law can be improved. We also propose new avenues that would guard against unfair treatment and make the protection of individuals who are subject to AI tools in their working lives more effective.
 

  2. The focus of this submission is on the regulation of AI in the workplace. In recent years, artificial intelligence has been combined with employers’ increasing capacity to monitor individuals in real time, transforming how people at work are managed. Tasks that would previously have been performed by a human (with the use of some IT support) can now be entirely or mostly automated with the use of AI: from recruitment through platforms that scan documents for key phrases and analyse video recordings of candidates (Briône, 2020), through basic management functions such as establishing shift patterns and allocating work (TUC, 2020), to the evaluation of work and ultimately the termination of employment relationships (Collins, 2021). The rise of these algorithmic management practices presents urgent questions regarding the regulation of work (Atkinson, 2021).

 

  3. The Committee will no doubt already be familiar with the companies that have led the way in management by AI (e.g. Amazon, Uber, Deliveroo), but these practices are increasingly spreading to other sectors (Wood, 2021). In addition, many technology providers now offer software packages that perform managerial tasks on behalf of employers. For example:

 

        Cogito[1] is used in call centres to evaluate workers and provide them with feedback in real time.

        Percolata[2] combines data from sensors measuring in-store customer traffic with information on the weather, past sales patterns and similar variables to “optimise” the scheduling of staff in 15-minute windows, which can be adjusted by managers in real time through an app.

        Enaible[3] produces individual performance scores for workers based on data captured from their use of messaging, email, phone and other internal work systems, which can be used to rank workers and make recommendations on how the business’s workflow can be automated.

 

The Current Legal Framework

 

  4. The default legal position is that employers have a broad prerogative to introduce new technologies into the workplace, provided that this does not contravene some other duty of the employer or a right of their employees/workers. This position contrasts with other jurisdictions that require employee consent or a collective agreement to be in place before algorithmic management processes or surveillance measures are introduced (e.g. France and Italy).

 

  5. We will not discuss every legal constraint placed on employers in the context of introducing AI into the workplace. For example, we do not consider in detail the general common law duties of employers, such as their implied contractual duty to maintain trust and confidence with their employees, or their duties to provide a safe working environment. It is possible, however, that these might be used to challenge the use of AI at work: we would argue that non-consensual monitoring and evaluation of employees by AI, for instance, or the introduction of AI tools that lead to the intensification of work processes, could breach these respective duties. These general duties should therefore inform employer decisions about the introduction and operation of AI in the workplace.

 

  6. But as employers’ general duties are open-textured norms, it is difficult to specify clearly in advance what they demand of an employer. They are therefore a less useful guide as to how AI should be used in workplaces than more precise legislation. As a result, employers are more likely to be guided by the more concrete frameworks they are subject to, such as obligations:

 

        To handle workers’ personal data in accordance with the Data Protection Act 2018/UK General Data Protection Regulation;

        To pay national minimum wage and to respect a worker’s right to paid annual leave;

        To treat staff equally and without discrimination;

        Not to dismiss them unfairly or in breach of their contract; and

        To consult with unions and/or health and safety representatives as required by health and safety legislation or an agreement with a union.

 

Mechanisms for Challenging and Scrutinising the Use of AI at Work

 

  7. Workers have already taken up challenges directed at some of these points. Uber drivers, for example, have sought the legal status of “workers” to claim rights to the national minimum wage and paid annual leave (Uber BV and Others v Aslam and Others [2021] UKSC 5). More recent litigation against Uber relies upon discrimination law to challenge Uber’s implementation of facial recognition software as a pre-requisite to accessing the platform (Pa Edrissa Manjang v Uber Eats and Others, Case No. 3206212/2021). British workers have also brought claims in other jurisdictions, notably the Netherlands, to challenge the data protection practices of platforms like Uber and Ola. Some cases focus on transparency and data subject access requests, whilst others target automated decision-making about price-setting, ratings, routes and the deactivation (termination) of work.[4]

 

Here we focus on four avenues for challenging or scrutinising decisions made by AI. 

 

Data Protection

  8. There are three aspects of data protection that we consider of particular relevance to AI and work: the right to make a data subject access request (DSAR) under Art.15 UKGDPR; the right not to be subject to a decision based solely on automated processing under Art.22 UKGDPR and s.14 DPA 2018; and the requirement under Art.35 UKGDPR to conduct impact assessments where data processing poses high risks to the rights of data subjects. Art.15 sets out the right of data subjects to obtain their own data that is being processed, along with other information including the purposes of the processing, the categories of data concerned, information about the source of the data, and ‘the existence of automated decision-making, including profiling … at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.’

 

  9. Art.22 operates where a decision is based solely on automated processing, including profiling, and produces legal effects concerning the data subject or similarly significantly affects them. AI that is used to analyse performance at work is an example of “profiling”. Disciplinary decisions, for example to suspend access to work or to terminate a contract, have legal or similar effects. Other matters that can be fully automated – e.g. the allocation of work between workers, shift patterns, instructions, work pace – may not reach this threshold and therefore fall outside the scope of Art.22, despite their importance to the daily lives of workers.

 

  10. Under Art.22 and s.14 DPA, there are limited purposes for which automated decision-making (ADM) is permitted: it must be necessary to perform a contract between the parties, be based on the data subject’s explicit consent, or be authorised by law. Art.22 provides the right not to be subject to ADM outside of these purposes. In the employment context, with its inherent inequality of bargaining power, we must be sceptical of reliance on a data subject’s consent, and automated decision-making is not necessary to perform a wage-work bargain. Section 14 DPA sets out specific requirements where the processing is ‘authorised by law’, such as a decision to terminate a contract of employment that is authorised by the common law and by statute. In these cases, the data controller must: (1) notify the data subject as soon as reasonably practicable that automated decision-making has taken place; (2) permit the data subject to request that the decision be reconsidered or that the data controller take a decision not based wholly on automated processing; (3) consider any request made, including evidence provided by the data subject, and inform the data subject of the outcome and the steps taken to comply with their request. Art.22 and s.14 may be the subject of reform in future (see DDCMS, 2022).

 

  11. Controllers have an obligation to conduct a data protection impact assessment (DPIA) where processing poses a high risk to the rights and freedoms of data subjects. As with Art.22, only some uses of AI at work will be caught by this obligation. The assessment must include, amongst other content, an assessment of the risks to the rights and freedoms of data subjects and outline the measures envisaged to address those risks. A broad interpretation of the phrase ‘rights and freedoms’ is supported by the recommendations of the Article 29 Data Protection Working Party. We argue that it should be taken as referring to at least the rights contained in the European Social Charter and the European Convention on Human Rights, and we discuss this further below.

 

Unions and Collective Bargaining

  12. One way that employees and workers have already begun to challenge and scrutinise the ways AI is being integrated into work processes is through their unions and the legal frameworks that support and regulate collective bargaining (see Atkinson and Collins, forthcoming). Broadly, if a union wishes to reach a collective agreement with an employer it must achieve three things. First, the union must build up support in the “bargaining unit” that it seeks to represent. Next, the union will seek “recognition” – i.e. an agreement between the union and the relevant employer to negotiate a collective agreement. Finally, the union and the employer will negotiate and, hopefully, reach an agreement that regulates the relevant terms and conditions of service. During the second and third stages the union may, if it has sufficient support from its members, take industrial action in response to an employer’s reluctance to recognise or negotiate with the union.

 

  13. Trade union recognition, collective bargaining and industrial action are governed by the Trade Union and Labour Relations (Consolidation) Act 1992. It is important to note that a union must consist of ‘workers’, a legal category defined as an employee or any other individual who works ‘under any other contract whereby he undertakes to do or perform personally any work or services for another party to the contract who is not a professional client of his’ (s.296). This definition therefore excludes individuals who do not always personally perform the work themselves (e.g. where they can choose to send a colleague in their place) and those who run small enterprises or personal services companies. In addition, as the legislation only facilitates collective bargaining between workers and their direct employer, agency workers are unable to use it to negotiate with the end-user company to which they supply their services.

 

  14. The scope of matters that unions can seek to negotiate and strike about includes any terms and conditions of employment (broadly defined), matters of discipline, the engagement/termination/suspension of employment and the allocation of work (TULRCA 1992, s.178 & s.244). As we will see below, it is therefore possible for unions and employers to negotiate and reach agreements on the collection of data and use of AI in the workplace. What is very limited, however, is the union’s ability to apply pressure on employers to reach such agreements by taking industrial action. The legal regime that regulates strikes is highly restrictive and contains many technical hurdles that unions must overcome. One significant limitation, in the context of AI tools developed by third parties, is that the ‘trade dispute’ must be with the workers’ own employer. Workers would be limited to striking against their employer’s decision to adopt an AI management tool, but could not use strike action to challenge the third party’s design choices.

 

  15. Some unions and worker representatives have successfully sought and reached agreements that touch on the use of AI in work processes. One example is the agreement reached between the GMB union and the company Hermes, which covers some of the AI processes used by the company to manage its workforce of delivery drivers. The agreement required the company’s automated payment system to be reprogrammed to ensure workers receive at least the minimum wage and are automatically paid any bonuses they have earned, rather than having to claim for these retrospectively. The agreement also provides the union with the opportunity to conduct health and safety audits following incidents, thereby allowing it to flag up instances of algorithmic management practices that are causing safety issues, as well as introducing a process for workers to challenge decisions made by technology (Rolf et al., 2022). Another relevant agreement is that concluded between the Communication Workers Union and the Royal Mail Group.[5] The section titled ‘Technology’ contains safeguards for existing pay and hours if new processes are introduced and guarantees that human decision-making will continue to be central to operations. These examples show that, where employers are willing to recognise and engage with unions, it is possible to use collective bargaining to challenge and shape how AI is used in the workplace and to ensure compliance with other legal standards (e.g. the minimum wage).

 

Discrimination Law

  16. The Equality Act 2010 can also be used to challenge and scrutinise employers’ use of AI in the workplace. Where AI relies on protected characteristics such as race or sex in its decision-making processes, any individual treated less favourably as a result will have a claim for direct discrimination (Equality Act 2010, s.13). Direct discrimination claims will also be available where AI decisions are based on data points that act as precise proxies for protected characteristics. Alternatively, the AI tool may be challenged as having a discriminatory impact upon a group protected by the Equality Act. Under s.19, AI tools are indirectly discriminatory, and therefore unlawful, where they have the effect of putting groups with a protected characteristic at a ‘particular disadvantage’ which cannot be justified. A ‘particular disadvantage’ may exist, for instance, where members of the protected group are statistically overrepresented in the class of people detrimentally impacted by AI decisions or are subject to a higher rate of errors by the system. In these situations, an employer will be liable unless they are able to justify the use of the AI as a necessary and proportionate means of achieving a legitimate aim. In such cases, the Employment Tribunal must closely scrutinise and assess whether the benefits of the tool are sufficient to justify its use.
  17. The Equality Act requires claimants to show facts from which the tribunal could infer discrimination (s.136). The burden then shifts to the employer to demonstrate that they did not discriminate unlawfully. Individuals may believe that the use of an AI system has resulted in unlawful discrimination, but will often struggle to access or understand the internal functioning of the AI in order to present this to the tribunal. In some cases, empirical evidence of biased decision-making by the same (or a similar) tool could provide the tribunal with enough evidence to infer discrimination. For example, where the AI technology has been shown to be discriminatory in another context (such as facial recognition tools) or the training data contains historic bias or inequalities, this should be sufficient to meet the initial burden of proof. Nevertheless, this obligation to produce sufficient evidence of discrimination is likely to be a significant stumbling block in practice for many claimants wishing to prove that they have been subject to discriminatory AI decision-making.

 

Health and Safety

  18. The use of AI in the workplace can undermine the existence of a healthy and safe working environment. Constant workplace monitoring using AI tools can lead to stress, anxiety and burn-out, and the use of AI at work frequently contributes to the intensification of workloads, which is also associated with occupational health risks (Todolí-Signes, 2021). The use of AI-driven tools in Amazon warehouses, for instance, has led to these workplaces having almost double the industry average rate of workplace injuries (Strategic Organizing Center, 2022).

 

  19. Given these risks, an employer’s duties under health and safety law (Health and Safety at Work etc. Act 1974; the Management of Health and Safety at Work Regulations 1999) may constrain the use of AI. All employers have a duty to employees and others affected by their undertaking to run their business in a way that does not expose individuals to health or safety risks so far as is reasonably practicable (HSWA 1974, ss.2 & 3). In addition, employers must assess the health and safety risks of introducing AI-driven tools where these amount to a significant change in working practices, for the purpose of identifying the measures needed to comply with their duties under health and safety law (MHSWR 1999, reg.3).

 

  20. In addition, employers may need to consult employees or their representatives over the introduction of AI at work. There is a general duty to consult in respect of any measures that may substantially affect health and safety, as well as a more specific duty to consult on the health and safety implications of introducing new technologies into the workplace (Safety Representatives and Safety Committees Regulations 1977; Health and Safety (Consultation with Employees) Regulations 1996). Where there is a recognised trade union, consultation must take place with an appointed safety representative, whom employers are required to consult in developing and checking the effectiveness of health and safety measures (Health and Safety at Work etc. Act 1974, s.2(6)). If no union is recognised, employers may consult employees directly or via an elected representative (Health and Safety (Consultation with Employees) Regulations 1996, reg 17).

 

General problems that affect the existing legal mechanisms

 

  21. There are some problems that are common to the mechanisms outlined above and, indeed, to other employment law mechanisms that might be used to scrutinise or challenge AI at work, such as the law of unfair dismissal. The first barrier to workers challenging AI is a lack of transparency and understanding about how they are managed and the roles that AI, data and algorithms play in that process. This has been consistently highlighted as a major issue by academics and worker organisations. As the tools and data systems used to manage workers become more complex, the pool of individuals with the technical expertise to understand them becomes smaller.

 

  22. The scale of this issue is matched by very patchy awareness amongst the workforce of what rights they hold (whether employment rights or data rights) and how these may be used to challenge or scrutinise the use of AI-driven tools at work. When workers attempt to enforce their rights, either by opening conversations with their employer or by litigation, there are serious access to justice issues. Workers who want to retain their employment (particularly given the current economic climate) will be concerned about retaliation from managers if they raise concerns. In terms of accessing tribunals and the courts, fees are currently not payable but the process is lengthy, potentially confusing and intimidating, and unequal in terms of the representation that the respective parties can obtain, as legal aid is not available in most cases. Both the Employment Tribunal system and the Information Commissioner’s Office are also under-resourced, which contributes to difficulties accessing effective and timely justice.

 

  23. Workers also face further obstacles in the particular context of AI tools. First, where AI tools have been developed by the employer “in house” and are then used to manage the workforce (e.g. Amazon, Uber, Deliveroo), the employer will be unwilling to open up its AI mechanisms to scrutiny by workers or unions on the grounds that they are confidential or a trade secret. This reluctance was demonstrated by litigation in Italy: when Deliveroo was challenged for discrimination against trade union members, it preferred to pay the fine rather than open its management processes to scrutiny by the court. Second, an employer who “buys in” software packages or services from third parties may be unaware of how the AI operates, knowing only the outcomes that it seeks. The third-party company providing the package will be similarly protective, relying on trade secrecy and confidentiality to protect its product. This lack of transparency makes challenging the use of AI in the workplace incredibly difficult.

 

How could the existing mechanisms be improved?

 

Data-related Rights

  24. We recommend greater emphasis, in practice and in regulation, upon the provision of information to individuals ahead of the implementation of any AI processes and the commencement of decision-making assisted by AI. The Art.22 model requires controllers to inform data subjects about an automated decision-making process and to permit a challenge, but only after they have been subject to it. The DSAR model requires a data subject to have sufficient understanding of the processing to make a meaningful request, and to be proactive in doing so. Information about any automated processing should be given when data is obtained from the subject (UKGDPR, Art.13), but this does not appear to have had an impact on worker awareness of the kinds of AI tools that they are subject to. More proactive information sharing and transparency is required.

 

  25. For a more proactive model we can look to Art.6 of the EU’s Proposed Directive on Platform Work (PDPW). If enacted, Art.6 would oblige employers to inform staff about automated monitoring or decision-making systems that are in place. The required information would include the categories of monitoring, the types of decisions being made by technology, the parameters of decision-making, and the ‘grounds’ for major decisions, such as the decision to terminate, restrict or suspend a worker’s access to the platform. In addition, we believe that the law should expressly require that the information provided be comprehensible to those subject to the AI packages and that regular updates be provided. The information should also be shared well in advance, to permit workers to seek to influence or shape the use of AI tools, either through their union or by other methods (see proposals below).

 

  26. Full and effective compliance with any transparency requirements, ahead of initial implementation or subsequent changes to the AI tool, should be a precondition to the lawful use of AI tools. Concurrently, legal rights to an explanation, to challenge decisions made or aided by AI, and to seek human review of those decisions must be maintained.

 

The Expansion and Improvement of Impact Assessments

  27. It was argued above that employers’ obligation under the UKGDPR to undertake impact assessments already requires them to consider and address risks to workers’ human rights. However, the framework would be improved if the inclusion of human and labour rights in impact assessments were expressly set out in the legislation. This requirement could, for example, reference the rights contained in the European Convention on Human Rights and the European Social Charter.

 

  28. Another significant shortcoming in the current law’s effectiveness is that impact assessments are not publicly accessible, making it impossible for workers, unions and external organisations to audit them. This is problematic, as access to and scrutiny of impact assessments is key if they are to be a meaningful regulatory tool. A promising reform in this area would therefore be to introduce an obligation to publish impact assessments and make them available for external audit. One source of inspiration in this respect is New York City’s Bias Audit Law (Local Law 144 of 2021), which requires that a bias audit be undertaken for all AI systems that “substantially assist or replace” managerial decision-making by employers. These audits must be made publicly available within a year of the systems being introduced, and workers or job candidates subject to AI decisions must be notified at least 10 days before the system is used.

 

  29. If the extended impact assessments proposed here were made public, this would also provide crucial information to workers and their representatives about any disparate impacts of the tool on particular groups. This is important as it would facilitate challenges to the use of AI being brought under the Equality Act 2010 (see [17] above).

 

  30. Connected to our recommendations below, any impact assessment should be prepared in consultation with specialist representatives of the workforce (see [46] below), and that representative should have the ability to audit compliance with the impact assessment. Overarching oversight should be provided by a specialist regulatory body (see [37] below).

 

Improving the Framework for Collective Bargaining and Worker Consultation in the AI Context

  31. For clarity, decisions regarding the use of AI, employers’ data policies, and measures for monitoring workers in and outside of work should be included in the listed matters about which unions and employers can negotiate and unions can take industrial action. Transparency regarding these issues was mentioned in relation to individual workers above, but transparency obligations should extend to any union recognised to represent workers affected by AI or related management processes. These matters should, in addition, be added to the mandatory topics for collective bargaining where a union has won recognition under the procedure set out in TULRCA, Schedule A1.

 

  32. The right to consultation currently found in health and safety legislation should be extended. Rights to information and consultation should exist wherever new technologies are introduced that will have a significant impact on managerial processes or working conditions. These new duties should be coupled with the creation of mandatory workplace representatives for the purposes of consultation about workplace technologies (see [46] below).

 

  33. There are two further provisions contained in the EU’s Proposed Directive on Platform Work which we believe should be adopted:

 

        Under Art.15, platforms are under an obligation to ensure that workers can contact each other and their representatives through the platform’s infrastructure or other effective means, and States must legislate to prevent digital platforms from accessing or monitoring these communications.

        Under Art.9, platforms must consult worker representatives or the workers themselves regarding decisions likely to lead to the introduction of, or substantial changes in the use of, automated monitoring and decision-making systems; Art.9 also entitles the representatives to be assisted by an expert.

 

These obligations could apply more broadly to any workplace where technology plays a significant role and would be helpful in ensuring unions and other workplace representatives are well-equipped and able to discharge their functions.

 

Beyond Current Frameworks: Novel Proposals for Regulatory Reform

 

I.           The Need to Detach Digital Rights from Employment Status

 

  34. Scholars and activists have long been dissatisfied with the law’s answer to the question of who is entitled to certain employment rights. Unfair dismissal law is the preserve of “employees” only, whilst “workers” receive the protection of discrimination law, the minimum wage and the right to paid annual leave. These legal categories have already been shown to be too narrow in respect of the regulation of AI at work. Deliveroo riders, for example, have been denied the right to organise and bargain through a union as they fall outside the legal definition of “workers” (Independent Workers Union of Great Britain v Central Arbitration Committee [2021] EWCA Civ 952).

 

  35. Other frameworks take a different approach. Data protection rights attach to all data subjects, human rights to all humans, etc. In the Proposed Directive on Platform Work, some of the rights/obligations (transparency, human monitoring and human review) apply with regard to all persons performing platform work, regardless of their employment status (see Art.10 PDPW). We argue that a functional approach, rather than one centred on any particular employment status, is needed here to ensure that everyone affected by decisions or recommendations made by AI packages receives any rights to fair and decent treatment that are introduced in the UK.

 

  36. We consider these rights to include (at least) human rights and equality rights, data protection rights, rights to a safe and healthy working environment, and the consultation rights described above, as well as the benefit of any new measures such as those outlined below.

 

II.          The Introduction of a Specialist Regulatory Body

 

  37. Specialist governmental oversight of AI is urgently necessary. In the work sphere, there are already difficulties with disjointed regulatory oversight. Securing adequate funding for current and any proposed regulatory bodies is likely to be a significant challenge in the current climate. We argue that a body which benefits from technical, legal and workplace expertise would be best placed to monitor, audit, investigate and guide the use of AI tools in workplaces.

 

  38. The body could also hold responsibility for a certification/licensing regime, using its expertise to establish that any AI-based packages to be used in the UK comply with the range of legal rights and responsibilities in place here. There is much in common between this proposal and Recommendation 4 made by the APPG on the Future of Work (2021).

 

III.        The Prohibition of Certain AI and Monitoring Tools in Workplaces

 

  39. There are some tools, either already available or currently in development, that are so destructive of the human and labour rights of people at work that they should be regarded as ‘automatically unfair’ uses of technology that must be banned altogether. We argue that the implementation of these tools should be prohibited on a non-waivable basis, given the inequality of power between providers and performers of work and the economic and social dependence of the latter upon the former. Accordingly:

 

        The use of subcutaneous implants (microchips) in a person’s body, such as those offered by the UK’s BioTeq[6] or Sweden’s BioHax,[7] should be prohibited.

        The use of affect/emotion recognition on those applying for or providing work should be banned (as proposed by the Council of Europe’s Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, 2021).

        The use of any biometric data to assess performance or capability during recruitment or at work should be prohibited.

        Explicit prohibition of the use of AI tools for specific purposes is needed. A fuller survey would be necessary to establish the boundaries of this category, but it should at least include the monitoring of communications or analysis of data points for the purpose of suppressing union activities and the exercise of workers’ freedom of association. We would also query the use of biometric identification as a mode of identification required to access or perform work, given the bias issues currently raised about its use.

 

  40. We recommend that employing undertakings be prohibited from making termination or disciplinary decisions which rely substantially upon the recommendations of AI tools. This prohibition should extend to such decisions whatever their basis (including misconduct, capability/performance, or economic reasons). AI tools may legitimately be used as an “early warning sign” to flag emerging issues. From that point onwards, however, human interaction is essential to ensure that work is not dehumanised and that working relationships are based on true trust and confidence between working people and their managers.

 

  41. Where AI is used to identify problems with an employee’s performance, the employing organisation must undertake an adequate investigation prior to taking any disciplinary action. It must explain the situation to the individual affected: what the problem is and how the AI tool was used in the process. Employers must hear that person’s response, be understanding of their individual circumstances and provide an opportunity to improve, where appropriate. This process of explanation and engagement may lead to questions about the AI tool itself (e.g. that the expectations it sets are unachievable), in which case employers should correct the AI tool accordingly.

 

  42. In some sectors of work, the deployment of AI tools to manage staff has already gone too far. We argue that it must be rolled back urgently if we are to ensure decent and fair treatment of people who work in the UK.

 

IV.      Joint Liability between Developers/AI Providers and Employers for Infringements upon Rights

 

  43. In many cases, the use of AI tools introduces a third party, the developer of the tool, into central aspects of the management process. This creates a potential gap in accountability for any infringement of rights that occurs as a result of the use of the technology. The individual has a relationship (usually via a contract of some kind) with the employing organisation, but not with the developer. A complex chain of liability would be the only way that a developer could currently be held responsible for their role in an infringement: the individual affected would have to sue their employing organisation, who would in turn seek recompense from the developer for a portion of the damages due.

 

  44. This chain of liability (or the threat of it) is a very weak incentive for developers to take into account the human rights, non-discrimination, data protection, and employment rights of those who are affected by their tools. A joint liability model, as in the UKGDPR, would be far more effective. Under Art.82, a person who has suffered damage as a result of a data breach has the right to compensation from either the data controller or the data processor. Where more than one party is responsible for damage caused by processing, each controller or processor is held liable for the entire damage to ensure effective compensation (Art.82(4)). A defendant can avoid liability only by proving that it was in no way responsible for the event giving rise to the damage.

 

  45. A similar model should be adopted to regulate and attribute responsibility between employers who implement AI packages and their developers/marketers. If there is found to be a breach of an individual’s rights (for example, because the manner in which their contract was terminated was unfair or they were not paid the minimum wage), liability would be imposed on both parties. The only way for either party to avoid that liability would be to demonstrate that they took all reasonable steps to prevent the breach from occurring and held no responsibility for it. Imposing joint liability would incentivise developers to design packages that comply with the legal framework within which they will be used, rather than focusing solely on optimising work processes. Developers would (at the least) have to demonstrate an understanding of an employer’s obligations in the relevant jurisdiction and show how they took steps to ensure employers met those responsibilities.

 

V.         AI and Workplace Technology Representative

 

  46. Unions are seeking a more significant role in influencing and auditing the use of technology in the workplace, and the importance of worker representation has been recognised in draft regulations elsewhere (see Art.9, PDPW). In addition to enhancing bargaining and consultation in this context (see [31]-[33] above), we recommend the introduction of an AI and Workplace Technology Representative.

 

  47. In outline, the proposed AI and Workplace Technology Representative would be:

 

        Mandatory in any work context where AI or algorithms are used to monitor or make decisions in relation to individuals at work.

        Involved in the process of undertaking and auditing impact assessments of AI systems.

        Consulted in advance of any introduction or changes to technologies used in the workplace.

        Entitled to be assisted in their work by an expert (as seen in Art.9 PDPW) and offered specialist training to enable them to perform their role.

        Elected whether or not there is a recognised union in the workplace (as is the case for health and safety representatives).

 

  48. Pragmatically, given the confidentiality and trade secret issues surrounding AI technologies, it may be necessary for AI and Workplace Technology Representatives to sign appropriate non-disclosure agreements. We argue that these should take a standard form, drawn up by a regulator, to ensure that any trade secrets are protected but Representatives are still able to exercise their functions effectively. For example, there is a distinction between disclosing to the workforce at large the detail of how a system operates and disclosing the outcomes that it seeks to achieve and the data points that it will take into account. Whilst NDAs are controversial, they could have a valuable role in ensuring appropriate consultation with those subject to AI tools whilst protecting the genuine trade secrets of the companies that develop and sell the packages.

 

 

 

Bibliography

 

All-Party Parliamentary Group on the Future of Work (2021), The New Frontier: Artificial Intelligence at Work, Final Report, pp.1-32, online.

 

Atkinson, J (2021) ‘Technology Managing People’: An Urgent Agenda for Labour Law. Industrial Law Journal, 50(2), 324.

 

Atkinson, J and Collins, P (forthcoming) ‘Worker voice and algorithmic management in post-Brexit Britain’. Transfer: European Review of Labour and Research, online.
 

Briône P (2020), My Boss the Algorithm: An Ethical Look at Algorithms in the Workplace, ACAS Policy Paper, online.
 

Collins, P (2021) ‘Automated Dismissal Decisions, Data Protection and the Law of Unfair Dismissal’, UK Labour Law Blog, online.

 

Council of Europe’s Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (2021) Guidelines on Facial Recognition, online.
 

DDCMS (2022) ‘Data: a new direction - government response to consultation’, online.

 

Rolf, S, O'Reilly, J, and Meryon, M (2022) ‘Towards privatized social and employment protections in the platform economy? Evidence from the UK courier sector.’ Research Policy 51:104492.

 

Strategic Organizing Center (2022) The Injury Machine. How Amazon’s Production System Hurts Workers.

 

Todolí-Signes, A (2021) ‘Making algorithms safe for workers: occupational risks associated with work managed by artificial intelligence.’ Transfer: European Review of Labour and Research 27:433.

 

Trades Union Congress (TUC) (2020), Technology Managing People: The Worker Experience, online.

 

Wood, AJ (2021), Algorithmic Management: Consequences for Work Organisation and Working Conditions, Report for the European Commission, Joint Research Centre Working Papers Series on Labour, Education and Technology 2021/07, JRC124874, pp.1-26.

 

(November 2022)


[1] https://cogitocorp.com/

[2] https://www.percolata.com/

[3] https://enaible.io/

[4] See coverage at http://eulawanalysis.blogspot.com/2021/04/the-ola-uber-judgments-for-first-time.html and https://ukhumanrightsblog.com/2021/03/19/the-providers-of-ride-hailing-apps-and-their-drivers-another-judgment-from-amsterdam/

 

[5] RMG & CWU Key Principles, 2020:  https://www.cwu.org/news/rmg-cwu-key-principles-framework-agreement-the-pathway-to-change/#:~:text=The%20agreement%20recognises%20that%20in,will%20be%20in%20May%202023.

[6] https://www.bioteq.co.uk/

[7] https://biohax.com/