Written Evidence Submitted by TUC




The Trades Union Congress (TUC) is the voice of Britain at work. We represent more than 5.5 million working people in 48 unions across the economy. We campaign for more and better jobs and a better working life for everyone, and we support trade unions to grow and thrive.

The TUC believes that everyone should benefit from the rewards of technology and data, not just employers and technology companies.

Due to their collective nature, trade unions are uniquely placed to contribute to the ethical development and application of artificial intelligence (AI) technology in the workplace, as well as the equitable distribution of power over data.

Unions give voice to the worker experience of workplace technology. This perspective provides an important counterbalance to technological advancement and use of data driven purely by commercial interest.

The TUC advocates regulatory reform to protect and advance worker interests when AI is used at work. In addition, we believe that collective agreements negotiated between unions and employers provide an ideal vehicle for the co-governance of technology at work.

The TUC’s AI project

The TUC has been carrying out a two-and-a-half-year project investigating the use of AI in the employment relationship.

Our evidence to this inquiry is based on the outputs of this project. Therefore, the focus of our evidence is on the governance of the use of AI for the purposes of algorithmic management.

We began our AI project by convening a trade union working group with representatives from our affiliate trade unions, as well as advisers from international union confederations.

All the project outputs including a research and legal report, as well as a manifesto for change, can be found here: https://www.tuc.org.uk/AImanifesto.

We undertook research into the types of technologies being used to recruit and manage people at work and the implications for workers.

We then commissioned a legal report from Robin Allen KC and Dee Masters (Cloisters Chambers). This identified how existing employment law can provide redress for workers when AI at work goes wrong, and the gaps in the law and guidance. Based on the conclusions in this report, we published a manifesto, Dignity at Work and the AI Revolution. The manifesto set out our proposals for legislative reform, as well as statutory guidance.

In the most recent stage of our project, we have focused on training union representatives and officials and on publishing guidance, including on collective agreements and algorithmic management systems.

How effective is current governance of AI in the UK?

What are the current strengths and weaknesses of current arrangements, including for research?

The UK has in place existing systems of accountability with the potential to provide effective governance of the use of AI at work. However, these systems need substantial review and amendment to adequately protect workers. This should be undertaken immediately as workers are already experiencing both the benefits and harms of AI at work.[1] A longer-term programme of change may then be required, particularly as technologies develop further.


There is currently no UK legislation relating to employment rights that deals specifically with the use of AI at work.

This means that when AI goes wrong at work, workers and unions are reliant on existing employment law in order to seek redress.

These include existing rights relating to unfair dismissal, the employment contract, data protection (the UK General Data Protection Regulation), equality (the Equality Act 2010), human rights (the European Convention on Human Rights) and health and safety.

This legislation provides a system of employer accountability in relation to the use of AI at work to recruit and manage people. Existing rights are of the utmost importance as AI-powered technology has already been rolled out into the workplace, with workers experiencing harms as well as benefits.

However, there are significant gaps in the protections afforded to workers and so we believe the current system of legal governance to be inadequate.

We have outlined the gaps in provision below, along with our suggestions for how best to improve the existing legislative framework with amendments and additional statutory guidance.


There is no single regulator dedicated to enforcing rights relating to the use of AI at work, or indeed more generally. We believe that extensive cross-regulator collaboration is required for the effective governance of the use of AI at work and that this is not currently taking place.

In the employment sphere, relevant regulators include the ICO (Information Commissioner’s Office), the EHRC (Equality and Human Rights Commission), the Employment Agency Standards Inspectorate and HMRC’s National Minimum Wage Team. There may also be benefits to collaboration between employment-focused regulators and the Digital Regulation Cooperation Forum, as well as the Intellectual Property Office.

In addition, we highlight the important role of other organisations such as ACAS (Advisory, Conciliation and Arbitration Service) and the CDEI (Centre for Data Ethics and Innovation). Although not regulators, these organisations have a vital role to play in providing guidance on the use of AI at work in consultation with civil society, regulators and others.


Consultation with workers and their representatives before and after the introduction of AI into the workplace should be a cornerstone of AI governance. However, there is currently very little consultation taking place. This is despite there being relevant statutory consultation rights in place, in particular for recognised trade unions.

Therefore, we conclude that there are both problems with enforcement, as well as gaps in the existing provisions for consultation.

TUC polling commissioned from BritainThinks (which conducted an online survey of 2,209 workers in England and Wales between 14th and 20th December 2021) revealed that only 39 per cent of workers agreed with the statement “staff at my workplace are consulted when new forms of tech are introduced”.

Usdaw polled 3,000 of its members earlier this year. (Usdaw represents a range of different workers, including those in retail, road transport, warehouses and distribution: https://www.usdaw.org.uk/Join-Us/Who-can-join-Usdaw.) Nine in ten workers reported that their employer had failed to consult on the implementation of new technology.

Consultation levels appear to be low despite the existence of important statutory consultation rights, such as those relating to data protection impact assessments (Article 35 UK GDPR), consultation rights under the Information and Consultation Regulations (ICE), the Safety Representatives and Safety Committee Regulations 1977 and the Health and Safety at Work Act 1974.

While the consultation requirements under ICE are limited to specific scenarios, health and safety legislation provides important consultation rights for recognised trade unions.

Data protection impact assessments (DPIAs) provide for an important process that must be followed where data processing involves risk to the rights and freedoms of a data subject, and where automated processing (including profiling) significantly affects the data subject (Article 35.3 UK GDPR). Article 35.9 of the UK GDPR sets out that the data controller should seek the views of data subjects or their representatives on the intended data processing. In addition, Recital 71 of the GDPR elaborates that automated processing should be subject to suitable safeguards, which arguably include consultation with unions.

Prospect union has published guidance on the use of DPIAs[2] and the ICO recommends seeking the views of individuals and their representatives.[3] However, we understand from our affiliated unions that many employers are not aware of the obligations under Article 35 UK GDPR. In addition, there is no requirement to carry out an equality impact assessment which we believe to be fundamental to the effectiveness of the assessment.

Usdaw’s survey found that three quarters of workers believe that better consultation would make technology more effective in their workplace.

Collective bargaining

Our view is that collective bargaining provides an ideal system of co-governance for the use of AI at work.

Indeed, where there is a recognised trade union in a workplace, there will likely be a statutory duty to negotiate on this in accordance with S.178 of the Trade Union and Labour Relations (Consolidation) Act 1992 (TULRCA).

Collective agreements are a flexible vehicle through which workers, employers and unions can agree terms relating to the development, procurement, application and use of AI at work.

These terms can be tailored to suit a particular sector of work, taking into account sector specific technologies and issues. This is fully explained and illustrated in our guidance on algorithmic management systems and collective bargaining, People Powered Technology.[4]

However, collective bargaining coverage in the UK is currently low: the OECD put coverage at 26.9 per cent of employees with the right to bargain in 2019.[5]

The OECD report, Shaping the Transition[6] highlights the importance of social dialogue, concluding that “social dialogue can ease technological transitions, but faces general challenges” and suggests that “policymakers could consider promoting consultations and discussions on the AI transition with social partners and other stakeholders. They could also support social partners’ efforts to expand their membership to non-represented forms of work and employers like in the platform economy, as well as promote AI-related expertise, and digital education more generally, in the workplace for management, workers and their representatives”.

We advocate more government support for sectoral collective bargaining, along with a union right to workplace and digital access, in order to strengthen unions and collective bargaining as a system of governance at work.

Our affiliate unions Prospect and Community have both recently published guidance on the importance of trade union consultation and new technology collective agreements.[7]


Standardisation provides a form of governance of AI. However, we do not believe this can be effective without full involvement from civil society in the standardisation process. Currently, standardisation is industry-dominated and there are many barriers to civil society, including trade unions, contributing to the standard setting process.

What measures could make the use of AI more transparent and explainable to the public?

Existing legislation could be amended quickly and easily to improve transparency and explainability for workers.

To improve transparency, employers could be required to provide information about how AI is being used in the workplace within the statement of particulars that employers are required to give workers under Section 1 of the Employment Rights Act 1996.

Employers could also be required to maintain a register containing the same information and to keep this register up to date, making it available to employees, workers and job applicants.

In order to ensure that technology is understandable, we suggest that UK data protection legislation be updated to include a universal right to explainability in relation to AI or automated decision-making (ADM) systems at work, so that it is easy to understand how a system operates in a general sense.

In addition, the UK GDPR should be amended to include a right to a personalised explanation, setting out in an understandable way how an AI-powered system has been applied to the individual in the workplace, along with a readily accessible means of understanding when these systems will be used.

We also believe it is important to ensure that no international trade agreement allows intellectual property rights to impede transparency, thereby undermining the protection of employees' and workers' rights.

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

There should be a right to human review of decisions made at work by artificial intelligence. This right should be a comprehensive and universal right as the current right to human review of automated decision making under the UK GDPR is a limited one.

We are also of the view that there should be a statutory right to in-person, face-to-face engagement when important decisions are being made about individuals at work.

Trade unions should also have the opportunity to review and scrutinise algorithmic decision-making processes. This could be provided for in a collective agreement, with the work being undertaken through a technology committee with worker representatives. The parties could agree an algorithmic impact assessment, the terms of which could also be set out in a collective agreement.

Are current options for challenging the use of AI adequate and, if not, how can they be improved?

Trade unions play an important role in ensuring that workers can challenge AI decision making and seek redress when AI decision making goes wrong. This can be achieved through workplace representation during internal grievance procedures.

However, in the research stage of our project we identified significant barriers that workers face in challenging decisions made by AI. These include the employer assuming that the technology must be right, and difficulties experienced by both workers and employers in communicating effectively about the technology.

Training for managers, union reps and workers is essential to equip them with the skills and knowledge necessary to challenge the use of AI. In the Future of Work APPG’s report The New Frontier: Artificial Intelligence at Work[8], the APPG recommends a social partnership fund to provide funding for union-led training and the TUC’s AI project.

Similarly, there is significant awareness-raising to be done about existing processes such as DPIAs, to ensure that employers follow these where required.

Collective agreements can provide for structures in organisations through which the use of AI can be scrutinised and challenged, for example, through a technology committee with worker representatives and an agreed process for consultation, knowledge sharing and challenge. We advocate more support for collective bargaining and unions to facilitate this process.

Frequently, workers are simply unaware that AI is being used in relation to them, which means they are unable to challenge decisions made by AI.

In addition, remedies for breaches of existing legislation (for example, breaches of the UK GDPR) are extremely difficult for individuals to enforce, particularly without professional advice, support and adequate time and money.

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

We advocate the immediate implementation of the steps set out in our manifesto, Dignity at Work and the AI Revolution.[9] Our pragmatic programme of reform involves amendment of existing legislation along with other measures such as statutory guidance, new statutory rights and employment-focused ethical principles.

These steps can be achieved quickly. As has been widely documented, harms caused by AI are already taking place in the workplace and so there is a need to act quickly to close the gaps in the law and guidance.

We believe that all of the regulators we have listed above have a role to play in the regulatory oversight of AI in the employment sphere. In the circumstances, we emphasise the importance of cross-regulator collaboration. In particular, we highlight the important expertise and focus of the EHRC and the important perspective the organisation will bring to the work of other regulators in the field of AI. We also highlight the expertise of ACAS in producing valuable guidance relating to employment-based issues.

At the moment, as highlighted above, there appears to be inadequate enforcement of legal rights by regulators such as the ICO. We emphasise the need to fully resource all relevant regulators.

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

There is no legal framework in the UK that is specific to the use of AI at work. The existing employment law framework does provide some protections for workers, but there are significant gaps in the legislation and guidance. Our legal report (commissioned from Robin Allen KC and Dee Masters) gives a detailed analysis of the position,[10] but we set out a summary below.


The Equality Act 2010 has provisions that can apply to discriminatory use of AI, but this is only effective if claimants have full transparency of how the technology has operated.

The nature of the AI value chain, with different actors influencing the technology at development, procurement and application, means that it is difficult for a potential claimant to work out who is responsible for the discriminatory operation of an AI-powered system. This hinders enforcement of rights as under the Equality Act liability goes to the employer and not other actors in the value chain.

Transparency and explainability

The current legal framework does not require a high enough level of transparency and explainability in relation to AI at work as there is no universal right to explainability, or to a personalised statement of explanation.


Article 8 of the European Convention on Human Rights provides potentially effective protection against technology intruding on privacy. However, there is an absence of legally binding guidance for employers on the application of Article 8 to scenarios involving technology, and how to balance employer interests and the rights of individuals to a private life.

Data protection

The UK GDPR provides important protections for workers when AI is used at work. It is vital that these rights are not diluted (see recent government proposals for a draft Data Protection and Digital Information Bill). We argue the UK GDPR provides an important foundation on which to add additional rights, notwithstanding that some of the existing provisions require clarification.

For example, whilst the UK GDPR restricts the processing of data to specific grounds, it is important that these are clarified to ensure maximum protection for workers (in particular, the "necessary for the performance of an employment contract" and "legitimate interests" tests).

There are some valuable rights to information and protection against automated decision making in Articles 21 and 22 of the UK GDPR, but these are subject to numerous exceptions which we believe to be insufficiently defined.

Unfair dismissal

Unfair dismissal law provides effective protection against unfair algorithmic dismissal based on inaccurate information. However, unfair dismissal rights are far from universal, with anyone who does not have two years' service, or is not an employee, being excluded from protection.

Work/home boundaries

There is no statutory right to disconnect from digital devices in order to maintain effective work/home boundaries in the digital age.

Collective bargaining and consultation

Although there are some existing relevant provisions for mandatory consultation of recognised trade unions (see references above to health and safety legislation and the TULRCA 1992), and other forms of consultation, we strongly believe these do not go far enough. There is no express provision for mandatory consultation of trade unions before AI and automated decision-making systems are introduced into the workplace.

Trade union activities

Although there is effective legislation in place to protect trade union activities, this is only effective in so far as unions and workers can access understandable information about how technology is operating in relation to union-related activities at work.

Intellectual property rights and performance

We highlight the current inadequacies in intellectual property law in relation to individuals' rights (including those of freelance performers and educators) over their performances and other creative outputs when synthesised by AI. We refer you to the campaign conducted by Equity union, Stop AI Stealing the Show, and the demands in their manifesto.[11]

Is more legislation or better guidance required?

Yes, as we set out in our manifesto, we are strongly of the view that the existing legislation requires amendment and reform (for the reasons outlined above), as well as additional statutory guidance.

We refer you to our manifesto for the full detail of our proposals,[12] but summarise here the suggestions we have not yet dealt with in our evidence:

Worker voice

        a statutory duty to consult trade unions when introducing AI and automated decision making into the workplace


Equality
        write into data protection legislation that discriminatory data processing is always unlawful

        reverse the burden of proof in discrimination claims relating to AI

        route liability for discrimination up the AI value chain

        ensure no worker is subject to a detriment following inaccurate processing of data

        mandate for equality impact assessments in data protection impact assessments

        new statutory guidance on avoiding discrimination as a result of use of AI

Work/home boundaries

        A statutory right to disconnect

Data protection

        Statutory guidance on the operation of Articles 6, 21 and 22 of the UK GDPR.

Data rights

        Data reciprocity, giving workers the ability to gather and collectivise their data

Employment-focused ethical guidelines

        Employment-focused ethical guidelines developed in collaboration between multiple stakeholders, including the TUC.

What lessons, if any, can the UK learn from other countries on AI governance?

Several countries are now advancing AI-specific legislation.

We are of the view that it is important to adopt a context-specific approach, rather than a generalised one when legislating in relation to AI.

Whilst we believe the EU AI Act to be an important step in legislating for AI, the EU experience of drafting this Act demonstrates the need for employment-focused legislation when addressing issues relating to AI.

Examples of AI-related, employment-focused legislation recently passed in other countries include the Spanish "Riders Law",[13] which contains provisions on algorithmic transparency, and legislation recently passed in New York City under which AI tools are prohibited in employment decisions unless a bias audit has taken place and the use of the AI system is disclosed.

In Germany, works councils play a key role in the governance of AI at work, ensuring the active consultation of workers. The German Works Council Modernisation Act provides for employers to bear the cost of technology experts advising the works council where required.

The Canadian government has implemented a directive on automated decision making with mandatory impact assessments.[14] The process was agreed through consultation with academia, civil society, and other public institutions.

Finally, we refer the committee to the OECD's report Shaping the Transition: AI and Social Dialogue,[15] which provides an analysis of the role of social dialogue across OECD countries and highlights how social dialogue can help to foster inclusive labour markets and ease technological transitions.


(November 2022)

[1] Please see our research report for information on the range of implications for workers: Technology Managing People: The Worker Experience, TUC (tuc.org.uk).

[2] Data Protection Impact Assessments: A Union Guide, Prospect: https://prospect.org.uk/about/data-protection-impact-assessments-a-union-guide/

[3] How Do We Do a DPIA?, ICO guidance (ico.org.uk).

[4] People Powered Technology, TUC guidance (tuc.org.uk).


[5] UK Main Indicators and Characteristics of Collective Bargaining, OECD collective bargaining database (oecd.org).

[6] Krämer, C. and S. Cazes (2022), "Shaping the transition: Artificial intelligence and social dialogue", OECD Social, Employment and Migration Working Papers, No. 279, OECD Publishing, Paris, https://doi.org/10.1787/f097c48a-en

[7] Digital Technology: A Guide for Union Reps, Prospect (prospect.org.uk), and Technology Agreements: A Partnership Approach to the Use of Technology at Work, Community (community-tu.org).


[8] The New Frontier: Artificial Intelligence at Work, APPG on the Future of Work (futureworkappg.org.uk).

[9] Dignity at Work and the AI Revolution: A TUC Manifesto, https://www.tuc.org.uk/research-analysis/reports/dignity-work-and-ai-revolution


[10] Technology Managing People: The Legal Implications, TUC (tuc.org.uk).

[11] Stop AI Stealing the Show, Equity (equity.org.uk).

[12] Dignity at Work and the AI Revolution: A TUC Manifesto, TUC (tuc.org.uk).

[13] Boletín Oficial del Estado, Disposición 7840, BOE núm. 113 de 2021.

[14] Responsible Use of Artificial Intelligence (AI), Government of Canada (canada.ca).

[15] Krämer, C. and S. Cazes (2022), "Shaping the transition: Artificial intelligence and social dialogue", OECD Social, Employment and Migration Working Papers, No. 279, OECD Publishing, Paris, https://doi.org/10.1787/f097c48a-en