Written Evidence Submitted by the Institute for the Future of Work

(GAI0063)

 

The Institute for the Future of Work (IFOW) is an independent research and development institute exploring how new technologies are transforming work and working lives. We are submitting evidence to the Governance of AI consultation because we believe that the deployment of AI across the economy, and particularly in the workplace, has posed and will continue to pose new risks to individual rights unless novel legislative measures are implemented.

 

What measures could make the use of AI more transparent and explainable to the public?

              Methods of AI assurance should be used by those who design and deploy AI systems that have direct impacts on people's lives, particularly systems with implications for individual rights and wellbeing, such as AI in the workplace. Assurance is important for public trust in, and understanding of, these systems. Systems that are high risk, such as those that determine access to work or its terms and conditions, should always be subject to assurance and governance mechanisms.

              Prior to deployment of an AI system, algorithmic impact assessments (AIAs) can be used to forecast and analyse the risks a system poses. In previous work, we have proposed a four-stage process which involves: 1) identifying individuals and communities who might be impacted; 2) undertaking an ex ante risk and impact analysis; 3) taking appropriate action in response to the analysis; and 4) continuous evaluation to ensure that assessment and action remain ongoing and responsive. AIAs should be repeated as the system is modified and updated, as part of an ongoing process of monitoring and forecasting its harmful impacts.
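
By way of illustration only, the four-stage process above can be treated as a repeatable record rather than a one-off exercise. The Python sketch below is a minimal, hypothetical representation of such a record; the names and fields are our own illustrative assumptions, not a prescribed schema or implementation.

from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of the four-stage AIA process described above.
# All names, fields and example values are illustrative assumptions.

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    assessed_on: date
    affected_groups: list[str] = field(default_factory=list)    # Stage 1
    identified_risks: list[str] = field(default_factory=list)   # Stage 2 (ex ante analysis)
    mitigating_actions: list[str] = field(default_factory=list) # Stage 3
    review_due: date | None = None                               # Stage 4 (continuous evaluation)

def run_assessment(system_name: str) -> AlgorithmicImpactAssessment:
    """Record one pass of the four-stage process; repeat whenever the system changes."""
    aia = AlgorithmicImpactAssessment(system_name=system_name, assessed_on=date.today())
    # Stage 1: identify individuals and communities who might be impacted.
    aia.affected_groups = ["job applicants", "existing employees"]
    # Stage 2: ex ante risk and impact analysis.
    aia.identified_risks = ["indirect discrimination in shortlisting", "opaque scoring"]
    # Stage 3: appropriate action in response to the analysis.
    aia.mitigating_actions = ["adjust selection criteria", "publish candidate-facing explanation"]
    # Stage 4: schedule continuous evaluation so assessment and action remain ongoing.
    aia.review_due = date(aia.assessed_on.year, 12, 31)
    return aia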

              After deployment of an AI system, audits can be used to uncover and analyse harmful impacts that have occurred or are likely to occur based on how the system has been deployed. At present, many commercial auditing methods are statistical.

It is important to note that AIAs and audits ought to be sociotechnical, which is to say that they should combine a quantitative assessment of the technical aspects of relevant impacts, such as fairness and accuracy, with a qualitative assessment of the social implications of these factors. IFOW's Good Work Charter provides a list of the ten rights needed for 'good work', drawing on a review of legislative and ethical bases, and represents an example of a normative framework that can be used to structure sociotechnical assessments.[1]

To take the example of the potential for bias and discrimination: fairness is widely understood in the science and technology studies community to be contextual in nature. The definition of fair treatment and fair outcomes for individuals and groups will vary according to the goods and services in question. It is for this reason that standardised, statistical measures of fairness cannot be applied universally to different AI applications, which can have wildly different implications for fairness.
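
To make this concrete, the short Python sketch below, using entirely synthetic numbers of our own invention, computes two widely used statistical fairness measures for the same set of hiring decisions. Under one measure (demographic parity) the decisions appear balanced, while under another (equal opportunity) a large disparity appears; which measure is appropriate depends on context, which is why no single standardised metric can settle what is 'fair' in every application.

# Synthetic example: the same hiring decisions scored by two common fairness metrics.
# Groups, decisions and labels below are invented for illustration only.

decisions = [  # (group, shortlisted, actually_suitable)
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1),
]

def selection_rate(group: str) -> float:
    """Share of candidates in the group who were shortlisted, regardless of suitability."""
    rows = [d for d in decisions if d[0] == group]
    return sum(d[1] for d in rows) / len(rows)

def true_positive_rate(group: str) -> float:
    """Share of suitable candidates in the group who were shortlisted."""
    rows = [d for d in decisions if d[0] == group and d[2] == 1]
    return sum(d[1] for d in rows) / len(rows)

# Demographic parity: compare shortlisting rates across groups.
parity_gap = selection_rate("A") - selection_rate("B")               # 0.50 - 0.50 = 0.00
# Equal opportunity: compare shortlisting rates among suitable candidates only.
opportunity_gap = true_positive_rate("A") - true_positive_rate("B")  # 1.00 - 0.33 ≈ 0.67

print(f"Demographic parity difference: {parity_gap:.2f}")
print(f"Equal opportunity difference:  {opportunity_gap:.2f}")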

For instance, the importance and operationalisation of fairness in the finance sector, in the workplace, or for search engine outputs will demand different levels of rigour and respect for fundamental rights, given the differences in scope of application and potential harmful impact. The balancing of rights between employer and employee demonstrates the contested nature of fairness, and the challenge of arbitrating between different rights holders in sharing the benefits of, and securing freedom from the harms posed by, these technologies.

The same holds true for explainability, another important sociotechnical aspect of AI systems. There is no objective threshold at which an AI system is considered 'explainable': this feature is not binary in nature but sits on a spectrum, namely the degree to which the system is understandable to the humans who operate it and are affected by it. The contents of a valid explanation for decision recipients will vary according to the implications of the decision.

In the context of the workplace, we believe that systems used to make decisions which determine access to, terms and conditions of, or quality of work ought to be explainable to the extent that accountable agents using them understand how decisions have been reached, and workers ought to understand the functioning of systems sufficiently to exercise their rights in the workplace.

Employers should disclose key information about the functioning of AI systems, such as the nature, purpose and scope of the system; the outputs it produces (e.g. recommendations, employee scores); and how to access further information, contest automated decisions or provide feedback. A more comprehensive model of disclosure, for instance for more advanced systems with a significant impact on work and working lives, would include more granular disclosures: the inputs, criteria, variables, correlations and parameters used by the system in producing those outputs; the logic used to produce those outputs, including but not limited to the weightings of different inputs and parameters; whether and how the system is operated by third parties (e.g. algorithmic hiring providers separate from the employer); and relationships of accountability, within the organisation and beyond, for AI harms.
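
Purely as an illustration of the two tiers of disclosure described above, the Python sketch below sets out one hypothetical way an employer's disclosure record could be structured, with the comprehensive tier extending the baseline tier. The field names are our own assumptions for illustration, not a proposed standard.

from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical disclosure record reflecting the baseline and comprehensive
# tiers described above. Field names are illustrative, not a proposed standard.

@dataclass
class BaselineDisclosure:
    nature_and_purpose: str            # what the system is and why it is used
    scope: str                         # which decisions and which workers it covers
    outputs: list[str]                 # e.g. recommendations, employee scores
    further_information: str           # how to access further information
    contest_route: str                 # how to contest automated decisions
    feedback_route: str                # how to provide feedback

@dataclass
class ComprehensiveDisclosure(BaselineDisclosure):
    inputs_and_parameters: list[str] = field(default_factory=list)   # criteria, variables, correlations
    decision_logic: str = ""                                         # how outputs are produced, incl. weightings
    third_party_operators: list[str] = field(default_factory=list)   # e.g. external hiring providers
    accountability: list[str] = field(default_factory=list)          # who is accountable for AI harms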

Explainable systems, and accompanying transparency from decision makers (employers), ought to facilitate informed decision-making in an environment characterised by power asymmetries; this is essential given the significant implications of AI-powered decision making for material and psychological wellbeing in the workplace. This will require new rights and duties to supplement existing protection under the GDPR, as IFOW proposes in our reports.[2]

 

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

        Are current options for challenging the use of AI adequate and, if not, how can they be improved?

 

              Current options for challenging the use of AI, particularly in the workplace, are principally based on data protection rights; however, a range of other legal, regulatory and ethical codes are also relevant to the governance of AI in the workplace (see Charter Review). Article 22 of the GDPR serves as the basis of protection from harms caused by automated decision making, the scope of which includes AI systems. Article 35 provides for the involvement of data subjects in decisions about design. However, both articles are currently under review, principally owing to confusion as to what constitutes 'meaningful' human review. The method we propose for AIAs seeks to overcome this by creating an enduring infrastructure for human oversight, from before deployment and throughout the lifetime of an AI system in the workplace.

              AI applications invariably involve human decisions in design and deployment, and there is a need for greater accountability for the implications of these choices. However, there remains a lack of targeted legislation to require these broader forms of assessment and to address the harms of semi-automated decision making. For example, algorithmic hiring systems, currently widely deployed by Fortune 500 companies, are semi-automated systems that assess candidates for a role and leave the final hiring decision to a human. For the time being, this process still involves human oversight, but a human having the final say does not guarantee protection from bias and discrimination, or from a lack of transparency, explainability or justifiability, for instance through the use of pseudoscientific hiring assessments (e.g. 'personality' assessments).

              Rigorous procedures of stakeholder engagement and AI assurance will be required to secure public trust and the equitable, evidence-based design of AI systems. The current data protection regime presents a lacuna in governance and does not provide adequate options for challenging the use of AI. An individual rights-based regime that relies on a tort model of litigation also acts as a disincentive and a barrier for individuals seeking to draw attention to, and receive compensation for, the harms they experience, particularly if they belong to vulnerable groups such as workers, who face an inherent power imbalance in the workplace.

              Traditional divisions of responsibility between the public and private sector are also being broken down by the growth of public-private partnerships in the deployment of new technologies. Currently, a duty to consider equality proactively and to design systems with equality in mind is in place for the public sector, in the form of the Public Sector Equality Duty. We believe that given the extent of data collection and the potential severity of harms perpetuated by AI systems in critical decision contexts, such as the workplace, proactive equality duties ought to be mandated for private sector decision makers as well.

              Any serious attempt to review the validity of AI-assisted decisions will need to be based on systematic, ongoing monitoring and impact assessment, as opposed to an individualised model of reporting, and apply across public and private sector organisations.

 

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

The Digital Regulation Cooperation Forum (DRCF) provides a promising start, although we believe there is not enough discussion of, and protection for, workers in the remit of the DRCF's algorithmic processing workstream, which largely focuses on harms to citizens and consumers. The Health and Safety Executive, the proposed Single Enforcement Body for employment, and the Equality and Human Rights Commission should be included in the DRCF, or alternatively should be consulted and work as closely as possible with it, to ensure that impacts on work and people are properly understood and taken into account, and that there is meaningful accountability and redress for any harms.

As an example, the DRCF ought to provide clear, single guidance for third parties, signed off by all the regulators, rather than different pieces of guidance from different regulators. Although it is understood that the regulators in the DRCF span different remits and would thus require different types of audit, the DRCF should, where possible, provide single pieces of guidance on the procedural requirements for audits: for example, how they should be conducted across the supply chain; how stakeholders should be consulted and notified; the processes for triggering internal or external audits; how audits should be reported to the regulator, the public or otherwise; and so on.

While the content of the audits themselves is understood to vary according to the differing requirements of each regulatory regime, it is possible for there to be more consistency and uniformity, certainly in procedure but also in substance. Cross-cutting minimum 'red lines' and goals are needed and should be spelled out in primary legislation. More detailed guidance at regulator and sectoral level should be developed and updated over time.

 

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

        Is more legislation or better guidance required?

See above. A new overarching framework for responsible innovation and governance will be required, with additional protection for the workplace.

 

What lessons, if any, can the UK learn from other countries on AI governance?

              Regulation of AI harms requires a careful balance between general-purpose, omnibus legislation and a targeted, sectoral approach addressing the unique demands and harms of AI deployment in a given industry or socioeconomic space. We list below several examples of omnibus and sectoral legislation from North America and the EU that propose sensible, targeted interventions aiming to increase the accountability and transparency of AI-assisted decision making, either in general contexts or in the workplace. Many of these are at the proposal stage, but some have been enacted; all are drawn from our Policy Tracker of international developments in AI workplace regulation.[3]

              Common principles across these pieces of legislation, and the analogous harms they target, include fairness, explainability, accuracy, transparency, and accountability. We believe that the principle of human-centred AI, or consideration of the public good, is another important addition.

              Common governance mechanisms include: the use of assurance, such as AIAs and bias audits; requirements for meaningful human oversight; mechanisms for decision correction by both accountable agents and affected persons; mechanisms for appeal and feedback; transparency and disclosure of key information about AI systems to decision recipients and/or the public; and the restriction or banning of particularly harmful applications.

 

Sectoral legislation

        2020 Labor and Employment - Use of Facial Recognition Services - Prohibition (Maryland)

        prohibits employers from using facial recognition in interviews unless written consent is provided by the candidate

        2020 Automated employment decision tools (NYC)

        requires employers to conduct bias audits and disclose in advance the existence and assessment features of automated employment decision tools

        2022 Workplace Technology Accountability Act (California)

        places restrictions on the use of automated decision-making in the workplace, and mandates the disclosure of certain information by the employer to employees when these systems are deployed

        2021 Platform Work Directive (EU)

        proposes novel wide-ranging rights for platform workers, with provisions on employment status, algorithmic management, human oversight of automated decisions and human review of significant decisions. Remedies for platform workers are also proposed, including rights to redress, protection from dismissal and communication channels amongst workers

 

Omnibus legislation

        Stop Discrimination by Algorithms Act of 2021 (DC)

        focuses specifically on algorithmic decision making with 'adverse action', defined as a denial, cancellation, or other adverse change or assessment regarding an individual’s eligibility for, opportunity to access, or terms of access to important life opportunities

        2022 Algorithmic Accountability Act (USA)

        regulates automated decision-making that has any legal, material, or similarly significant effect on consumers' lives

        although the Act is aimed at consumers and is thus enforced by the Federal Trade Commission, such general-purpose legislation will certainly have impacts on the world of work, and it names employment as a critical area of concern

        2019 Directive on Automated Decision-Making (Canada)

        aims to ensure that automated decision-making systems used in the public sector are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions

 

 

(November 2022)


[1] Institute for the Future of Work. ‘The Good Work Charter’, 2018. https://uploads-ssl.webflow.com/5f57d40eb1c2ef22d8a8ca7e/6156c8af4a21842f7ef19680_IFOW%E2%80%93The-Good-Work-Charter.pdf.

[2] Stephanie Sheir et al., ‘Algorithmic Impact Assessments: Building a Systematic Framework of Accountability for Algorithmic Decision Making’ (Institute for the Future of Work, November 2021); Reuben Binns et al., ‘Mind the Gap: How to Fill the Equality and AI Accountability Gap in an Automated World’ (Institute for the Future of Work, 2020).

[3] Stephanie Sheir, ‘Tracking International Legislation Relevant to AI at Work’, Institute for the Future of Work (blog), n.d. https://www.ifow.org/publications/legislation-tracker.