Written evidence submitted by the Institute for the Future of Work
Institute for the Future of Work
The Institute for the Future of Work (IFOW) is an independent research institute which explores how new technologies are transforming work and working lives. We research and develop practical solutions to promote people’s future wellbeing and prosperity.
IFOW was founded in 2018 by Nobel prize-winning economist Sir Christopher Pissarides, technologist Naomi Climer CBE and employment barrister Anna Thomas. We work at the intersection of government, industry and civil society to shape a fairer future through better work.
IFOW have published a range of research on how algorithmic systems, often drawing on online information, are impacting access to, terms and conditions of, and quality of work. In May 2021 IFOW published ‘The Amazonian Era: How algorithmic systems are eroding good work’ examining how the ethos, practices and business models that emerged within the gig economy have been packaged up as algorithmic management systems and are now being used in a broad range of businesses across the UK[1]. This revealed a range of impacts, including harms to workers, driven by organisational change associated with the deployment of algorithmic systems.
Prior to this, in October 2020, IFOW published ‘Mind the Gap: The Final Report of the Equality Task Force’[2]. This came from a cross-disciplinary Equality Task Force, chaired by Helen Mountfield QC, and highlighted the gaps in our current regulatory frameworks for mitigating the harms to equality posed by machine learning systems that draw on online information.
Introduction
The Online Safety Bill establishes a new regulatory regime to address illegal and harmful content online, with the aim of preventing harm to individuals in the UK. The focus of the draft Bill is social media platforms and the regulation of user-generated content. We believe that this narrow focus on social media platforms fails to take into account a large number of platforms which can cause people harm, particularly platforms operating in the workplace. The draft Bill also focuses exclusively on harmful content, without any consideration of the impact of harmful decisions.
While the Government have recognised the role of algorithms in online harms in the Online Harms White Paper[3], they have failed to consider the impacts of automated decision making and the way predictive classification drives these processes. As a result of this, accountability for algorithmic decision making has not been included in the draft legislation.
As drafted, the Bill does not take proper account of the impacts of technological change in the workplace. The Government’s white paper notes that behavioural features, such as ‘likes’, can be powerful tools to keep people online. These same nudge-based methods are used in work-based platforms to incentivise people to continue working, a practice we refer to as gamification. This type of behaviour should have been taken into consideration and included in the legislation.
We agree that the digital economy urgently needs a new regulatory framework to improve our citizens’ safety online. We endorse the ambitions to develop a culture of transparency, trust and accountability within the regulatory framework.
There is growing recognition of the need to ensure accountability for algorithmic systems, which are the engine of the online platforms that are the subject of regulation in the draft Bill. The Bill highlights the risks and harms to individuals and society arising from platforms, principally major social media firms, which mediate content within what is increasingly understood as an online ‘public realm’.
The draft legislation fails to recognise the increased use of platforms at work, in the ‘private’ realm, which source information from online platforms, and the cross-over between the public and private realms. This information can inform algorithmic decisions about access to work, its terms and conditions, and its quality. Such systems operate within private businesses but, in turn, drive harms to citizens.
We believe this Bill offers the Government the opportunity to address the urgent issues that have arisen from the adoption of AI in the workplace. This fits well within the scope of the legislation, and our research, particularly ‘The Amazonian Era: How algorithmic systems are eroding good work’, illustrates the urgent need for legislative reform to address this specific type of online harm, which is having a severe impact on the lives of workers across a broad range of industries[4].
In the next section we address some of the specific questions outlined in the terms of reference for the call for evidence.
Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill, and what should it be?
We believe that there should be an explicit definition of ‘harms’ within the Bill, and that it should include harms impacting workers, promote human-centred automation and take into account the equalities implications of automated decisions.
The Government’s White Paper on online harms suggests that in scope is all online content or activity that ‘threatens our way of life in the UK either by undermining national security, or reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration’.[5] It follows that the equality impacts of decision making, which are core to our shared rights, are within the scope of what the Government describe as ‘online harms’.
As the use of online systems and automated decision making proliferates, many online harms are only now beginning to be explored and understood. We have identified harms from the use of algorithmic systems, operating via the internet and mediated by platforms, across the ten core principles of good work established in IFOW’s ‘Good Work Charter’[6]. These harms are economic, social and material. We believe the process to determine harm caused by online systems should be via Algorithmic Impact Assessments. We propose that the Good Work Charter could be incorporated in the Bill, or a schedule to it, as a checklist to identify important areas of potential impact.
Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?
The Institute for the Future of Work have identified the importance of creating a pre-emptive corporate duty to perform Algorithmic Impact Assessments. Though the duty we propose would apply to the corporations using this type of algorithmic technology, it would also put pressure on tech companies to consider the potential harms caused by the technology they are developing.
In IFOW’s ‘Amazonian Era: How algorithmic systems are eroding good work’ we interviewed developers of ‘connected worker platforms’: platforms that manage workers through algorithmic systems. The developers we spoke to said that the people who would be affected by the systems they were developing were never mentioned in discussions during development; instead, the focus was on ‘perfecting systems’[7]. The corporate duty we have outlined would force developers to consider the people impacted by their platforms during development, in order to keep the corporations they serve happy with the product provided. Please see Part 5 of the attached report for further information.
What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?
The Institute for the Future of Work believe that the Bill should take into account a larger number of platforms which can cause people harm, particularly platforms operating in the workplace. These harms must be addressed, rather than ignored, so that data-driven technologies and AI can fulfil their potential and work in the public interest. We say this with reference to the harms caused by algorithmic systems such as ‘connected worker platforms’. This would mean focusing regulation on harmful decisions, decision-making algorithms and their impacts, rather than exclusively on harmful content.
The draft Bill fails to address the significant online harms posed to workers and should include workplace algorithmic accountability measures. These should include a new corporate duty to produce pre-emptive Algorithmic Impact Assessments; a new transparency duty for workplace AI; a new duty to provide workers with a ‘full explanation’ of workplace AI; a new individual right for workers to be ‘involved’ in decisions around the introduction of AI; and collective rights for unions to exercise all of these duties. The failure to acknowledge the harms caused to workers through the lack of regulation of AI in the workplace is a significant omission, particularly given how many people are harmed.
What are the lessons that the Government should learn when directly comparing the draft Bill to existing and proposed legislation around the world?
There are global examples of action being taken to better regulate the impacts of algorithmic systems in the workplace. The Canadian Government have taken steps to address the negative equalities implications of algorithmic systems in the public sector by mandating Algorithmic Impact Assessments.[8] To facilitate this, they have developed a questionnaire that determines the impact level of an algorithmic system so that suitable adjustments can be made. This work is ongoing.
In addition to the Canadian example, there are also examples of algorithmic accountability for public sector organisations in the Netherlands, France, New Zealand and Chile, and at a local level in Amsterdam and New York City. There is increasing consensus that clear goals, binding legal frameworks, defined objects of governance, and the involvement of those affected, as well as shared terminologies, will achieve the best outcomes.
Setting the appropriate scope of policy application supports adoption. Existing approaches for determining scope, such as risk-based tiering, will need to evolve to prevent under- and over-inclusive application; transparency reports must be detailed and audience-appropriate; policies should prioritise public participation; and regulation benefits from institutional coordination.
This international evidence base provides a good foundation for UK legislators.
Conclusion
To conclude, the Institute for the Future of Work believe that this legislation would be improved by expanding the scope of the Bill to include a broader range of platforms and to address harmful decisions as well as harmful content. Algorithmic accountability should be embedded within this legislation, specifically in the form of mandatory Algorithmic Impact Assessments aimed at pre-emptive action and raising standards of practice.
[1] https://www.ifow.org/publications/the-amazonian-era-how-algorithmic-systems-are-eroding-good-work
[2] https://www.ifow.org/publications/mind-the-gap-the-final-report-of-the-equality-task-force
[3] https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper
[4] https://www.ifow.org/publications/the-amazonian-era-how-algorithmic-systems-are-eroding-good-work
[5] https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper
[6] https://www.ifow.org/publications/the-ifow-good-work-charter
[7] https://www.ifow.org/publications/the-amazonian-era-how-algorithmic-systems-are-eroding-good-work
[8] https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592#cha6