Context
1. This submission is made on behalf of Robin Allen QC and Dee Masters. We are practising barristers in the equalities field based at Cloisters in London. In 2019, we formed a legal consultancy which focuses on the development of AI within an equality, data protection and human rights framework. Most recently, we produced a training programme for the Council of Europe and the CDEI, aimed at UK regulators, on the equality implications of new forms of technology, alongside an open opinion for the TUC on ensuring that workers' rights are protected in a post-pandemic world increasingly reliant on technology. More information about our work is here.
2. Whilst we advise organisations on the use of AI so that it complies with the existing legal framework, we also take a keen interest in the further development of regulation in the UK so that “good” AI flourishes. Our overarching philosophy is that public actors and private companies need clear, pragmatic and effective regulatory frameworks, because they provide a safety net within which “good” AI can be developed whilst also protecting the fundamental rights of the public. More information about this approach is available here.
Use of technology in the UK
3. One real problem in the UK is the lack of transparency concerning the use of technology in decision making. However, we are aware of technology being used in the application of the law in decisions concerning the level of support which self-employed individuals could receive under the Self-Employment Income Support Scheme (SEISS), a grant introduced following the pandemic. More information about this scheme and the use of technology within it can be found here. Please note that the Court of Appeal will shortly be hearing an appeal in relation to the SEISS.
4. We are also aware of Automated Decision Making (ADM) being used in the administration of the Settled Status scheme and as part of local government Risk Based Verification (RBV) in relation to eligibility for housing benefits. Our legal opinion in relation to the equality implications of these uses of technology can be found here.
5. We are also aware of Machine Learning (ML) algorithms being used in the criminal justice system to make predictions about the risk of individuals re-offending, and by the police through the deployment of Facial Recognition Technology (FRT). We refer you to our report entitled “Regulating for an Equal AI: A New Role for Equality Bodies”, produced for Equinet (the European Network of Equality Bodies), which is available here. We summarise the use of algorithms, AI and ML in the UK, alongside the rest of Europe, at pages 91 to 95.
The utility of technology
6. We believe strongly that technology can and should be embraced to achieve public goods. Undoubtedly, technology brings with it enormous potential benefits. For example, Facial Recognition Technology deployed at passport gates in airports can be used to allow passengers entry quickly, thereby avoiding queues and reducing the spread of Covid.
7. But there are always “red lines” which should never be crossed, regardless of the utility of the technology, when it comes to the application of the law. Organisations must be careful to remember that simply because a tool is useful, it does not mean that it is lawful. Of particular importance is the requirement – contained in the Equality Act 2010 – that algorithms, AI and ML must not discriminate in relation to the protected characteristics (sex, age, disability, sexual orientation, etc.). This “red line” should never be crossed. Moreover, ensuring that “red lines” are respected will create trust which, in turn, allows good uses of technology to flourish.
Equality implications
8. In our work, we have had the opportunity to study some uses of technology in the application of the law and have identified serious concerns about their ability to comply with the Equality Act 2010. We refer you to our open legal opinion for the TLEF here, in which we discussed the equality implications of using algorithms to administer the Settled Status scheme.
9. It should be noted that a lack of transparency around the use and application of these technologies is particularly problematic when it comes to assessing any equality implications.
Better regulation
10. In the EU, there is already a commitment to better regulation of AI so as to ensure that the benefits of technology can be realised without sacrificing fundamental rights, and the EU has proposed an AI Regulation. Inevitably, any standards introduced within the EU will have an indirect effect on the UK, as businesses with a pan-European reach develop products which comply with EU laws.
11. The Council of Europe, of which the UK remains a member, is also keen to regulate in this area. More information about its specialist body, CAHAI, is here. The Council of Europe commissioned us to write a training programme for regulators for use in all 47 member countries. This programme was delivered in the UK at the beginning of 2021 and is currently being delivered in France and Spain. The aim of the programme has been to ensure that regulators of all kinds are aware of the implications of AI in their field of regulation and, in particular, of the kinds of discrimination that can arise from the wrong use of AI.
12. In the course of this work, we have become increasingly aware of the lack of understanding, among regulators and the general public alike, of the way in which AI systems are being used in their field of activity. This is a particular kind of transparency problem, which we call the problem of observability. In short, many members of the public, as well as regulators, do not understand the extent to which AI systems are being used to shape important decisions about and for them. This does not mean that such systems are necessarily wrong, but if they are not understood, then the assurance that they comply with legal norms depends on the degree to which developers and users understand the ethics and legality of what they are doing.
13. In the UK, there is the start of a debate, most significantly through papers produced by the CDEI, such as its Review into algorithmic decision-making (November 2020), to which we contributed. We commend the work of the CDEI in this field as a genuinely valuable contribution to understanding the nature of these problems and to providing important information as to how they should be approached.
14. In our view, the following form of regulation would be invaluable in ensuring both meaningful transparency around the use of AI and related technologies and the creation of useful technology which respects fundamental principles such as the principle of equality:
15. Most importantly, we believe that a “joined up” regulatory approach is required. AI and related technologies impact on numerous areas of the law, e.g. data protection, equality law, public law and privacy law. To date, there has been too much thinking in “silos”, rather than bringing together best practice across all regulators and areas of law in order to provide authoritative guidance to organisations using new technology.
16. If the Committee would find it useful, we would be happy to give oral evidence to discuss any or all of these issues.
3 September 2021