Written Evidence Submitted by Joseph Alfieri

(GAI0062)

 

 

Executive summary

-          The current legislation regarding Artificial Intelligence (AI) is not suitable, in the long term, for ensuring that the use of AI is safe and ethical.

 

-          The government's proposals, outlined in both the National AI Strategy and the AI regulation paper, overlook ensuring safe design in favour of preventing dangers in AI's application, which may lead to unnecessary harm to the public.

 

-          The changes I propose centre on a central regulatory authority that can coordinate other regulators, liaise with AI organisations, oversee the cross-sectoral principles and issue certifications for AI developers.

 

1. Introduction

 

1.1 I am entering this submission as a student of Politics at the University of Exeter who is interested in the future of AI in the UK. AI has the potential to radically transform society in ways not seen since the Industrial Revolution, with a 10% increase in GDP already predicted by 2030[1]; if the UK is effective in governing AI, it could have transformative effects on the economy and society of the UK and overseas.

 

1.2 This submission shall look at three key areas of AI governance. Section 2 will look at the effectiveness of AI governance in its current form, Section 3 will analyse the effectiveness of key government recommendations from the National AI Strategy and the AI regulation policy paper, and Section 4 will set out recommendations for the effective governance of AI.

 

2. Is the current governance of AI effective?

 

2.1 As it stands, the UK has a satisfactory approach to AI governance. Currently, there is no regime for regulating and governing AI itself; rather, “there is a patchwork of legal and regulatory requirements built for other purposes”[2]. This is also the approach many other nations have taken, such as Singapore, one of the very few nations to rank higher than the UK in AI preparedness[3]. Brazil appears to be one of the only countries to have seriously considered legislating on AI so far, with a bill focused on the framework and liability of AI passing its Chamber of Deputies; however, this has garnered significant criticism for going against the traditional principles of Brazilian tort liability[4]. The EU has also proposed rules for the governance of AI, which group AI systems according to the dangers they pose[5]. Whilst this is a sensible approach, the government is right to suggest that it has little utility for what the UK wishes to achieve[6]; the “context-specific approach”[7] is far more sensible for both innovation and safety.

 

2.2 For the short term, the position described in 2.1 is sensible. AI is a rapidly developing technology and there is no universally accepted definition of what AI actually is[8]. The UK's definition is very broad, meaning any legislation at this stage may fail to regulate AI adequately: its current definition[9] covers too many technologies, and that definition can and most likely will change as the technology develops. As a result, most laws regulating AI would be of limited utility in the future. The current situation is therefore satisfactory in its present form, and existing laws, such as the Data Protection Act 2018[10], adequately protect many rights of citizens that AI may infringe upon[11], such as the “(limited) right to explanation against automated decision making”[12] and strong protections of private data.

 

3. Analysing the effectiveness of the government’s strategy

 

3.1 Through the National AI Strategy and the AI regulation paper, ‘Establishing a pro-innovation approach to regulating AI’, the government has outlined its future approach to AI regulation. It consists of setting out “core characteristics of AI”[13] and letting sector-specific regulators set their own rules for AI on a non-statutory basis[14], ultimately “regulating the usage of AI, rather than the technology itself”[15]. Naturally, this has the benefit of being highly flexible and seemingly prioritises innovation; however, it comes with some key trade-offs, outlined in this section.

 

3.2 Regulators will not be experts in AI. The logic behind this approach is that regulators are “best placed to identify and respond to the emerging risks through the increased use of AI”[16]. Whilst regulators may be experts in their own jurisdictions, they cannot simultaneously be experts in AI and understand the complex dangers that may emerge in the future[17]. There is currently massive demand for AI experts, with 69% finding AI job vacancies hard to fill[18], and UK universities experiencing a massive brain drain to the private sector that is affecting AI research[19]. As such, regulators suffer from large information asymmetries compared to the companies they will be regulating: there are simply not enough AI experts in the public sector[20], let alone enough to allow all sector-specific regulators to regulate AI effectively and internalise any negative externalities. Because expertise would be spread thinly across many different regulators, there will be a massive “knowledge deficit”, as the private sector provides better pay and working conditions, ultimately “slowing down and aggravating the process of regulation”[21]. Thus, a major weakness of decentralised regulation is the lack of expertise, which may result in severe flaws slipping through the cracks and in unethical AI design.

 

3.3 The government has briefly considered the issues with its ‘context-specific approach’, suggesting the introduction of cross-sectoral principles[22] to guide the regulation of AI. Yet these principles are not legislation; they are merely guidelines, which regulators are able to “interpret and implement”[23]. As a result, there may be a vast array of conflicting AI regulations across different sectors that still hinder innovation, as regulators' own interpretations of these guidelines lead to “inconsistent or contradictory approaches across sectors”[24]. This approach also fails to consider the issues highlighted in 3.2, which may lead to significant negative externalities and thus government failure.

 

 

3.4 Another issue is that the government proposes to regulate not the AI technology itself but its usage. The issues here are twofold. Firstly, as the strategy acknowledges, regulation could be framed “narrowly” around the current “cross-cutting frameworks”[25]; regulators may, for instance, look to UK GDPR and thus fail to conceptualise the multidimensional harms of AI. The paper also suggests that issues will always need to be dealt with “at the level of individual harms”, not at the level of the specific AI technology itself; yet surely regulating the design of the AI would help to negate these harms, particularly when AI can teach itself, meaning individual harms may appear in the future as a result of present negligence. This approach to regulation seems far too reactive, potentially allowing harms to appear before they are dealt with[26], which seems an irresponsible approach to regulation, particularly as regulators are not even supposed to deal with “hypothetical” risks[27] under the current proposals.

 

4. How should AI be regulated?

 

4.1 Whilst there is little literature on “AI governance and regulation”[28], a strong recommendation would be a central regulatory AI authority[29]. I agree with the government that a single centralised agency regulating all AI is not a good idea; there are simply too many use cases and sectors to regulate everything in detail. However, I believe one regulator overseeing AI regulation, working alongside the sector-specific approach, would be highly beneficial for the future of AI governance. It would combat many of the issues with the government's plan: it addresses the scarcity of AI experts, who could be situated in a centralised AI regulator rather than dispersed across a variety of regulators, resulting in a more robust approach to AI regulation. It could also hold an “AI board of ethics”[30], or take guidance from the Centre for Data Ethics and Innovation, to oversee the ethical framework of AI and adjust it on a cross-sector or sector-specific basis. It would also allow the vast array of AI advisors and groups in the UK to liaise with a single regulator rather than an array of regulators. This regulator would likewise be best placed to suggest new laws and regulations to the government, as it would communicate with both regulators and the private sector, “with the best knowledge of both the legislative realm” and “organisations in the field of AI applications”[31]. Whilst this job might have been undertaken by the Office for AI, a central regulator would likely be better at assessing the different needs of stakeholders and society, as it would be directly involved in the regulation process.

 

4.2 Scherer also proposes the concept of AI certification by this central authority. Certification would be voluntary and could prevent many negative externalities, as the AI regulator would be able to thoroughly ensure that an AI system is safe and ethical. In exchange, certified firms would face only “limited tort liability”[32], which would act as a strong incentive for participation whilst simultaneously fostering an environment of innovation.

 

4.3 The government should also consider some legislation specifically on AI in the future. Cross-cutting legislation should be implemented for the creation of AI to ensure human safety and ethical protocols, such as ensuring human control of the AI[33] even in a “human-out-of-the-loop”[34] situation, as in Singapore[35]. The USA has already created a blueprint for an AI Bill of Rights[36]; there is little reason not to implement some basic principles that must be embedded within AI to ensure its safety for the public.

 

4.4 The government should also develop tools for AI developers to ensure the safety of their AI. For example, the US Defense Advanced Research Projects Agency (DARPA) is working on tools to aid “explainable AI”[37]. Singapore has also introduced the “world's first AI governance testing framework and toolkit”, AI Verify, which increases transparency between companies and the public[38]. Given that Singapore has almost identical aims for its AI future, the UK should also invest in developing a greater toolkit for developers to ensure greater AI ‘robustness’ and thus greater public safety.

 

4.5 A key issue with the government's approach is that its frameworks are voluntary. This issue can be mitigated by increasing public awareness and thus placing “extrinsic pressure”[39] on companies to sign up to these frameworks. A strong example is that of Finland, which introduced “Elements of AI”[40], an initiative to acquaint citizens with the basics of AI; as of 2021, an impressive 1% of the population had signed up[41]. As Rory Stewart points out[42], the UK must expand its national conversation on AI, something that must start at the top with an increase in political discussion. Greater public awareness of the benefits and harms of AI would create greater informational parity, and thus encourage safer design of AI within companies.

 

5. Conclusion

 

5.1 Whilst the current governance of AI is sufficient, it is not future-proof. As AI develops, the UK must strike a strong balance between innovation and safe, ethical AI. The recommendations in Section 4 aim to create better uniformity and coordination among regulators, and are designed to ensure that the UK becomes the international standard for AI development, just as it is for its robust legal system, in the hope that strong and sensible governance will create a better future for the country and the world.

 

(November 2022)

 

Bibliography

 

The White House. ‘Blueprint for an AI Bill of Rights | OSTP’, n.d. https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

Boesl, Dominik B. O., and Martina Bode. ‘Technology Governance’. In 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (EmergiTech), 421–25. IEEE, 2016.

Butcher, James, and Irakli Beridze. ‘What Is the State of Artificial Intelligence Governance Globally?’ The RUSI Journal 164, no. 5–6 (19 September 2019): 88–96. https://doi.org/10.1080/03071847.2019.1694260.

Dafoe, Allan. ‘AI Governance: A Research Agenda’. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK 1442 (2018): 1443.

‘Elements of AI’. Accessed 17 November 2022. https://www.elementsofai.com/.

Strome, Elissa, Claudia May Del Pozo, Gina Neff, Radu Puchiu, Abdijabar Mohamed, Fadi Salem, Raj Shekhar, Karthik Nachiappan, and Yaseen Ladak. ‘Government AI Readiness Index 2021’. Oxford Insights, 2021.

‘Establishing a Pro-Innovation Approach to Regulating AI’. HM government, 20 July 2022. https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement.

Gasser, Urs, and Virgilio AF Almeida. ‘A Layered Model for AI Governance’. IEEE Internet Computing 21, no. 6 (2017): 58–62.

Sample, Ian. ‘Big Tech Firms’ AI Hiring Frenzy Leads to Brain Drain at UK Universities’. The Guardian, 2 November 2017, sec. Science. https://www.theguardian.com/science/2017/nov/02/big-tech-firms-google-ai-hiring-frenzy-brain-drain-uk-universities.

Laranjeira de Pereira, José Renato, and Thiago Guimarães Moraes. ‘Promoting Irresponsible AI: Lessons from a Brazilian Bill’. Heinrich Böll Stiftung Brussels Office - European Union, 14 February 2022. https://eu.boell.org/en/2022/02/14/promoting-irresponsible-ai-lessons-brazilian-bill.

‘Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’. European Commission, 21 April 2021. https://artificialintelligenceact.eu/the-act/.

‘Model Artificial Intelligence Governance Framework, Second Edition’. Personal Data Protection Commission Singapore, n.d. https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.ashx.

‘National AI Strategy’. HM government, September 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf.

‘PDPC | Singapore’s Approach to AI Governance’. Accessed 17 November 2022. https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework.

Perry, Brandon, and Risto Uuk. ‘AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk’. Big Data and Cognitive Computing 3, no. 2 (2019). https://doi.org/10.3390/bdcc3020026.

Free, Rachel, Charles Kerrigan, and Barbara Zapisetskaya. ‘AI, Machine Learning & Big Data Laws and Regulations | United Kingdom | GLI’. GLI - Global Legal Insights. Global Legal Group, 2022. https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/united-kingdom.

Rahwan, Iyad. ‘Society-in-the-Loop: Programming the Algorithmic Social Contract’. Ethics and Information Technology 20, no. 1 (2018): 5–14.

Stewart, Rory, and Alastair Campbell. ‘Women in Politics, AI Advancements and Airports’. The Rest Is Politics, n.d.

Scherer, Matthew U. ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’. Harv. JL & Tech. 29 (2015): 353.

‘The Economic Impact of Artificial Intelligence on the UK Economy’. PwC, June 2017.

The National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021, SI 2021 No. 1256.

Wirtz, Bernd W, Jan C Weyerer, and Benjamin J Sturm. ‘The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration’. International Journal of Public Administration 43, no. 9 (2020): 818–29.

 

 


[1] The Economic Impact of Artificial Intelligence on the UK Economy, PwC, June 2017. P4

[2] ‘Establishing a Pro-Innovation Approach to Regulating AI’ (HM government, 20 July 2022), https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement. P5

[3] Elissa Strome et al., Government AI Readiness Index 2021 (Oxford Insights, 2021). P61

[4] José Renato Laranjeira de Pereira and Thiago Guimarães Moraes, Promoting Irresponsible AI: Lessons from a Brazilian Bill, Heinrich Boll Stiftung Brussels Office - European Union, 14 February 2022, https://eu.boell.org/en/2022/02/14/promoting-irresponsible-ai-lessons-brazilian-bill.

[5] Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (European Commission, 21 April 2021), https://artificialintelligenceact.eu/the-act/.

[6] National AI Strategy, 2021. P52

[7] Establishing a Pro-Innovation Approach to Regulating AI. P2, P11

[8] Urs Gasser and Virgilio AF Almeida, A Layered Model for AI Governance, IEEE Internet Computing 21, no. 6 (2017): 58–62. P62

[9] The National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021, SI 2021 No. 1256.

[10] National AI Strategy. P53

[11] Establishing a Pro-Innovation Approach to Regulating AI. P5

[12] Gasser and Almeida, A Layered Model for AI Governance. P59

[13] Establishing a Pro-Innovation Approach to Regulating AI. P8

[14] Ibid P2

[15] Ibid P8

[16] Ibid P11

[17] Matthew U Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, Harv. JL & Tech. 29 (2015): 353. P395

[18] National AI Strategy.

[19] Ian Sample, Big Tech Firms’ AI Hiring Frenzy Leads to Brain Drain at UK Universities, The Guardian, 2 November 2017, sec. Science, https://www.theguardian.com/science/2017/nov/02/big-tech-firms-google-ai-hiring-frenzy-brain-drain-uk-universities.

[20] Gasser and Almeida, A Layered Model for AI Governance. P59

[21] Bernd W Wirtz, Jan C Weyerer, and Benjamin J Sturm, The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration, International Journal of Public Administration 43, no. 9 (2020): 818–29.

[22] Establishing a Pro-Innovation Approach to Regulating AI. P12

[23] Ibid P13

[24] National AI Strategy. P52

[25] National AI Strategy. P52

[26] Wirtz, Weyerer, and Sturm, The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.

[27] National AI Strategy. P2

[28] Wirtz, Weyerer, and Sturm, The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.

[29] Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.

[30] Gasser and Almeida, A Layered Model for AI Governance.

[31] Wirtz, Weyerer, and Sturm, The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.

[32] Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. P393

[33] Scherer.

[34] Iyad Rahwan, Society-in-the-Loop: Programming the Algorithmic Social Contract, Ethics and Information Technology 20, no. 1 (2018): 5–14.

[35] Model Artificial Intelligence Governance Framework, Second Edition (Personal Data Protection Commission Singapore, n.d.), https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.ashx. P30

[36] Blueprint for an AI Bill of Rights | OSTP, The White House, n.d., https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

[37] James Butcher and Irakli Beridze, What Is the State of Artificial Intelligence Governance Globally?, The RUSI Journal 164, no. 5–6 (19 September 2019): 88–96, https://doi.org/10.1080/03071847.2019.1694260.

[38] PDPC | Singapore’s Approach to AI Governance, accessed 17 November 2022, https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework.

[39] Dominik B. O. Boesl and Martina Bode, Technology Governance (2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (EmergiTech), IEEE, 2016), 421–25.

[40] Elements of AI, https://www.elementsofai.com/.

[41] Elissa Strome et al., Government AI Readiness Index 2021. P21

[42] Rory Stewart and Alastair Campbell, Women in Politics, AI Advancements and Airports, The Rest Is Politics, n.d.