WRITTEN EVIDENCE SUBMITTED BY BILETA

GAI0082

Given its expertise in Information Technology Law, the British and Irish Law Education Technology Association (BILETA) welcomes the opportunity to contribute to the UK Parliament Science and Technology Committee inquiry into the governance of artificial intelligence (AI).

BILETA was formed in April 1986 to promote, develop, and communicate high-quality research and knowledge on technology law and policy to organisations, governments, professionals, students, and the public. BILETA also promotes the use of and research into technology at all stages of education. 

Summary

How effective is current governance of AI in the UK?  What are the current strengths and weaknesses of current arrangements, including for research? 

The current governance of AI in the UK is not yet well developed, although there have been a number of consultations by the UK Intellectual Property Office (UKIPO), in September 2020 and October 2021.[1] More recently, on 22 September 2022, the UKIPO published guidance on examining patent applications relating to AI inventions.[2] Furthermore, in 2021, the UK Court of Appeal upheld the High Court’s decision that an AI machine cannot be named as the inventor of a patent,[3] thereby affirming the initial decision of the UKIPO. Similar rulings rejecting AI inventorship were also handed down by the European Patent Office, Germany’s Federal Patent Court, the US Patent and Trademark Office (USPTO) and, most recently (on 17 November 2022), Australia’s Federal Court.[4]

As such, there has been much work in this field, and the Courts have given a clear response to the question of whether an AI machine can be the inventor of a patent. In delivering the judgement of the Court of Appeal, Lord Justice Arnold stated: “[We] must apply the law as it presently stands: this is not an occasion for debating what the law ought to be”. Whilst we agree with the Court of Appeal that the ruling was not the occasion to debate what the law ought to be, the case does raise the question of governance - not only from the perspective of patents but also from the perspective of copyright - as discussed below.

Whilst the 2021 ruling (also known as the DABUS ruling) is clear that an AI machine cannot be an inventor, it continues to raise the question of whether the law should be updated to clarify the position either way. The response to the UKIPO’s 2021 consultation concluded that the law need not be updated at present, as AI technology is not yet sufficiently sophisticated to “invent” without human instruction. However, the UKIPO stated that the position will be kept under review, depending on the speed of changes in AI technology. In this context, the UKIPO’s guidance on examining patent applications relating to AI inventions is a step in the right direction.

However, this raises the question of whether the law should be so reactive and continue to leave a gap in governance, especially if the UK wishes to position itself as a leader in the field of AI technologies. The UKIPO has committed to taking an active role in advancing international discussions on harmonising AI inventorship provisions; however, at present there is a lack of governance in this area, apart from the guidance provided by the UKIPO in September 2022, which applies exclusively to patents.

It is recommended that discussions on AI governance in relation to patents take place imminently, to avoid reactive law-making that leaves many industry and business stakeholders in an unclear situation. As mentioned below in this Response, other countries (such as the USA) are moving forward by taking positive steps in this field, and it is timely for the UK to consider taking such steps too.

In relation to copyright law, the consultations raised the question of whether the provisions on computer-generated works (CGWs) need updating, particularly section 9(3) of the Copyright, Designs and Patents Act 1988 and the corresponding definition in section 178. However, the responses concluded that they should not be updated, as the use of AI is still in its early stages and there is no evidence to suggest that allowing copyright protection for CGWs is in any way harmful. Once again, the position will be kept under review.

However, since the publication of the response, the emergence of prompt-driven AI image generators such as DALL-E 2, Midjourney and Stable Diffusion, to name a few, has caused much concern amongst creators and artists, who are of the view that their work is under threat from this suddenly emerging AI technology, which can produce exceptional artwork from nothing more than a simple text prompt entered on these platforms.[5] This once again raises the question, in the context of CGWs, of whether AI technologies are currently harmful to creators. However, there is literature establishing that creators should not worry about these tools - at least for now.[6]

This is similar to the patent situation, where there is a lack of clear governance either way. Yet at the same time, in September 2022, an artist in the USA received the first known copyright registration for latent diffusion AI art.[7] This raised the question of whether the artist actually ‘created’ the artwork, or whether it was created through a prompt. It is very possible that there will be a court case on this question; it is therefore recommended that in the UK too, appropriate steps be taken to deal with this ever-developing technology and to consider how any protection can be enforced. ‘Waiting to see what happens’ is not the way forward, as it does not provide an effective governance mechanism.

In terms of strengths, whilst there is a lack of clarity in relation to the overall governance framework, the UKIPO has been proactive in publishing clear guidance on examining patent applications relating to AI inventions. In comparison with other countries, this is most definitely a step in the right direction.

Secondly, following the consultations by the UKIPO, the response highlighted that a copyright exception for text and data mining will be introduced, which will be broad in nature but will ensure that rightsholders have sufficient safeguards to protect their content, including a requirement for lawful access. Other jurisdictions, such as Japan, Singapore and the EU, have already introduced such exceptions, and it is positive that the UK intends to head in the same direction. This will also assist research, which is an important area in relation to AI.

At the same time, it is recommended that serious thought be given to providing clarity around patents and inventorship, as well as copyright and computer-generated works. Currently there appears to be a ‘vacuum’ in terms of governance: many debates are taking place on the topic, but there is no clear governance framework for business, industry, and research stakeholders to abide by.

What measures could make the use of AI more transparent and explainable to the public? 

When it comes to measures that could make the use of AI more transparent and explainable, private sector corporations and governments should be explicit with the public about which decisions are based on automated systems, which decisions are supplemented with human review, and the rationale deployed by AI systems. Moreover, the public should be made aware when the personal information they give to a corporation, whether expressly or via its terms and conditions, will be added to a dataset used by an AI, to allow the public to factor that understanding into their decision as to whether to authorise data gathering and which categories of data they wish to communicate. For example, in a similar way to the notices required for operating CCTV cameras, AI systems should actively communicate to the public, via innovative means such as pop-up notifications, in a comprehensible and clear way that they are contributing data or are subject to an AI decision-making process, as well as the significance of the impact on the public and meaningful information concerning the rationale behind the decision.[8] To that effect, see Article 22 of the General Data Protection Regulation.

Moreover, to make AI more explainable and transparent, the public should also have access to remedies for the adverse effects of AI systems. In this regard, private sector corporations should provide human review and remedy avenues, responding to complaints and appeals from the public in a timely manner. Information on how often an AI system is subject to appeals and remedy requests, along with the kinds of available remedies and their effectiveness, should be regularly published. Private sector corporations should also publish information on removals, including how frequently these removals are challenged and confirmed, in addition to information on trends in content display, as well as education and case studies.[9] This would help ensure the compatibility of AI systems with the right to a fair trial under Article 6 of the European Convention on Human Rights.

Arguably, transparency does not have to be complicated: simplified explanations of the inputs, outputs, rationale, and policies of AI systems can be effective and instrumental in contributing to public debate and education.[10] Moreover, private sector corporations should endeavour to make AI more transparent and explainable by providing non-technical insight into their systems, as opposed to trying to make convoluted technical procedures legible to the public. In this regard, the focus should be on educating the public about an AI system’s existence, rationale, design, and effect, instead of focusing on the source code, inputs, outputs, and training data.[11] Additional measures which could make the deployment of AI more transparent and explainable to the public include carrying out human rights impact assessments and public consultations during the design and use of new AI systems; the outcomes of these assessments and consultations should be made public. Similarly, beyond regulatory frameworks, private sector corporations should make all AI code fully auditable and should support novel means of enabling independent, external auditing of AI systems. The outcomes of these audits should again be made public.[12]

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?  Are current options for challenging the use of AI adequate and, if not, how can they be improved?

The current options for challenging the use of AI are most certainly inadequate and should be improved.

As noted in our response to the previous question, private sector corporations and governments should be explicit with the public about which decisions are based on automated systems, which decisions are supplemented with human review, and the rationale deployed by AI systems. Below, we recommend for AI-enabled decision-making in the public sector a clear legal framework, based on primary legislation and taking cognisance of the UK’s human rights obligations, that defines key parameters for the conditions under which automated or semi-automated decision-making by AI is permissible, including processes of contestability and remedies. Also, as noted above, the public should have access to remedies for the adverse effects of AI systems, and there need to be measures to ensure the compatibility of AI systems with the right to a fair trial under Article 6 of the ECHR.

One example of a measure to ensure scrutiny of AI systems is a socio-technical audit: “an end-to-end inquiry into how a system works”. The audit would document, inter alia, the reasons for the decisions taken by the system. Audits should be adversarial in nature and performed by people outside the system, exploiting the possibility of reverse-engineering the system, so that audits do not become a tick-box exercise.[13]

It is important to remember that auditing alone cannot solve all the issues bound up in the operation of AI, and that even the best audits cannot resolve issues with systems that are inherently harmful to people or groups in society. Such systems should simply be banned from the outset.

Additionally, it is important that public sector agencies are duly empowered to inspect the technologies they are procuring and are not prevented from doing so by intellectual property rights. Public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias.[14]

Finally, we endorse Recommendation 16 of the Centre for Data Ethics and Innovation’s review, addressed to government: ‘Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.’[15] We believe this framework needs to be set out in legislation.

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

Creating a coherent framework for the regulation of AI is complex given the broad variety of uses made of it, from medical research to energy policy, and from animal welfare to property ownership.[16] It is vital that there is clarity on how AI is to be controlled, as digital innovation is a key part of the UK Innovation Strategy.[17] A lack of clarity on the applicable rules and regulatory bodies has been highlighted as a key factor in the need to update the law in this area.[18]

The government’s stated aim is for regulation that is “proportionate, light-touch and forward-looking”,[19] using a “pro-innovation, light-touch and coherent regulatory framework, which creates clarity for businesses and drives new investment”.[20] The call for light and unintrusive control is a popular view of the application of regulation to technical and innovative markets, where change and innovation can happen quickly. There is a fear that governmental control of AI could “limit or slow down development”.[21] While we are sympathetic to the needs of the free market and innovation, there are sufficient issues with how AI is impacting society that a call for “permissionless innovation”[22] cannot be supported.

As discussed earlier in this Response, the major concern about AI for citizens is the way it is being used in relation to privacy and discrimination. The lack of transparency regarding how AI systems use personal data to make important automated decisions in relation to employment, privacy, and other social issues is a significant problem,[23] which requires clear regulation to control. The current piecemeal regulation of AI, using data protection laws, the Online Safety Bill, equality laws, and rules from the financial services and medical research fields, is not fit for purpose.

Building on the recommendations given above in this Response, it can also be argued that more could be done to regulate this area. There could be more focus on governmental oversight (including potential bans) of certain types of high-risk applications of AI.[24] This would follow the European approach, which is a “context driven and fragmented puzzle” prominently focused on risk.[25] The EU Proposal for a Regulation on Artificial Intelligence[26] provides a framework for differing types of regulation depending on the way AI is being used, and locates the issue specifically within product safety rules or privacy. There are outright prohibitions for systems whose risk levels to society are too high to be acceptable (where the use of AI would be for products or services that would be highly manipulative or highly invasive of citizens’ privacy, such as biometric identification systems), while other providers using AI would need to establish risk management systems and provide an audit trail to prove they meet obligations for accuracy and security. Those who use AI to provide products or services that pose minimal risks for citizens would merely need to meet voluntary codes of conduct.

This proposed Regulation, and the general EU system of rules in this area, has been criticised as insufficiently detailed, as it fails to cover the entire spectrum of uses of AI and is lacking in granularity.[27] However, it has clear merit in its focus on risk and its requirement for risk to be managed centrally rather than by the individual producers who use AI-driven technology. The current UK proposals for AI control put initial responsibility for determining risk on the producers using AI technology, with a requirement for a named legal person in all companies using AI to assume legal liability. This has been successful in areas such as data protection, but may not be as practical in the area of AI, where the decisions made can be less clear and harder to review.

As such, we recommend a clearer set of rules to ensure that the risks of the use of machine learning and AI are mitigated, and that there is a clear sense of governmental control of high-risk applications of AI.

There is also a clear need for a straightforward set of standards for the use of AI, as adapted and defined by relevant industry leaders and groups.[28] Currently, there is a patchwork of industry standards as designated by groups such as the Digital Regulation Cooperation Forum, the Artificial Intelligence Public-Private Forum, the Central Digital and Data Office, and the Centre for Data Ethics and Innovation. These bodies should remain a central focus of the provision of regulatory oversight, as they contain industry-leading expertise in their areas. However, there is a risk that gaps are forming, and that the way rules are applied will differ depending on the way AI is being used in different industries. As such, there needs to be clarity in the way these bodies will work together. While the soft-touch approach of government guidance can be praised, without firm mandatory obligations there is a potential risk of incoherence.

The recommendation of the Digital Regulation Cooperation Forum (DRCF) as the body for oversight here has some benefits, due to its composition and strong goals and objectives in this field.[29] However, it does not meet the requirements already discussed above for a clear method for remedies[30] and complaints by affected individuals.

Another recommendation is for the development of the role of the DRCF (or an alternative agency or body) to provide a strong mechanism for challenging the use of AI-driven decisions in this area, as well as potentially providing a strong set of standards for the use of AI.[31] To follow the Government’s stated desire for context-specific regulation, the membership of this regulatory body should include a broad range of experts from all industries using AI and machine learning technology. It should also include the current regulators, such as the ICO, Ofcom, the Medicines and Healthcare products Regulatory Agency, and the Equality and Human Rights Commission. Clear standards from a body such as this would lower barriers to entry to the market, as they would reduce concerns about the application of legal liability, while also permitting innovation, as the standards would be adaptable. They would be written and approved by those in industry, meaning they would also be contextually specific, as the Government has requested.

The above recommendations for the regulation of the use of AI should ensure that current concerns are met and permit effective governance of AI in the UK while also promoting innovation. They are proportionate and should foster greater public trust in the use of the technology without stifling innovation.

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?  Is more legislation or better guidance required?  What lessons, if any, can the UK learn from other countries on AI governance?

In 1998, the UK became quite possibly the first country to legislate explicitly for the use of AI (as it was then understood) to support decision-making in the public sector, in Part 1, section 2 of the Social Security Act 1998. This landmark piece of legislation did not just ensure that concerns about the accuracy, correctness, and contestability of automated or semi-automated decision-making were addressed; the open debate also allowed issues to be voiced that went beyond the mere factual correctness of the decisions reached. Conservative MPs, in opposition at the time, raised pertinent issues about digital exclusion and the symbolic significance of denying the weakest members of society a “human ear” and further disenfranchising them, even if there were improvements in the speed, consistency and correctness of administrative decision-making.

Since then, the use of AI to support or fully automate decision-making has proliferated, though specific enabling legislation such as the SSA remains the exception. In comparison to the debate in 1998, current legislative initiatives, be it the proposed EU AI Act or the UK’s “pro-innovation approach to regulating AI”, narrow the debate down to assuring that AI systems display, as far as possible, certain desirable technical characteristics (correctness, bias-avoidance, explainability), but leave the wider concerns about the impact of AI on society, human dignity and the rule of law outside their scope. By focusing on those aspects of AI that are in principle amenable to programming solutions (more representative data, better analytical tools), focus also shifts from difficult normative decisions to methodologies of certification of technical standards by regulatory bodies. The EU AI Act, for instance, delegates significant rule-making powers to democratically unaccountable standard-setting bodies, which have neither the expertise nor the remit to consider in depth the civil rights implications of technologies. The UK approach is currently less detailed, but it too proposes to delegate rule-making away from Parliament to sectoral regulators and agencies.

While the involvement of technological experts in such a complex and rapidly evolving field is necessary and to a degree desirable, the experience with the earlier legislation for the use of AI by the public sector indicates that there are considerable benefits in primary legislation that establishes clear parameters for the use of the technology. Important normative decisions that balance conflicting legitimate interests (just how accurate does a system have to be for a given application? can benefits for one group offset disadvantages for another?) require democratic accountability and debate. It is not just the AI systems that have to be transparent: the process of regulating AI needs to be transparent too, establish clear lines of responsibility, and be subject to public debate. Nor is it just a question of the quality of legislation: the parliamentary process itself creates transparency and accountability.

We therefore recommend, for AI-enabled decision-making in the public sector, a clear legal framework, based on primary legislation and taking cognisance of the UK’s human rights obligations, that defines key parameters for the conditions under which automated or semi-automated decision-making by AI is permissible, including processes of contestability and remedies.

Legislating for the use of AI is, despite the early precursor in the UK, a nascent field, and it is therefore difficult to find clear empirical evidence of what works and what does not. The closest match with some track record, for personal data only, is arguably Article 22 of the GDPR. However, evidence of enforcement action on Article 22 is scarce, and it is one of the substantive provisions least known to the public.[32] Overall, there is a known enforcement deficit of the GDPR.[33] Any AI regulation worth enacting will therefore require appropriate resources, including to hire the right type of expertise. On the other hand, the “design-centric” focus of the GDPR has for the last four years shaped the way UK businesses structure their software architectures. National legislation, be it as a reform of data protection law or through AI-specific regulation, that now lowers these standards is likely to lead to confusion, uncertainty, and disadvantages in international markets. Lower standards for AI applications could also jeopardise the GDPR adequacy finding on which key UK industries depend.

It should also be noted that both the EU (through the EU AI Act, for all its flaws) and the US (with the Algorithmic Accountability Act) are moving towards more stringent regulation. These moves do not just reflect concerns about the technologies; they are also the result of the recognition that market acceptance requires levels of public trust that market mechanisms alone cannot generate. There is no necessary conflict between regulation through legislation and innovation. For the UK, this also means that the “Brussels Effect” and the “Silicon Valley Effect” make it likely that the EU and US models will become de facto global standards. An export-oriented UK software industry will have to work towards these standards in any case, with little benefit in developing different approaches for the domestic market.

(November 2022)


[1] UK Intellectual Property Office, Artificial Intelligence and intellectual property: call for views (September 2020) at https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views

[2] UK Intellectual Property Office, Artificial Intelligence and IP: copyright and patents (October 2021) at https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents; UK Intellectual Property Office, Guidance on examining patent applications relating to AI inventions (September 2022) at https://www.gov.uk/government/publications/examining-patent-applications-relating-to-artificial-intelligence-ai-inventions/examining-patent-applications-relating-to-artificial-intelligence-ai-inventions-the-guidance

[3] Thaler v Comptroller General of Patents Trade Marks And Designs [2021] EWCA Civ 1374

[4] Commissioner of Patents v Thaler [2022] FCAFC 62

[5] A Islam, How do DALL-E 2, Stable Diffusion and Midjourney work? (14 November 2022) at https://medium.com/mlearning-ai/dall-e-2-vs-midjourney-vs-stable-diffusion-8eb9eb7d20be

[6] S Marche, We are witnessing the birth of a new artistic medium: expect AI art to go the way of Warhol (September 2022), The Atlantic at https://www.theatlantic.com/technology/archive/2022/09/ai-art-generators-future/671568/

[7] B Edwards, Artist receives the first known copyright registration for latent diffusion AI art (September 2022) https://arstechnica.com/information-technology/2022/09/artist-receives-first-known-us-copyright-registration-for-generative-ai-art/

[8] https://documents-dds-ny.un.org/doc/UNDOC/GEN/N18/270/42/PDF/N1827042.pdf?OpenElement

[9] Ibid.

[10] Aaron Rieke, Miranda Bogen and David Robinson, “Public scrutiny of automated decisions: early lessons and emerging methods” (Omidyar and Upturn, 2018), p. 5

[11] Rieke, Bogen and Robinson, “Public scrutiny of automated decisions”, p. 8

[12] https://documents-dds-ny.un.org/doc/UNDOC/GEN/N18/270/42/PDF/N1827042.pdf?OpenElement

[13] G. Galdon Clavell, M. Martín Zamorano, C. Castillo, O. Smith, and A. Matic. Auditing algorithms: On lessons learned and the risks of data minimization. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 265–271, 2020.

[14] Sandra Wachter, Justice and Home Affairs Committee, corrected oral evidence: New technologies and the application of the law, Monday 19 October 202, https://committees.parliament.uk/oralevidence/2882/pdf/

[15] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (November 2020), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957259/Review_into_bias_in_algorithmic_decision-making.pdf

[16] Gov.uk, ‘Policy Paper: Establishing a pro-innovation approach to regulating AI’ (Gov.uk) <https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement> accessed 16/11/22

[17] Department for Business, Energy, and Industrial Strategy, ‘UK Innovation Strategy: leading the future by creating it’ (Gov.uk) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1009577/uk-innovation-strategy.pdf> accessed 16/11/22

[18] Gov.uk, n16 above. Other concerns were overlapping remits for existing regulatory bodies, inconsistencies, and gaps in regulations.

[19] Rt Hon Nadine Dorries, Ministerial foreword by the Secretary of State for Digital, Culture, Media and Sport, ‘Policy Paper: Establishing a pro-innovation approach to regulating AI’, n16 above.

[20] Rt Hon Kwasi Kwarteng, Ministerial foreword by the Secretary of State for Business, Energy and Industrial Strategy, ‘Policy Paper: Establishing a pro-innovation approach to regulating AI’, n16 above. The aim is to align the regulation with the principles in the Better Regulation Framework, and the Plan for Digital Regulation

[21] Etzioni, A., & Etzioni, O., Should Artificial Intelligence be Regulated? (2017) Issues in Science and Technology 33(4), 33

[22] O’Sullivan, A., & Thierer, A., Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation, (2018) Communications of the ACM 61(12) 33

[23] Dignam, A., Artificial Intelligence, tech corporate governance and the public interest regulatory response (2020) Cambridge Journal of Regions, Economy and Society 13(1) 37; Floridi, L., et al, AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations (2018) Minds and Machines 28(4) 689

[24] Such as fully autonomous passenger flights (Clarke, R., Regulatory Alternatives for AI (2019) Computer Law & Security Review 35(4) 398)  or autonomous weapons systems (Floridi et al, above)

[25] Folberth, A., Jahnel, J., Bareis, J., Orwat, C., and Wadephul, C., Tackling problems, harvesting benefits – a systematic review of the regulatory debate around AI, KIT Scientific Working Papers 197 (2022) 3,8

[26] European Commission, Proposal 2021/0106 (COD) of 21.4.2021 for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, European Commission, Brussels: EU

[27] Gov.uk, ‘Policy Paper: Establishing a pro-innovation approach to regulating AI’ (Gov.uk) <https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement>

[28] Bannister, F., and Connolly, R., Administration by Algorithm: a risk management framework (2020) Information Polity 25(4) 471, Brand, DJ., Algorithmic Decision Making and the Law (2020) eJournal of eDemocracy and Open Government 12(1) 115

[29] Competition and Markets Authority, DRCF Terms of Reference, (Gov.uk) <https://www.gov.uk/government/publications/drcf-terms-of-reference/terms-of-reference#goals-and-objectives> accessed 16/11/22

[30] And in literature such as Allen, R. and Masters, D., Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision making (2020) ERA Forum 20(4) 585 and Bannister and Connolly, above.

[31]  It has also been suggested that these standards could be used by the regulator for pre-market approval of certain uses of AI (see Bannister and Connolly, above and Smith, RA. & Desrochers, PR, Should algorithms be regulated by government (2020) Canadian Public Administration 63(4) 563) although this may delay the release of products on the market and have an impact on innovation.

[32] Rughiniș, Răzvan, et al. "From social netizens to data citizens: Variations of GDPR awareness in 28 European countries." Computer Law & Security Review 42 (2021): 105585.

[33] Streinz, Thomas. "The Evolution of European Data Law." The Evolution of EU Law (OUP, 3rd edn 2021) (2021): 902-936.