Written Evidence Submitted by Professor Keeley Crockett, Manchester Metropolitan University, UK



Keeley Crockett is a Professor of Computational Intelligence at Manchester Metropolitan University. She currently leads the Machine Intelligence and Data and AI Ethics themes in the Centre for Advanced Computational Science. Her research interests include the ethics of Artificial Intelligence (AI), fuzzy systems, psychological profiling using AI, fuzzy natural language processing, semantic similarity, conversational agents and intelligent tutoring systems. She led work on place-based practical Artificial Intelligence, facilitating a parliamentary inquiry with Policy Connect and the All-Party Parliamentary Group on Data Analytics that led to the inquiry report Our Place Our Data: Involving Local People in Data and AI-Based Recovery (1/4/2021).

Why am I submitting evidence?

I am personally submitting evidence due to my ongoing work: a) with small businesses developing AI solutions; b) delivering workshops on AI ethics to businesses; c) on Turing- and EPSRC-funded research projects on building citizen trust in AI through public engagement and co-production; d) participating in Policy Connect round tables; e) delivering new material in AI ethics, including governance, to UG/PG students; f) as a founding member of the Greater Manchester Responsible Tech Collective; g) as Academic Co-lead (MMU) of the GM AI Foundry project; and h) globally with IEEE.

  1. How effective is current governance of AI in the UK?


1.1.   Referring to Article 22 of the GDPR (2018), there is still insufficient knowledge of this article among SMEs working in the AI/digital/data space. The requirements for automated decision making are not well defined, are ambiguous from a legal perspective and are not specific enough for SMEs to follow. Through delivering data and AI ethics workshops to SMEs as part of the £6m ERDF GM Artificial Intelligence Foundry programme (69 to date in Greater Manchester), it is clear that upskilling in the ethical and legal aspects of AI is fundamental to staying on track with upcoming legislation (e.g. the EU AI Act) and being ahead of the curve in preparedness. SME perspectives can be found in our 2022 case study booklet, Opportunities and Challenges for Ethical AI: Greater Manchester SME perspectives on current practices and future regulation of artificial intelligence (mmu.ac.uk).

1.2.   The ICO/Turing Explaining AI Decisions guidance is comprehensive and detailed, but there is no practical, complete example of this guidance being applied within an SME. The guidance is long, and the feedback we received is that it is difficult to navigate; case studies are needed.

1.3.   Citizens are also not aware of the remit of Article 22. If they are interacting with an automated decision-making system that has a significant effect on them, it is not clear to the average citizen what their rights are in terms of asking for an appropriate explanation. In a longitudinal study first published in [1], 64% of people had not heard of Article 22 of the GDPR, and 64% of the sample were over 40 years old.


  2. What are the current strengths and weaknesses of current arrangements, including for research?


2.1   Universities in the UK aim to fully comply with the UK Concordat to Support Research Integrity. Manchester Metropolitan University has a beyond-compliance approach to ethics and an ethical approval system (ETHOS) for all research projects, which includes additional questions on the use of Artificial Intelligence. Public sector bodies should lead by example when using data for analytics/mining or machine learning to construct models for classification. Terms and conditions of data usage need to be very clear to participants. An excellent example from the NHS (Nov-22), where the definition of AI and its usage is relatively clear, is Artificial Intelligence - NHS Transformation Directorate (england.nhs.uk). However, in this case, do patients know how their data may be used (implied consent) unless they explicitly ask? Does the Health Research Authority's Confidentiality Advisory Group (CAG) understand how to assess the usage of data for a specific AI research study? (For example, has a consequence scanning/harms modelling exercise been undertaken?)


  3. What measures could make the use of AI more transparent and explainable to the public?


3.1   Upskilling and education, reaching all citizens (including marginalised communities), is essential to build trust in AI. My vision is a series of online courses suitable for all audiences from age 14 upwards that cover the fundamentals of data, data-driven technology and AI, plus the chance to take part in ethical debates about such technology. Existing courses such as Ethics of AI (mooc.fi), open to citizens of Helsinki, are suitable only for people who meet university entry requirements, which limits their population reach and impact.

3.1.1         The over-60s (the lost digital generation) are exposed to data- and/or AI-driven systems on a daily basis and are in some cases denied services (e.g. bank closures restricting access to anything but online banking, car parking machines not accepting cash but only payment via an app) because they do not have the digital literacy skills or the desire to use such systems, and they face greater isolation through not having the ability to ask for help and communicate with a human. In the drive for efficiencies and savings, building trust also means taking a step back and providing viable alternatives for this group of people. Community centres are great places for this age group to meet and receive digital support, but such support is often run by a volunteer network and may not be consistently delivered. Businesses should be mindful and set aside provision to support this group of people on their digital journey.

3.1.2         Data and AI ethics should be a fundamental part of the curriculum from Key Stage 2 upwards (similar to online safety). There are many mechanisms for doing this, for example through the Hour of Code initiative, such as AI for Oceans | Code.org.

3.1.3         Data and AI ethics should be embedded into university Computer Science/Data Science/AI courses within the UK. Knowledge of data privacy, representative data sets, bias, fairness, interpretability, explainability, accountability and responsibility, and cultural, moral, societal and environmental impact among UG and PG students should not be assumed from an interdisciplinary and domain perspective. At Manchester Metropolitan University we have dedicated modules on Data Management and Governance (UG/PG) and The Ethics of AI embedded into a range of computer science programmes.

3.1.4         Online courses have only limited reach and may not target marginalised groups of citizens, those who struggle with digital literacy and those in digital poverty. Co-designed and co-produced courses (on the themes of data and AI ethics) should be developed with communities and groups, addressing how best to build trust in AI, and then delivered to those communities. Funding would be needed, but the resulting community resource could be shared across community centres and libraries in the UK.

3.1.5         Example 1 of Building Trust - The Alan Turing Institute, People-powered AI: responsible research and innovation through community ideation and involvement grant (2022). Through this project we want to give people a voice in how technology shapes their everyday lives. The aim is that members of the Peoples Panel for AI will feel more confident in their understanding of data and AI. We also hope they will feel empowered to question scientists and innovators about the ethics of their research and innovations. In this grant, we engaged, through roadshows, two community groups with high social deprivation in the Greater Manchester area. Citizens volunteered (and were compensated) to undergo two days of training, both in their communities and at Manchester Metropolitan University, on the basics of AI and data (ethics) and techniques such as Consequence Scanning (an agile practice for responsible innovators - doteveryone) and Microsoft Harms Modelling. In October 2022, we ran our first four live panels with two businesses and early career researchers, where they pitched their AI idea or potential AI product/service. EDI monitoring data for panellists included 50% female and 50% over the age of 65. The evaluation report is due 15th December (part 1) and March 2023 (part 2). The launch event is on 14th December 2022; details here - Launch event - Greater Manchester Peoples Panel on Artificial Intelligence Tickets, Wed 14 Dec 2022 at 16:00 | Eventbrite


3.1.6         Example 2 of Building Public Trust - EPSRC standard grant, "PUBLIC ENGAGEMENT - PEAs in Pods: Co-production of community based public engagement for data and AI research" (3 years, start date September 2022). The ambition is to empower the Greater Manchester R&D community to engage meaningfully with traditionally marginalised communities and embed co-production methods into individual and institutional research processes and governance. This will be achieved by:

          Increasing the public engagement and co-production skills and confidence of researchers in universities through training, reflective mentoring and "learning-by-doing".

          Increasing knowledge about data-driven technology and AI research among community participants, thereby creating the conditions for community members to participate as active stakeholders in research and design processes.

          Demonstrating the benefits of co-production methods to the Greater Manchester R&D community as a powerful way to align research to ethical principles and real-world societal needs, especially those of traditionally marginalised communities.

          Nurturing sustained relationships between PEAs, research institutions and traditionally marginalised communities, and embedding such interactions into institutional research processes.


3.1.7         Co-design and co-production of AI and data-driven solutions with a representative group of citizens can be used to measure impact and support a theory of culture change in building trust in AI. However, physical reach into communities is required to engage them and give them a voice.



  4. How should decisions involving AI be reviewed and scrutinised in both public and private sectors?


4.1   The Digital Regulation Cooperation Forum (DRCF) is a positive step change in bringing together industry regulators. In the context of AI, specifically, the Algorithmic Processing workstream's consultation on auditing algorithms reported (Sept-22) that SMEs should be consulted on how the DRCF should approach regulation. This also applies to the reviewing and scrutinising of AI decisions. Ethics advisory boards should have their remit extended to be able to undertake reviews and keep abreast of regulation changes. The results of reviews should feed back into the AI development lifecycle and be acted upon; for example, a review of unfair, biased decision making may result in scrutiny of the data used to build the model, and thus the refresh cycle of retraining and validation of the model(s) could be changed.


  5. Are current options for challenging the use of AI adequate and, if not, how can they be improved?


5.1   Citizens are more likely to know how to make a freedom of information (FOI) request or a subject access request (SAR) than to know how to challenge an AI decision. The ICO website contains information with regard to data protection complaints, but no specific information on how to complain about an AI decision; the FCA is similar. Options for challenging AI decisions need to be clearly communicated to the public through a variety of forums on a range of media. The question is: to whom will they complain? Citizens may also not have the necessary skills and knowledge to tell whether AI was being used in a specific service/product.

5.2   Organisations should adopt the UK Government guidance Ethics, Transparency and Accountability Framework for Automated Decision-Making, which describes good practice for public body decision makers to understand automated or algorithmic decision-making. Practical steps involve engaging with legal advisors to comply with legislation, including Article 22, and the need to continuously monitor the algorithm and system. More importantly, the practical steps highlight the need to "Provide clear guidance on how to challenge a decision and incorporate accessible end user feedback loops for continuous learning and improvement" and to "Appoint an accountable officer to respond to citizen queries in real time". There is no visible public evidence or metrics (to my knowledge) that this guidance has been evaluated, or of what its impact has been. This can be improved by ensuring that all stakeholders (i.e., the decision makers, the data science/ML team of developers, third parties etc.) are adequately trained in ethical AI and understand the impact of failing to understand and consider bias, fairness, explainability, interpretability, accountability etc. The case studies included in the guidance are there to help organisations "recognise if their project involves automated assisted decision-making"; the guidance does not contain examples of real-world implementation within the public sector.

5.3   Guidance and regulation on the use of automated decision-making systems should not differ between the private and public sectors. Responsibility for responding to challenges to AI systems, as between third-party providers of AI as a service and the public/private sector, needs to be clearly defined. The DRCF needs to make available clear messaging on how citizens can challenge the use of AI, for example a counterfactual explanation request, operating in a similar way to an FOI request or SAR. However, to work this would need to be part of future legislation, and it will cost in terms of money, time and the skills required to generate such explanations.



  6. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?


6.1   In the UK, AI should not be regulated according to the forthcoming EU AI Act but should have its own regulation, which encompasses the UNESCO Recommendation on the Ethics of Artificial Intelligence (UNESCO Digital Library) and adopts a risk-based approach. The EU's classification of AI risk is a useful starting point. Evidence: within the £6m ERDF GM Artificial Intelligence Foundry programme, the four partner universities, Manchester Metropolitan University (lead), University of Manchester, University of Salford and Lancaster University, developed and implemented an AI risk assessment for all SMEs who wanted to progress to a stage two technical assist (building a prototype AI solution). This was a beneficial exercise, as businesses, technical teams and academics participated in consequence scanning to ensure risks could be mitigated.

6.2   Overarching regulatory oversight of AI should be separate from the existing DRCF, as there are many common issues (e.g., data bias, model representation and fairness) across all domains. Given the work of the ICO with the Turing Institute, and the fact that data is at the heart of model generation, the ICO currently seems in the best position to provide this overarching oversight. However, each regulatory body will have its own domain expertise, which will be needed.


  7. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?


7.1   The legal framework is still very fragmented. There are hundreds of frameworks, guidelines, standards and toolkits [2], which focus on the technologies rather than organisational processes and human behaviours; have limited recommendations on implementation (especially for SMEs); do not translate concepts into concrete action; provide no mechanisms for accountability and compliance (audit); and ignore the benefits of co-production and public scrutiny. There is still a significant gap between top-down theory and practical adoption of robust ethical practices across the AI value chain.

7.2   The current legal framework is guided by AI case law (usually surrounding privacy infringements of data), with few lawyers trained to understand AI and its impact (e.g. an automated/semi-automated product or service not capable of providing an explanation understandable by the user, or the consequences of a wrong decision on a user with no mechanism to speak to a human).


  8. Is more legislation or better guidance required?


8.1   Guidance needs to be consistent across domains and not fragmented. Guidance is not enforceable, and its implementation is not audited. Businesses need to see the benefits of investing in developing ethical AI, i.e., having both an AI and Data Governance Framework within the business with a set of well-defined standard operating procedures, versus the cost (time, money (upskilling) and resources). I believe UK legislation is required which is not as rigid as the forthcoming EU AI Act and still allows the UK to be ethical and internationally competitive within the AI product / service industry.

8.2   Citizens should be clearly signposted on where to go if they have a concern about an AI decision, and not be fobbed off. However, the caveat is that some groups of citizens (e.g., those in digital poverty, those with low digital literacy, marginalised communities and people over 60) may not know what AI is.


  9. What lessons, if any, can the UK learn from other countries on AI governance?


9.1   The EU AI Act, whilst protecting humans, adds levels of bureaucracy that may stifle innovation. Individuals need to be protected, to know their rights with regard to AI decision making, to have the right to explainable decisions and to be able to communicate with a human. Organisations developing automated products and services need to make reasonable adjustments to allow this to happen.

9.2   UK AI regulation is essential, but there needs to be training and resource for SMEs to be able to comply.



(November 2022)

[1] K. Crockett, E. Colyer and A. Latham, "The Ethical Landscape of Data and Artificial Intelligence: Citizen Perspectives," 2021 IEEE Symposium Series on Computational Intelligence (SSCI), 2021, pp. 1-9, doi: 10.1109/SSCI50451.2021.9660153


[2] K. A. Crockett, L. Gerber, A. Latham and E. Colyer, "Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses," in IEEE Transactions on Artificial Intelligence, doi: 10.1109/TAI.2021.3137091.