Written Evidence Submitted by TechWorks

(GAI0068)

Executive Summary

  1. AI and its governance are still at the research stage, are changing fast, and will affect all sectors of the economy. This is the right time to consider what will be needed as the research matures.
  2. Central guidance, coordination and funding of the many existing bodies working on governance will be critical to effective progress.
  3. Improving education and understanding at all levels of the corporate sector, particularly at executive and board level, is essential to UK success.

Background: TechWorks

TechWorks is an established industry association at the core of the UK deep tech community with an ambition to harness our fantastic engineering and innovation to develop the UK’s position as a global technology super-power.

Our mission is to strengthen the UK’s deep tech capabilities as a global leader of future technologies. To do this we form adjacent connected communities that are influential in defining and shaping the advancements of industry – providing a platform to help our members strategically leverage products and services to drive profitable growth.

As an example of a connected community, TechWorks established the Internet of Things Security Foundation (IoTSF) as a non-profit, global membership organisation formed in 2015 as a response to existing and emerging threats in vertical applications of the Internet of Things. IoTSF is now a world-leading IoT security association and expert authority in IoT security and the natural home for IoT product and service vendors, equipment suppliers, network providers, technology adopters, researchers and industry professionals. We aid industry in making it safe to connect through a dedicated program of projects, guidance, reports, events, training, standards and advocacy.

TechWorks has also established connected networks in automotive (AESIN), manufacturing (NMI) and education (UKESF). In 2021 TechWorks took the first steps in establishing a connected community (involving industry, universities and government institutions) for the application of AI/ML to product development and content. At a recent meeting at Thales in Reading, the priorities for 2023 were established:

  1. Build a shared database for researching into the application of AI/ML
    1. Training data around common shared interests
      1. For DV (Design Verification) this could be open-source IP (e.g. RISC-V), test benches and tests that could be used in research (e.g. generating new tests, test selection, etc.)
  2. Build a “brokering” capability for sharing information across universities, business and relevant government agencies to cover:

     Activity                       | Universities                   | Companies           | Agencies
     Speakers from industry         | Looking for speakers           | Offering speakers   | NA
     Projects, placements, interns  | Students looking               | Companies offering  | NA
     PhD/Engineering doctorates[1]  | Universities offering          | Companies looking   | NA
     Preparing students for work    | Students/universities looking  |                     |
     Employment opportunities       | Students looking               | Companies looking   | NA
     Funding competitions           | Universities looking           | Companies looking   | Agencies offering

  3. Engineer a way to use the data from the field to justify how and why a decision was made (a minimal logging sketch follows this list)
    1. This can be used to build a “defendable” case as part of an assurance framework
    2. As well as providing continuous training data
  4. Continue an activity to generate a set of resources for professional engineers who need to work in AI/ML. This includes best practice guidelines, a “cookbook” giving an overview of AI/ML techniques (far more than just deep learning neural networks) and checklists for new projects. See https://github.com/TechNES-UK/ai-best-practice.
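
As an illustration of item 3, the sketch below assumes a Python deployment and a hypothetical model call; it logs each field decision together with its inputs, model version and confidence, so the record can later support a defendable assurance case and be reused as training data. The function, field and file names are placeholders, not part of any existing TechWorks tooling.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionRecord:
        """One field decision, retained for assurance and retraining."""
        timestamp: float
        model_version: str
        inputs: dict
        output: str
        confidence: float

    def log_decision(model_version, inputs, output, confidence,
                     path="decision_log.jsonl"):
        """Append a decision record as one JSON line (an audit-trail sketch)."""
        record = DecisionRecord(time.time(), model_version, inputs, output, confidence)
        with open(path, "a") as fh:
            fh.write(json.dumps(asdict(record)) + "\n")

    # Hypothetical use around a model call:
    # output, confidence = model.predict(inputs)
    # log_decision("model-1.3", inputs, output, confidence)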

 

 


[1] Note that most companies stated a preference for Engineering Doctorates over PhDs

 

Terminology

All the technologies we consider are types of Machine Learning. Artificial intelligence is a non-specific term that covers a range of applications that use machine learning. We believe the distinction is not significant, and our response applies to both terms. We use the terms interchangeably.

A number of topics are central to this submission:

  1. Transparency and Explainability: The idea that an AI/ML system should not only give an answer but be able to show how that answer was derived from its input data.
  2. Correctness and Bias: The need for our AI to be fit for purpose, and to arrive at correct solutions in the context in which it is applied, without undue bias.
  3. Robustness: The need for AI to be functionally robust both to mistakes and happenstance, as well as to overt acts by bad actors.
  4. Uncertainty: The need for AI to provide transparent, clear, human-understandable rationalisations of its solutions, and to quantify a confidence or degree of belief that should be attached to them.
  5. Ethics: Our AI must not be used in problem domains that damage people’s fundamental rights, do undue harm, or exhibit bias against marginalised groups.
  6. Privacy and Security: Our AI must preserve the privacy and security of the data upon which it was trained.

Example 1: We can only use AI to supplement the justice system effectively if the AI can rationalise its decisions.

Example 2: We can only recommend a medical treatment on the basis of an AI if we can understand why that treatment will be effective.
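
To make the transparency and uncertainty points above concrete, the following minimal sketch (illustrative only, assuming the scikit-learn library is available) shows a model that reports both a probability, which can be read as a degree of belief, and a crude global explanation in the form of feature importances.

    # Illustrative sketch: a probability as a quantified degree of belief,
    # and global feature importances as a simple (coarse) form of explanation.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    sample = data.data[:1]                                   # one patient record
    probability = model.predict_proba(sample)[0]             # degree of belief per class
    top_features = sorted(zip(data.feature_names, model.feature_importances_),
                          key=lambda pair: pair[1], reverse=True)[:3]

    print("P(malignant) = %.2f, P(benign) = %.2f" % (probability[0], probability[1]))
    print("Most influential features overall:", [name for name, _ in top_features])

A real system would need per-decision explanations rather than global importances; this sketch only illustrates the distinction between an answer, an explanation, and a degree of belief.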

Response to specific questions

How effective is current governance of AI in the UK?

       What are the current strengths and weaknesses of current arrangements, including for research?

 

        We consider that governance of data (e.g. GDPR) is indirectly governance of AI, since machine learning relies inherently on the data it is trained on.

        Governance of data and governance of AI cannot be easily separated and so should be considered together. Example: data leakage from AI systems.

        There is a lack of centralization in AI governance. We discuss this point in respect of the legal framework later, but the same conclusion is warranted here: full centralization would slow innovation, but there is a need for a central body to provide guidance, funding and coordination.

        AI education remains insufficient at all levels. The organisations that have succeeded with AI have AI and data literacy across all levels of the company.

        Informative material is lacking at all levels, but particularly at board and executive level.

        The TechWorks AI initiative is addressing this lack of informative material, with a set of resources to improve the knowledge of professional engineers in AI/ML.

        Ethics, bias, and fairness are specific areas where informative materials are lacking.

        There is a lack of governance surrounding data sharing in industry and academia. Solutions might look toward data trusts and collectives.

 

What measures could make the use of AI more transparent and explainable to the public?

        It is important to emphasise that AI and autonomous systems are still a research area.

        The UKRI Trustworthy Autonomous Systems (TAS) Hub is one group progressing these efforts.

        Active engagement with the public by research groups is essential. Example: the Innovate UK Autonomous Pods project had members of the public travel in the pods and give feedback about their experience.

        We must communicate about transparency and explainability in terms of context and risk in a way that the layperson can understand.

        Example: There are clearly very different consequences of error in an AI that works as part of a filter on a smartphone camera, and one that regularly makes life and death decisions in a healthcare or automotive context.

        Segmentation into risk categories may be an appropriate way to communicate this.
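
        As a purely illustrative sketch of such segmentation (the tiers and wording below are our own placeholders, loosely inspired by the EU AI Act categories, not a proposed taxonomy):

        from enum import Enum

        class RiskTier(Enum):
            MINIMAL = "minimal"            # e.g. a smartphone camera filter
            LIMITED = "limited"            # e.g. a chatbot; disclosure expected
            HIGH = "high"                  # e.g. healthcare or automotive decisions
            UNACCEPTABLE = "unacceptable"  # likely to breach fundamental rights

        # Hypothetical mapping from tier to what must be communicated to the public.
        COMMUNICATION_REQUIREMENTS = {
            RiskTier.MINIMAL: "no specific explanation required",
            RiskTier.LIMITED: "disclose that an AI system is involved",
            RiskTier.HIGH: "plain-language explanation plus a stated confidence",
            RiskTier.UNACCEPTABLE: "application not permitted",
        }

        print(COMMUNICATION_REQUIREMENTS[RiskTier.HIGH])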

        We discuss the need for uncertainty in AI in the legal framework below, but beyond that, uncertainty is an effective tool for communication in its own right. In many cases, this is significantly more interpretable to a public audience than an explainable system designed for expert users.

 

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

       Are current options for challenging the use of AI adequate and, if not, how can they be improved?

 

        The comments on the previous question about active engagement with the public apply here as well; the two questions are tightly coupled.

        There needs to be further clarification on decisions made fully or partially on the basis of AI algorithms. Existing GDPR regulation gives the right to not be subject to solely automated decisions, but a decision made by a human on the basis of the output of an algorithm which cannot explain itself is equally unfair.

        GDPR should include an explicit right, applied by default, to exclude individuals and their data from being used as input to AI algorithms. Example: automated art generators (DALL-E and Stable Diffusion) give no credit to the artists upon whose work they were trained.

        Review and scrutiny need to be evaluated in the context of the risk in which the AI algorithm operates. This is the point about communication of risk made in the previous section.

        There is a big power asymmetry between collectors of data and individuals (who provide that data). At present, many everyday services are so tied into data collection that to opt out of data collection is often to opt out of society. We need to be respectful of this power asymmetry when reviewing and scrutinising decisions.

 

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

        The current mechanisms of national regulation are typically based on IEC (Europe), UL (North America) and BSI (UK) standards. The standards in development for AI/ML do not yet reflect a clear understanding of the capabilities and limitations of this technology.

        There is no specific remedy, beyond ensuring these organisations have sufficient time, resource, and expertise.

        There is no central body overseeing AI regulation. The Alan Turing Institute gives a non-exhaustive list of 108 bodies here (https://www.turing.ac.uk/sites/default/files/2022-07/common_regulatory_capacity_for_ai_the_alan_turing_institute.pdf). These bodies have a wide range of roles in the regulation of AI. Full centralization would slow innovation, but there is a need for a central body to provide guidance, funding and coordination.

        Many AI/ML applications will have a safety and security aspect.  In this context decisions taken by the system must always be legally defensible.  We offer no recommendation on how this is to be enforced, but this is an issue that needs resolving.

        The upcoming EU AI Act segments AI applications into different risk categories. This offers many ideas that could be adopted in UK law and is discussed further below in the section on best practice from other nations.

 

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

       Is more legislation or better guidance required?

 

        As discussed above in the previous question on public communication, regulation must consider the context in which AI algorithms are being applied. Stringent regulation is essential for AI dealing with life and death situations.  The same regulation applied to AI for mundane tasks would suffocate progress.

        Quantifying uncertainty (expressing a degree of belief in a solution) should be an important part of AI regulation, of equal importance to transparency and explainability. Tools for informing the public about uncertainty are essential.

        Example: An AI in an autonomous vehicle must decide whether there is a cyclist ahead at a busy junction. However, what we care about most from a decision-making perspective is how certain that decision is.
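
        A minimal sketch of how such a quantified degree of belief might drive behaviour (the thresholds and wording are illustrative assumptions only, not recommendations):

        def plan_action(p_cyclist: float) -> str:
            """Map a detector's degree of belief to a cautious driving action.

            p_cyclist is the estimated probability that a cyclist is ahead;
            the thresholds below are illustrative only.
            """
            if p_cyclist >= 0.9:
                return "yield: cyclist almost certainly present"
            if p_cyclist >= 0.3:
                return "slow down: too uncertain to proceed at speed"
            return "proceed with standard caution"

        print(plan_action(0.55))  # -> "slow down: too uncertain to proceed at speed"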

        Transparency and explainability:

        Transparency and explainability are currently insufficiently regulated. However, new regulation must be introduced carefully, clearly and in consultation with key stakeholders to avoid stifling innovation.

        We must clearly define transparency and explainability in a way that is technologically coherent and achievable. Note: this should take into account that many commonly used AIs today (deep learning neural networks) are not explainable, and there are currently no other good solutions to some of the problems solved by such AIs.

        The point made in the previous section about the lack of a centralised AI body is equally relevant to this question.

        Similarly, our point that AI and data are sufficiently intertwined that any governance of them must address them both as part of a coherent whole is extremely relevant to any discussion on legal framework.

What lessons, if any, can the UK learn from other countries on AI governance?

 

        While most countries are still at an early stage in their AI governance, there are activities in Europe and North America that could be useful to inform UK legal frameworks.

        The EU Artificial Intelligence Act makes several useful propositions.

        The segmentation of AI risks into categories provides a useful way of addressing AI regulation in the context of different risk levels.

        Following from this, the presence of an unacceptable-risk category for applications that are likely to breach fundamental rights.

        Requirements to disclose when one is interacting with an AI system (for example, bots impersonating humans).

        Requirements to maintain standards of logging, monitoring, and reporting of errors.

        The US AI Bill of Rights is not yet well developed but echoes many of these ideas.

        The US National Artificial Intelligence Initiative offers a useful proposal.

        Acceleration of privacy-enhancing technologies (PETs) to foster data sharing and analytics while preserving data security, privacy, IP protections, etc.

        The US CHIPS Act builds on the US National Artificial Intelligence Initiative.

 

(November 2022)