Written Evidence Submitted by TechWorks
(GAI0068)
TechWorks is an established industry association at the core of the UK deep tech community with an ambition to harness our fantastic engineering and innovation to develop the UK’s position as a global technology super-power.
Our mission is to strengthen the UK’s deep tech capabilities as a global leader of future technologies. To do this we form adjacent connected communities that are influential in defining and shaping industry advances, providing a platform to help our members strategically leverage products and services to drive profitable growth.
As an example of a connected community, TechWorks established the Internet of Things Security Foundation (IoTSF), a non-profit, global membership organisation formed in 2015 in response to existing and emerging threats in vertical applications of the Internet of Things. IoTSF is now a world-leading expert authority in IoT security and the natural home for IoT product and service vendors, equipment suppliers, network providers, technology adopters, researchers and industry professionals. We aid industry in making it safe to connect through a dedicated programme of projects, guidance, reports, events, training, standards and advocacy.
TechWorks has also established connected communities in automotive (AESIN), manufacturing (NMI) and education (UKESF). In 2021 TechWorks took the first steps towards a connected community, involving industry, universities and government institutions, in the application of AI/ML to product development and content. At a recent meeting at Thales in Reading, the priorities for 2023 were established, summarised in the table below.
Activity | Universities | Companies | Agencies |
Speakers from industry | Looking for speakers | Offering speakers | NA |
Projects, placements, interns | Students looking | Companies offering | NA |
PhD/Engineering doctorates[1] | Universities offering | Companies looking | NA |
Preparing students for work | Students/universities looking | | |
Employment opportunities | Students looking | Companies looking | NA |
Funding competitions | Universities looking | Companies looking | Agencies offering |
[1] Note that most companies stated a preference for Engineering Doctorates over PhDs
All the technologies we consider here are types of machine learning. Artificial intelligence is a non-specific term covering a range of applications that use machine learning. We do not believe the distinction is significant for this submission, and so we use the two terms interchangeably.
A number of topics are central to this submission. Chief among them is explainability, illustrated by two examples:
Example 1: We can only use AI to supplement the justice system effectively if the AI can rationalise its decisions.
Example 2: We can only recommend a medical treatment on the basis of an AI’s output if we can understand why that treatment will be effective.
● We consider governance of data (e.g. GDPR) to be, indirectly, governance of AI, because machine learning inherently relies on the data it is trained on.
● Governance of data and governance of AI cannot be easily separated and so should be considered together. Example: data leakage, where a trained AI system can reveal information about the data it was trained on.
● There is a lack of centralisation in AI governance. We discuss this point with respect to the legal framework later, but we think the same conclusion is warranted here: centralisation would slow innovation, but there is a need for a central body to provide guidance, funding and coordination.
● AI education remains insufficient at all levels. The AI companies that have succeeded have AI and data literacy across all levels of the organisation.
○ Informative material is lacking at all levels, but particularly at board and executive level.
○ The TechWorks AI initiative is addressing this lack of informative material, with a set of resources to improve the knowledge of professional engineers in AI/ML.
○ Ethics, bias, and fairness are specific areas where informative materials are lacking.
● There is a lack of governance surrounding data sharing in industry and academia. Solutions might look toward data trusts and collectives.
● It is important to emphasise that AI and autonomous systems are still a research area.
● The UKRI Trustworthy Autonomous Systems (TAS) Hub is one group progressing these efforts.
● Active engagement with the public by research groups is essential. Example: the Innovate UK Autonomous Pods project had members of the public travel in the pods and give feedback about their experience.
● We must communicate about transparency and explainability in terms of context and risk in a way that the layperson can understand.
○ Example: there are clearly very different consequences of error in an AI that works as part of a filter on a smartphone camera and in one that regularly makes life-and-death decisions in a healthcare or automotive context.
○ Segmentation into risk categories may be an appropriate way to communicate this.
● We discuss the need for uncertainty quantification in AI under the legal framework below, but beyond that, uncertainty is an effective communication tool in its own right. In many cases a stated degree of confidence is significantly more interpretable to a public audience than an explainable system designed for expert users.
● See also the comments under the previous question about active engagement with the public, which apply equally here; the two questions are tightly coupled.
● There needs to be further clarification on decisions made fully or partially on the basis of AI algorithms. Existing GDPR regulation gives the right not to be subject to solely automated decisions, but a decision made by a human on the basis of the output of an algorithm that cannot explain itself is equally unfair.
● GDPR should include an explicit right, by default, for individuals to exclude themselves and their data from being used as input to AI algorithms. Example: AI-generated art (DALL-E and Stable Diffusion), which gives no credit to the artists on whose work the models were trained.
● Review and scrutiny need to be evaluated in the context of the risk in which the AI algorithm operates. This echoes the point about communication of risk made in the previous section.
● There is a significant power asymmetry between the collectors of data and the individuals who provide it. At present, many everyday services are so tied to data collection that to opt out of data collection is, in effect, to opt out of society. We need to be respectful of this power asymmetry when reviewing and scrutinising decisions.
● The current mechanisms of national regulation are typically based on IEC (Europe), UL (North America) and BSI (UK) standards. Standards in development for AI/ML do not yet reflect a clear understanding of the capabilities and limitations of this technology.
○ There is no specific remedy, beyond ensuring these organisations have sufficient time, resource, and expertise.
● There is no central body overseeing AI regulation. The Turing Institute gives a non-exhaustive list of 108 relevant bodies (https://www.turing.ac.uk/sites/default/files/2022-07/common_regulatory_capacity_for_ai_the_alan_turing_institute.pdf), which have a wide range of roles in the regulation of AI. Centralisation would slow innovation, but there is a need for a central body to provide guidance, funding and coordination.
● Many AI/ML applications will have a safety and security aspect. In this context, decisions taken by the system must always be legally defensible. We offer no recommendation on how this is to be enforced, but it is an issue that needs resolving.
● The upcoming EU AI Act segments AI applications into different risk categories. This offers many ideas that could be adopted in UK law and is discussed further below in the section on best practice from other nations.
● As discussed under the previous question on public communication, regulation must consider the context in which AI algorithms are applied. Stringent regulation is essential for AI dealing with life-and-death situations; the same regulation applied to AI for mundane tasks would suffocate progress.
● Quantifying uncertainty (expressing a degree of belief in a solution) should be a part of AI regulation of equal importance to transparency and explainability. Tools for informing the public about uncertainty are essential.
○ Example: an AI in an autonomous vehicle must decide whether there is a cyclist ahead of it at a busy junction. What we care most about from a decision-making perspective is not only the answer but how certain the system is of that answer; a minimal sketch of such a decision rule follows.
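The following Python sketch is purely illustrative: the function name, threshold value and action labels are our own assumptions rather than any real system’s. It shows how a decision rule can act on the model’s stated degree of belief rather than on a bare yes/no answer:

    # Hypothetical policy value for a safety-critical task.
    CONFIDENCE_THRESHOLD = 0.95

    def plan_action(p_cyclist: float) -> str:
        """Choose an action given the model's estimated probability
        that a cyclist is present at the junction."""
        if p_cyclist >= CONFIDENCE_THRESHOLD:
            return "yield"              # confidently present: give way
        if p_cyclist <= 1.0 - CONFIDENCE_THRESHOLD:
            return "proceed"            # confidently absent: continue
        return "slow_and_reassess"      # uncertain: fall back to a safe action

    # The same binary question draws three different responses,
    # depending on how certain the model reports itself to be.
    for p in (0.99, 0.50, 0.02):
        print(f"P(cyclist)={p:.2f} -> {plan_action(p)}")

The same pattern communicates well to a lay audience: “the car slowed because it was not sure” is more interpretable than a technical explanation of the detector.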
● Transparency and explainability:
○ Transparency and explainability are currently insufficiently regulated. However, new regulation must be drafted carefully, clearly and in consultation with key stakeholders to avoid stifling innovation.
○ We must clearly define transparency and explainability in a way that is technologically coherent and achievable. Note that any definition should recognise that many commonly used AIs today (deep neural networks) are not explainable, and that there are currently no other good solutions to some of the problems such AIs solve.
● The point made in the previous section about the lack of a centralised AI body is equally relevant to this question.
● Similarly, our point that AI and data are so intertwined that any governance must address them both as part of a coherent whole is equally relevant to any discussion of the legal framework.
● While most countries are still at an early stage in their AI governance, there are activities in Europe and North America that could be useful to inform UK legal frameworks.
● The EU Artificial Intelligence Act makes several useful proposals.
○ The segmentation of AI applications into risk categories provides a useful way of addressing AI regulation in the context of different risk levels (see the sketch after this list).
○ Following from this, an unacceptable-risk category for applications that are likely to breach fundamental rights.
○ Requirements to disclose when one is interacting with an AI system (for example, bots impersonating humans).
○ Requirements to maintain standards of logging, monitoring and reporting of errors.
● The US AI Bill of Rights is not yet well developed but echoes many of these ideas.
● The US National Artificial Intelligence Initiative offers a useful proposal.
○ Acceleration of privacy-enhancing technologies (PETs) to foster data sharing and analytics while preserving data security, privacy and IP protections, among others.
● The US CHIPS Act builds on the US National Artificial Intelligence Initiative.
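To make the risk segmentation referenced above concrete, here is a minimal Python sketch. The tier names follow the EU AI Act’s published proposal, but the example applications and one-line obligation summaries are our own simplified assumptions, not the legal text:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright, e.g. likely to breach fundamental rights"
        HIGH = "permitted subject to logging, monitoring and error reporting"
        LIMITED = "permitted subject to transparency duties, e.g. disclosing a bot"
        MINIMAL = "permitted with no additional obligations"

    # Hypothetical classification of example applications by tier.
    EXAMPLE_APPLICATIONS = {
        "social scoring of citizens": RiskTier.UNACCEPTABLE,
        "medical treatment recommendation": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "smartphone camera filter": RiskTier.MINIMAL,
    }

    for application, tier in EXAMPLE_APPLICATIONS.items():
        print(f"{application}: {tier.name} ({tier.value})")

Grouping obligations by tier in this way is what allows stringent rules for life-and-death applications to coexist with a light touch for mundane ones.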
(November 2022)