Written Evidence Submitted by Professor Adrian Hopgood, University of Portsmouth

(GAI0030)

 

What measures could make the use of AI more
transparent and explainable to the public?

Introduction

I am Professor of Intelligent Systems at the University of Portsmouth. I have worked in the field of AI since 1985, in both universities and industry. At the University of Portsmouth, I lead the Cluster in AI and its Practical Applications, as well as the interdisciplinary Theme of Future & Emerging Technologies. I am a Chartered Engineer, Fellow of the BCS (British Computer Society – the Chartered Institute for IT), and a committee member for the BCS Specialist Group on Artificial Intelligence. I have published more than 100 peer-reviewed articles and a best-selling textbook, Intelligent Systems for Engineers and Scientists: A Practical Guide to Artificial Intelligence, 4th Edition, CRC Press, 2022.

I am responding because my expertise in AI technology is broader than the current dominant focus on machine learning. I can therefore bring a different technological perspective. I argue here that complementary AI technologies can help to tackle the challenges of transparency and explainability.

Evidence

The opacity of AI is sometimes cited as a shortcoming that results in a lack of trust and makes the outputs difficult to scrutinise. If that opacity is genuinely an AI problem, then AI can also provide a solution. In other words, AI technologies that are inherently transparent can be used to illuminate the opaque ones.

The definition of AI, according to UK Research and Innovation (UKRI), is “… a suite of computational technologies and tools that aim to reproduce or surpass abilities of humans to undertake complex tasks”. This is a broad definition that, significantly, mentions multiple technologies, complex tasks, and human capabilities. Yet much of the current focus is on a single tool, i.e., machine learning, performing tasks that are narrow rather than complex, with no reference to human performance.

Machine learning is a process that involves finding patterns in datasets. It is a powerful tool that has driven much of the current interest in AI, but its mechanisms are opaque. Typically, machine learning is used for classification tasks, where the algorithm draws its conclusion based on the best match with previously seen examples. It has no concept of the decision it is taking; it simply applies a label to a data item.

Before the current explosion of interest in machine learning, other forms of AI had already reached maturity. Many of these were knowledge-based techniques, in which a computer system is designed to capture human expertise. The concepts being reasoned about are explicit and textual. Crucially, these techniques are intrinsically transparent, as a chain of logic can be traced from a set of input information to the conclusions drawn.
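The transparency of knowledge-based reasoning can be illustrated with a minimal, hypothetical rule-based sketch in Python. The rules, facts, and labels are invented for illustration only; a real knowledge-based system would be far richer, but the same principle applies: every conclusion carries the chain of logic that produced it.

```python
# Minimal forward-chaining rule engine. Rules and facts are purely
# illustrative; the point is that the reasoning trace is inspectable.

# Each rule: (name, set of conditions, conclusion)
RULES = [
    ("R1", {"localised pain", "swelling"}, "suspected injury"),
    ("R2", {"suspected injury", "abnormal joint angle"}, "suspected fracture"),
]

def infer(initial_facts):
    """Apply rules until no new conclusions arise, recording each step."""
    facts = set(initial_facts)
    chain = []  # human-readable chain of logic
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                chain.append(
                    f"{name}: {' and '.join(sorted(conditions))} -> {conclusion}"
                )
                changed = True
    return facts, chain

facts, chain = infer({"localised pain", "swelling", "abnormal joint angle"})
for step in chain:
    print(step)
```

Unlike a trained model's weights, the chain printed here can be read, audited, and challenged by a non-specialist.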

Knowledge-based AI and machine learning are not exclusive choices, as each can be used to complement the other. As an example, consider the interpretation of medical X-rays. A machine learning system can learn from thousands of examples to locate fractures. Yet the system has no concept of ‘fracture’, ‘patient’ or ‘image’. The algorithm is just labelling data, based on prior examples. A human clinician, on the other hand, will use other clues beyond the pixels of the X-ray image, such as the patient’s age and strength, the angles between specific joints, and bone mobility. All this expertise can be captured within a set of knowledge-based software agents that hold explicit concepts about the patient and their anatomy, as well as the images and equipment. Such agents can sit alongside a machine-learning algorithm to verify that its outputs are consistent with medical knowledge. Further, they can provide a logical argument for why the conclusion makes sense. Conversely, they can flag any questionable decisions from an algorithm.
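The hybrid arrangement described above can be sketched as follows. This is a hypothetical illustration: the stand-in classifier, the confidence threshold, and the patient attributes are all invented, but the shape of the design is the point. A knowledge-based check receives the opaque model's output and returns an explicit verdict with stated reasons.

```python
# Hypothetical sketch of a knowledge-based agent checking the output of
# a machine-learning classifier. All names and thresholds are invented.

def ml_classifier(image_id):
    # Stand-in for a trained model: returns a label and a confidence score.
    return "fracture", 0.91

def knowledge_based_check(label, confidence, patient):
    """Test the ML output against explicit domain knowledge and return a
    verdict together with the reasons behind it."""
    support, concerns = [], []
    if label == "fracture":
        if patient["age"] >= 70:
            support.append("patient age >= 70, so a fragility fracture is plausible")
        if patient["reported_pain"]:
            support.append("reported pain is consistent with a fracture")
        if patient["age"] < 30 and not patient["reported_pain"]:
            concerns.append("young patient with no reported pain")
        if confidence < 0.6:
            concerns.append("model confidence below 0.6")
    verdict = "flag for human review" if concerns else "consistent with medical knowledge"
    return verdict, support + concerns

label, confidence = ml_classifier("xray_001")
verdict, reasons = knowledge_based_check(
    label, confidence, {"age": 78, "reported_pain": True}
)
print(verdict)
for reason in reasons:
    print("-", reason)
```

The classifier remains a black box, but every verdict the checking agent issues is accompanied by explicit, textual reasons, which is precisely the transparency the submission argues for.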

In conclusion, we need to move away from a single-technology version of AI and from complaining about its opacity. Instead, we can use the full AI toolkit, comprising concept-rich knowledge-based systems alongside data-driven algorithms. That way, we can progress towards the original definition of AI while addressing the issues of transparency and explainability.

 

(November 2022)