Written evidence submitted by the Institute of Physics and Engineering in Medicine (IPEM)

(GAI0051)

About IPEM

IPEM’s response 

1. In UK healthcare, governance related to AI is largely achieved via pre-existing regulatory frameworks (for example, medical device legislation and associated standards). We believe that the current lack of AI-specific standards has caused uncertainty over the steps required to ensure the safe, effective development and clinical use of AI. For example, clinical and scientific computing staff may be called upon to evaluate commercial AI auto-contouring software, but there is little guidance on how this should be done.

2. It is noted, however, that the MHRA (Medicines and Healthcare products Regulatory Agency) is embarking on its Software and AI as a Medical Device Change Programme, which will help to provide additional clarity to developers and consumers. New standards such as BS 30440 and BS/AAMI 34971 are welcome additions to the available literature.

3. We suggest that existing governance arrangements for research are generally adequate for AI-focused studies. However, problems remain with access to clinical data and with inconsistent interpretation of information governance guidance between trusts.

4. We suggest that healthcare workers are best placed to make the use of AI more transparent and explainable to the public. Improving their understanding of AI should therefore be a priority. If a clinician does not understand an AI tool and what it is and is not capable of, then trust (from both patients and healthcare workers) is likely to be low. Conversely, if a clinician can use an AI tool with confidence, the software becomes an asset and can be better explained to patients and the public.

5. We believe that in healthcare, decisions involving the deployment and routine use of AI should be made jointly with healthcare staff (clinicians, nurses, clinical scientists, engineers and others). The use of AI should be audited, with discrepancies between AI and clinician decisions analysed and subjected to a second review where necessary. Longer-term post-market surveillance should be used to monitor how AI performs, not just in terms of raw software performance but also in how it is used and how it continues to affect the patient pathway. Such an auditable approach could also be used to plot trends and detect drift in key indicators.

6. Under existing arrangements a patient can refuse any form of treatment, which could include AI-informed care. However, it may not be obvious to a patient that AI was involved in their treatment, and there is no obligation for an NHS organisation to advertise this. Being open and transparent about decision making, with an emphasis on clinicians taking overall responsibility for patient care, is likely to be the best way to reassure patients and the public. If the education of front-line clinicians is improved, they will have the confidence to discuss any concerns a patient may have.

7. We believe that in healthcare, AI regulation is likely best achieved under the MHRA and the Care Quality Commission (CQC), bodies which already regulate many aspects of AI development and deployment under pre-existing frameworks. Like other healthcare technologies, AI relies on healthcare staff and interconnected services to perform adequately. Regulation should therefore take a systems approach, verifying that the wider workflow involving AI is safe, inclusive, effective and robust. To ensure reliability and validity, it should ideally be a requirement that assessments of AI performance are based on data representative of the wider clinical population, covering common as well as uncommon patient and disease characteristics.

8. We believe that the legal framework around healthcare decisions involving AI is unclear and largely untested. For instance, if a clinician makes a decision that disagrees with an AI recommendation, it is not clear how this would affect a liability case. We suggest that it would be beneficial for relevant professional bodies to produce guidance on how AI could and should be used to inform decisions on patient care.

9. We suggest that better guidance, for example in the form of AI-specific development or deployment standards, would be more helpful than additional legislation.

10. Other healthcare regulatory agencies around the world, such as the US Food and Drug Administration (FDA), are addressing similar concerns related to AI governance. The MHRA is clearly aware of such international work (for example, collaborating on the Good Machine Learning Practice guiding principles). However, other international bodies not focused on healthcare have also produced informative documents. For example, the US government recently published the Blueprint for an AI Bill of Rights. This lists five principles and associated practices, all of which are relevant to health and whose requirements extend beyond existing legal obligations for medical devices. The document states, for instance, that "Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way."

 

(November 2022)