WRITTEN EVIDENCE SUBMITTED BY THE ROYAL COLLEGE OF RADIOLOGISTS
GAI0087
About the RCR
- The Royal College of Radiologists (RCR) is the professional membership body for doctors specialising in the fields of clinical radiology (including interventional radiology) and clinical oncology. We provide leadership to improve the standard of medical practice and training across both disciplines.
- We engage with our Fellows, members and multiple clinical partners, combining the latest research to improve training and the development of guidelines to support clinical radiology and clinical oncology patient care. This enables us to effectively educate and support doctors throughout their careers by providing practical guidance and supporting individuals and their clinical services to facilitate better patient outcomes.
Introduction
- This is a very broad inquiry. The RCR believes AI regulation should consider the type of AI, its intended purpose, the associated risks and the setting in which it is used. Informed by the two specialties we represent, the RCR found it difficult to provide many specialty-specific comments; much of our commentary below therefore considers the interests of the health sector more generally. Nonetheless, these comments remain pertinent to the interests of our two specialties.
How effective is current governance of AI in the UK?
- This is a very broad question. In general, governance is not very effective in practice, because there are few formal processes dedicated purely to AI governance. What does exist currently depends on the type of AI used and the setting in which it is used.
- In healthcare, each trust will have its own information governance processes that need to be followed. It is not clear to departments what is and what is not safe to use. As there is limited formal guidance, the use of AI in medical imaging research is very heterogeneous.
- The Data Protection Act plays a significant indirect role in AI governance, due to its restrictions on data sharing. Despite the benefits of data protection, it could potentially limit the successful development of medical image-based AI software because of concerns about sharing data. Successful AI algorithms rely on large volumes of high-quality training data; governance strategies need to account for how these can be safely and successfully collected.
What are the current strengths and weaknesses of current arrangements, including for research?
Strengths:
- We support the Medicines & Healthcare products Regulatory Agency (MHRA)’s ‘Software and AI as a Medical Device Change Programme – Roadmap’, believing it to have strong work packages and a clear plan with good objectives.
Weaknesses: research
- On data sharing and use, we believe there is still some lack of clarity around:
- National Health Service (NHS) data opt out (for example, if data is fully anonymised/kept within the NHS firewall, is it OK to use?)
- Data use for commercial developments, including questions around ethics and how this fits with the data opt out
- When commercial use of NHS data is allowed
- Adaptive algorithms (how to ensure appropriate use of patient data for adaptive algorithmic change).
- On research development: there need to be clear research phases – as per drug development – for planning of commercial or non-commercial studies, and to ensure predefined metrics for studies that can be considered by regulators.
- On information sharing: there could be more information sharing with external regulators (eg United States Food and Drug Administration [FDA] or European Union bodies) to decrease the time taken for items under review to reach approval.
- As a single national service, the NHS has a unique opportunity to develop large, highly annotated datasets. However, the biggest risk in research is the availability of such large, well-annotated datasets, both for training and for testing. This needs to be addressed, including by ensuring that the rules and regulations support it.
Weaknesses: non-research
- As per Work Package 1 of the aforementioned Roadmap, there is still some lack of clarity around which software qualifies as a medical device (SaMD) and falls under the regulation, and which does not. This includes questions around how to ensure the device performs as advertised. In imaging, for example, if a tool prioritising cases for reporting on a picture archiving and communication system does not fall under medical device regulation, how would its performance be evaluated?
- There is a need for clear risk stratification of algorithms (for proportionate regulation), eg:
- Higher level of scrutiny for higher risk
- Lower level of scrutiny for lower risk.
What measures could make the use of AI more transparent and explainable to the public?
- There is a need for greater public awareness. However, given that very little AI is currently used in practice, the suitability of running activities to raise such awareness is debatable. General public awareness campaigns could be considered. If AI is used in hospitals or for clinical care, information leaflets or information circulated on NHS websites could be helpful. These could reassure people that AI is supervised by a clinician, that any imaging used is anonymised and so on.
- A number of general principles should also be articulated, including around:
- The unmet clinical need and the potential for the AI tool to meet this need
- The tested performance of any AI tool, including which dataset is used to train; which dataset is used to test; and any potential bias related to the training and testing
- The methods used to explain the AI performance.
- In addition to the principles mentioned above, the following should also be considered:
- Including public member(s) in assessment boards
- Ensuring post-marketing surveillance is strong, including possibly via a registry.
- Although AI tools are registered with the MHRA once CE marked, there is no way of knowing when and where the tools are deployed. With the aim of keeping patients safe, it may be beneficial to set up a registry of deployed AI tools. This would increase public transparency and facilitate independent auditing, which in turn would increase public trust and provide independent scrutiny.
- A legal framework and the existence of regulatory bodies (required for the different sectors AI is used in) could reassure the public. These would answer questions around the type of device (AI or machine learning [ML]), what is expected prior to approval and what is expected following approval.
How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
- Decisions involving AI depend on its intended purpose and the potential risks. National guidance from medical royal colleges/National Institute for Health and Care Excellence (NICE) for medical purposes would be helpful. But guidance would need to be specific to the use. For example, guidance would differ for auto-contouring in radiotherapy that will always be supervised by a clinician, compared with AI detection/diagnostic algorithms that may not always be supervised by a clinician.
- Additional suggestions for consideration:
- Scrutiny via MHRA panels, RCR panels and/or other AI experts
- External evaluation, or at least clarity, concerning exactly what was done by the relevant company in the clinical testing phase
- Publication of performance metrics of the algorithm.
How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
- Today, it is unclear which specific body/bodies should regulate AI in the health sector. AI regulation very much depends on the intended purpose and how much is overseen by humans. As the technology’s purpose will differ (eg between sectors and applications), individual bodies may be needed to provide specific guidance.
To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
- The RCR believes better legislation and guidance is needed. Legislation and guidance for the use of AI should be updated to reflect current uses. Considerations for medical imaging AI could potentially relate to data ownership and data sharing with third parties (for technology development). Guidance regarding responsibility for failures would also be helpful.
What lessons, if any, can the UK learn from other countries on AI governance?
- Emerging FDA proposal for SaMD regulation:
The rapid pace of innovation in digital health poses challenges for regulation. New regulatory frameworks are essential to allow regulatory bodies to ensure safety and effectiveness of new devices, without slowing progress.
a) FDA software precertification programme (pilot):
The FDA would first evaluate the developer. If the developer demonstrates rigorous processes, any of its products would undergo a streamlined review process. However, additional approval challenges arise for devices that learn and evolve, as opposed to devices that are fixed or have a locked algorithm, because adaptive algorithms carry higher levels of risk (the product changes in real time and those changes are unpredictable). It is not yet agreed how soon a new review process would need to take place to ensure the tool remains safe and effective.
b) A shortcoming of some existing premarket FDA review pathways is that they were not necessarily designed to evaluate AI. In response, the FDA issued a new framework:
The proposed framework was based on four principles:
- Clear expectations on quality systems and good ML practices
- Premarket assessment of SaMD products
- Routine monitoring of SaMD products by manufacturers to determine when algorithm changes require FDA review
- Transparency and real-world performance monitoring.[1]
c) Remaining questions we have on this particular field:
- When do changes to SaMD or adaptive ML require premarket review?
- Will precertification extend beyond the pilot phase?
- Will there be clarity on the distinction between software that is regulated and that which is non-regulated?
- How will software updates affect performance, and how will these be regulated and communicated to end users?
- There are two additional areas in the US from which the UK could draw inspiration.
- In terms of FDA review, there are software function exemptions identified in the US 21st Century Cures Act. There are also three mandatory purposes that exempt Clinical Decision Support software must have. These approaches seem sensible.[2]
- Authorisation of medical devices is critical to ensure trust. The FDA publishes summaries or statements for each approved medical device.
(November 2022)