WRITTEN EVIDENCE SUBMITTED BY THE ROYAL COLLEGE OF RADIOLOGISTS

GAI0087

 

 

About the RCR

 

  1. The Royal College of Radiologists (RCR) is the professional membership body for doctors specialising in the fields of clinical radiology (including interventional radiology) and clinical oncology. We provide leadership to improve the standard of medical practice and training across both disciplines.

 

  2. We engage with our Fellows, members and multiple clinical partners, combining the latest research to improve training and to develop guidelines that support clinical radiology and clinical oncology patient care. This enables us to educate and support doctors throughout their careers, providing practical guidance and supporting individuals and their clinical services to facilitate better patient outcomes.

 

Introduction

 

  3. This is a very broad inquiry. The RCR believes AI regulation should consider the type of AI, its intended purpose, the associated risks and the setting in which it is used. Because appropriate governance depends so heavily on these factors, we found it difficult to provide many comments specific to the two specialties we represent. Much of our commentary below therefore considers the interests of the health sector more generally; nonetheless, these comments remain pertinent to the interests of our two specialties.

 

How effective is current governance of AI in the UK?

 

  4. This is a very broad question. In general, current governance is not very effective in practice, because there are few formal processes dedicated specifically to AI governance; what applies currently depends on the type of AI used and the setting in which it is used.

 

  5. In healthcare, each NHS trust has its own information governance processes that must be followed. It is not clear to departments what is and is not safe to use, and because formal guidance is limited, the use of AI in medical imaging research is very heterogeneous.

 

  6. The Data Protection Act plays a significant indirect role in AI governance through its restrictions on data sharing. Despite the benefits of data protection, concerns about sharing data could limit the successful development of medical image-based AI software. Successful AI algorithms rely on large volumes of high-quality training data, and governance strategies need to account for how these can be safely and successfully collected.

 

What are the strengths and weaknesses of current arrangements, including for research?

 

Strengths:

 

  7. We support the Medicines & Healthcare products Regulatory Agency (MHRA)’s Software and AI as a Medical Device Change Programme Roadmap, believing it to have strong work packages and a clear plan with good objectives.

 

Weaknesses: research

 

  8. On data sharing and use, we believe there is still some lack of clarity around:

 

  9. On research development: there need to be clear research phases – as per drug development – for the planning of commercial or non-commercial studies, and to ensure predefined metrics for studies that can be considered by regulators.

 

  10. On information sharing: there could be more information sharing with external regulators (eg the United States Food and Drug Administration [FDA] or European Union bodies) to decrease the time taken for items under review to reach approval.

 

  11. As a single national service, the NHS has a unique opportunity to develop large, highly annotated datasets. But the biggest risk in research concerns the availability of such large, well-annotated datasets, both for training and for testing. This needs to be addressed, including by making sure that the rules and regulations support it.

 

Weaknesses: non-research

 

  12. As per Work Package 1 of the aforementioned Roadmap, there is still some lack of clarity around which software qualifies as a medical device (SaMD) falling under the regulation and which does not. This includes questions around how to ensure a device performs as advertised. In imaging, for example, if a tool prioritising cases for reporting on a picture archiving and communication system does not fall under medical device regulation, how would its performance be evaluated?
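
To illustrate the kind of evaluation question this raises, the sketch below compares reporting turnaround for AI-flagged urgent cases under a hypothetical prioritised worklist against a first-come-first-served baseline. The worklist data, urgency threshold and 20-minute reporting slot are all invented for the example; this is a minimal sketch of one possible audit measure, not a proposed evaluation standard.

```python
from statistics import median

# Hypothetical worklist: (case_id, ai_urgency_score, minutes_waited_fifo).
# All values are invented for illustration only.
worklist = [
    ("case-01", 0.91, 180),
    ("case-02", 0.12, 35),
    ("case-03", 0.78, 240),
    ("case-04", 0.05, 60),
    ("case-05", 0.66, 300),
]

URGENT_THRESHOLD = 0.5  # assumed cut-off for "prioritise this case"

def median_wait_for_urgent(cases, prioritised: bool) -> float:
    """Median wait (minutes) for AI-flagged urgent cases.

    In the prioritised scenario we assume, crudely, that flagged cases
    are read first, so a case's wait is its rank in the reordered queue
    multiplied by an assumed 20-minute reporting slot.
    """
    urgent = [c for c in cases if c[1] >= URGENT_THRESHOLD]
    if not prioritised:
        return median(c[2] for c in urgent)
    ordered = sorted(cases, key=lambda c: c[1], reverse=True)
    waits = {c[0]: (rank + 1) * 20 for rank, c in enumerate(ordered)}
    return median(waits[c[0]] for c in urgent)

print("FIFO median wait (urgent):       ", median_wait_for_urgent(worklist, False))
print("Prioritised median wait (urgent):", median_wait_for_urgent(worklist, True))
```

Even a crude comparison like this makes the point that a prioritisation tool's benefit – and any harm to down-ranked cases – is measurable, and so could be audited even if the tool falls outside medical device regulation.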

 

  13. There is a need for clear risk stratification of algorithms (for proportionate regulation), eg:

 

What measures could make the use of AI more transparent and explainable to the public?

 

  14. There is a need for greater public awareness. But given that very little AI is currently used in practice, the value of running activities to raise such awareness now is debatable. General public awareness campaigns could be considered. If AI is used in hospitals or for clinical care, information leaflets or information circulated on NHS websites could be helpful. These could reassure people that AI is supervised by a clinician, that any imaging used is anonymised and so on.
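
As a concrete illustration of the anonymisation such materials would describe, the following minimal sketch strips directly identifying tags from a DICOM image using the open-source pydicom library. The tag list is deliberately incomplete and the file names are hypothetical; real de-identification should follow the DICOM standard's confidentiality profiles (PS3.15).

```python
import pydicom

# Directly identifying tags to blank before an image leaves the trust.
# Deliberately incomplete: for illustration only, not a compliant profile.
IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def anonymise(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""
    ds.remove_private_tags()  # drop vendor-specific private elements
    ds.save_as(path_out)

# Hypothetical file names for the example.
anonymise("study.dcm", "study_anon.dcm")
```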

 

  15. A number of general principles should also be articulated, including around:
  16. In addition to the principles mentioned in paragraph 15, the following should also be considered:

 

  17. Although AI tools are registered with the MHRA once CE marked, there is currently no way of knowing when and where the tools are deployed. With the aim of keeping patients safe, it may be beneficial to set up a registry of deployed AI tools. This would increase public transparency and facilitate independent auditing, which in turn would increase public trust and provide independent scrutiny.
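
As a sketch of what a registry entry might capture, the following record structure is one possibility. The fields and example values are our assumptions about what independent auditing would need; no such registry schema currently exists.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeployedAIToolRecord:
    """One entry in a hypothetical national registry of deployed AI tools."""
    tool_name: str
    manufacturer: str
    ce_udi: str                  # CE/UKCA mark or unique device identifier
    intended_purpose: str        # eg "chest X-ray triage"
    deploying_organisation: str  # eg an NHS trust
    deployment_date: date
    clinician_supervised: bool   # is every output reviewed by a clinician?
    audit_contact: str

registry: list[DeployedAIToolRecord] = []
registry.append(DeployedAIToolRecord(
    tool_name="ExampleChestTriage",          # invented name
    manufacturer="Example Vendor Ltd",       # invented manufacturer
    ce_udi="UDI-0000-EXAMPLE",
    intended_purpose="chest X-ray triage",
    deploying_organisation="Example NHS Trust",
    deployment_date=date(2022, 11, 1),
    clinician_supervised=True,
    audit_contact="ai-audit@example.nhs.uk",
))
```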

 

  18. A legal framework and the existence of regulatory bodies (required for the different sectors AI is used in) could reassure the public. These would answer questions around how a given AI or machine learning (ML) device is classified, what is expected prior to approval and what is expected following approval.

 

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

 

  19. How decisions involving AI should be reviewed depends on the tool's intended purpose and the potential risks. National guidance from the medical royal colleges and the National Institute for Health and Care Excellence (NICE) would be helpful for medical uses, but guidance would need to be specific to the use. For example, guidance for auto-contouring in radiotherapy, which will always be supervised by a clinician, would differ from guidance for AI detection/diagnostic algorithms, which may not always be supervised by a clinician.

 

  20. Additional suggestions for consideration:

 

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

  21. Today, it is unclear which specific body or bodies should regulate AI in the health sector. AI regulation depends very much on the intended purpose and the degree of human oversight. As the technology's purpose will differ between sectors and applications, individual bodies may be needed to provide sector-specific guidance.

 

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

 

  22. The RCR believes better legislation and guidance are needed, and that both should be updated to reflect current uses of AI. For medical imaging AI, considerations could include data ownership and data sharing with third parties (for technology development). Guidance regarding responsibility for failures would also be helpful.

 

What lessons, if any, can the UK learn from other countries on AI governance?

 

  23. Emerging FDA proposal for SaMD regulation:

 

The rapid pace of innovation in digital health poses challenges for regulation. New regulatory frameworks are essential to allow regulatory bodies to ensure the safety and effectiveness of new devices without slowing progress.

 

a) FDA software precertification programme (pilot):

 

The FDA would first evaluate the developer. If the developer demonstrates rigorous processes, then any of its products would undergo a streamlined review process. However, further approval challenges arise for devices that learn and evolve, as opposed to devices that are fixed (ie have a locked algorithm), because adaptive algorithms carry higher levels of risk: the product changes in real time and the changes are unpredictable. It is not yet agreed how soon a new review process would need to take place to ensure such a tool remains safe and effective.

 

b) A shortcoming of some existing premarket FDA review pathways is that they were not necessarily designed to evaluate AI. In response, the FDA issued a new framework:

 

The proposed framework was based on four principles: clear expectations on quality systems and good ML practices; premarket assessment of SaMD products; routine monitoring of SaMD products by manufacturers to determine when algorithm changes require FDA review; and transparency and real-world performance monitoring.[1]
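
To make the fourth principle concrete, the sketch below shows one simple form real-world performance monitoring could take: tracking a deployed tool's rolling sensitivity on confirmed cases and flagging it for re-review when performance drops below an agreed floor. The baseline figure, window size and tolerance are invented for illustration; this is not a description of the FDA's actual mechanism.

```python
from collections import deque

PREMARKET_SENSITIVITY = 0.92  # assumed figure from the premarket assessment
TOLERANCE = 0.05              # assumed acceptable degradation before re-review
WINDOW = 500                  # assumed number of recent confirmed cases tracked

# 1 = AI detected a clinically confirmed finding, 0 = AI missed it.
recent: deque[int] = deque(maxlen=WINDOW)

def record_outcome(ai_detected: bool) -> None:
    """Log one confirmed case as it is adjudicated in routine practice."""
    recent.append(1 if ai_detected else 0)

def needs_review() -> bool:
    """True once rolling sensitivity falls below the agreed floor."""
    if len(recent) < WINDOW:
        return False  # not enough post-deployment evidence yet
    rolling = sum(recent) / len(recent)
    return rolling < PREMARKET_SENSITIVITY - TOLERANCE
```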

 

c) Remaining questions we have in this particular field:

 

  24. There are two additional areas in the US from which the UK could draw inspiration.[2]

 

(November 2022)

 


 


[1] www.itnonline.com/article/what%E2%80%99s-next-ai-regulations-medical-imaging (last accessed 25.11.22)

[2] www.akingump.com/en/news-insights/new-fda-guidance-clarifies-exemptions-for-digital-health.html (last accessed 25.11.22)