Written Evidence Submitted by the Academy of Medical Sciences

(GAI0072)

Summary

Introduction

The Academy’s mission is to help create an open and progressive biomedical and health research sector to improve the health of people everywhere. In this response, we will comment specifically on the regulation and governance of artificial intelligence (AI) used in healthcare and health research. Our response to this call for evidence is based on our previous policy work on AI and health and other relevant topics (e.g., health data), as well as evidence from members of our elected Fellowship and leadership programme,[8] which include some of the UK’s foremost experts in clinical and academic medical research.

Effective regulation of AI-based technologies is an emerging challenge for many countries, and is particularly important in the healthcare space, where there may be serious implications for health.[9] AI offers new methods of healthcare management and delivery, with associated opportunities for improving healthcare efficiency and cost effectiveness. Participants at a roundtable on AI and health (held in 2019 by the Academy in partnership with the Medical Research Council (MRC) and the National Institute for Health Research (NIHR)) explored challenges, opportunities and priorities for the development of AI-driven healthcare technologies.[10] Examples of the use of AI in health were discussed, ranging from accelerated, high-accuracy diagnosis to digital resource allocation for improving health system efficiencies. AI also presents an opportunity to improve the efficiency of biomedical research and development, for example through the rapid identification of novel drug targets.[11],[12]

However, the evolving nature of some AI-based health technologies raises the question of how to ensure that performance is monitored and maintained following regulatory approval and adoption in healthcare systems (also known as ‘post-marketing surveillance’). Furthermore, many AI-based health technologies are, wholly or in part, a ‘black box’, meaning it is not evident to the end-user how the output of a system has been generated. This is a particular problem for healthcare professionals, patients and carers trying to make decisions about care. To realise the potential benefits while avoiding unintentional harm, effective regulation and governance of AI-based health technologies is essential. A critical challenge in regulating AI in healthcare is to ensure appropriate regulatory safeguards whilst avoiding stifling research and innovation in the field, so that health benefits can be delivered to patients in a timely manner.[13]

The Academy previously developed a set of principles to guide the development, evaluation and deployment of data-driven technologies (including those using AI) in health and social care, reflecting the values and expectations of patients, the public and healthcare professionals. The principles were developed with input from experts and key stakeholders from across academia, digital health, data and pharma companies, the NHS, learned societies and the regulatory, funding and charity sectors.[14] They are highly relevant to the governance of AI-based health technologies, which are by nature data-driven technologies.

  1. Purpose, value and benefits: Data-driven technologies should be designed and used for clearly defined purposes that uphold the social values of the NHS and benefit individuals, the NHS, or society.
  2. Privacy and rights: Data-driven technologies should be designed and used in ways and settings that respect and protect the privacy, rights and choices of patients and the public.
  3. Public engagement and partnership: Those determining the purpose and uses of data-driven technologies should include patients and the public as active partners.
  4. NHS data stewardship and responsibilities: The NHS, and those acting on its behalf, should demonstrate their continued trustworthiness by ensuring responsible and effective stewardship of patient data and data-driven technologies in the NHS.
  5. Evaluation and regulation: Data-driven technologies should be evaluated and regulated in ways that build understanding, confidence and trust, and guide their use in the NHS.

Research and regulation of healthcare products are areas of longstanding excellence in the UK, including for medical devices (broadly defined as products with therapeutic and/or diagnostic capabilities).[15] However, the governance of AI-based health technologies, many of which are classed as medical devices, is a developing area in the UK. In the Autumn Statement 2022, the Government announced that it would task the Government Chief Scientific Adviser and National Technology Adviser, Sir Patrick Vallance, with reviewing how the UK can better regulate emerging technologies to enable their rapid and safe introduction.[16] The Select Committee should consider the findings of this review, should it prove relevant to the regulation of AI-based health technologies.

As health is a devolved function, there may be differences in how the research, development and deployment of AI-based health technologies is regulated and governed in the four nations of the UK. Several key organisations involved in the governance of research, development and deployment of AI-based health technologies and systems for healthcare are listed below, but other bodies may also be involved in the devolved nations:

Consultation questions

  1. How effective is current governance of AI in the UK?

The UK benefits from longstanding excellence in both the research and the regulation of healthcare products, including medical devices (broadly defined as products with therapeutic and/or diagnostic capabilities).[19] However, the governance of AI in the UK health context is a developing area, and there are gaps and areas for further improvement. We have heard concerns about the capacity of regulatory systems in the UK, including the MHRA (and the Approved Bodies supporting the MHRA in the evaluation of medical devices), and are aware that the need to increase capacity to meet current demand was highlighted in the 2021 Regulatory Horizons Council Report on Medical Devices.[20] Sufficient resourcing is essential for governing bodies to meet increasing regulatory demand, including for AI, in an effective and timely manner, and we encourage the Committee to assure itself that this is provided. Below, we describe five areas where the governance of AI for health could be strengthened.

1.1            Governing the use of data for research and development of AI-based health technologies

Access to high quality, representative health datasets is essential for the successful research, development and downstream deployment of AI-based technologies in the healthcare system. It is essential that mechanisms are provided to enable data access and linkage whilst upholding the duty of confidentiality and protecting the data subject’s right to privacy.[21]

Participants at the Academy’s ‘Artificial intelligence and health’ roundtable in 2019 highlighted data access and governance as a priority for AI in health.[22] We have heard that the lack of ‘joined up’ health datasets (due to departmental and regional data collection/curation and siloed systems) and fragmented application systems for data access are limiting the advancement of academic and industry research. A move towards a federated model for data access could alleviate these issues, with central control and secure access, for which Health Data Research (HDR) UK could play a key role.[23] Strong regulatory frameworks for health data use are crucial due to the sensitivity of health data and the need for public trust in data stewardship, access, curation and use in research.[24]
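To illustrate the federated model referred to above, the deliberately simplified Python sketch below shows how an approved analysis can be run where the data are held, so that only aggregate statistics, never record-level data, leave each custodian. All node names, datasets and functions here are hypothetical illustrations, not a description of any real HDR UK infrastructure.

```python
# A minimal, illustrative sketch of a federated analysis pattern: approved
# analysis code runs where the data are held, and only aggregate statistics
# (never record-level data) are returned for central combination. All node
# names, datasets and functions are hypothetical, not a real HDR UK API.
from typing import Callable

# Each "node" stands for a departmental or regional dataset that stays in
# place behind its custodian's own access controls.
NODE_DATASETS = {
    "trust_a": [152, 148, 161, 139, 155],  # e.g. systolic blood pressure readings
    "trust_b": [143, 150, 147, 158],
    "trust_c": [160, 149, 151, 144, 153, 146],
}

def run_at_node(records: list[int],
                query: Callable[[list[int]], tuple[float, int]]) -> tuple[float, int]:
    """Execute an approved query locally; only aggregates leave the node."""
    return query(records)

def mean_query(records: list[int]) -> tuple[float, int]:
    """Return (local sum, local count): enough to combine means centrally."""
    return float(sum(records)), len(records)

def federated_mean() -> float:
    """Combine per-node aggregates without ever pooling raw records."""
    total, count = 0.0, 0
    for records in NODE_DATASETS.values():
        node_sum, node_n = run_at_node(records, mean_query)
        total += node_sum
        count += node_n
    return total / count

print(f"Federated mean across {len(NODE_DATASETS)} nodes: {federated_mean():.1f}")
```

The central service sees only per-node sums and counts; record-level data remain within each custodian's secure environment, which is the essence of the federated approach.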


1.2            Importance of public and user engagement in research and development of AI-based health interventions

We have heard that the HRA provides a robust framework through which it effectively governs and regulates clinical research involving health interventions, including interventions using AI. In addition, there are specific protocols and reporting standards for clinical trials of AI-based health interventions that help to provide consistency.[25] However, further progress should be made in engaging end-users (patients, carers, the public and healthcare professionals) in the research and development of AI-based health interventions.

The Academy champions meaningful involvement of patients, carers and the public in research, including in the development of AI-based health technologies.[26],[27],[28] This was raised as a priority at the Academy’s AI and health roundtable, where participants (with expertise from across the life sciences and AI landscape in academia, industry, the NHS, and funding bodies) proposed that the NHS, as an organisation highly trusted by the public, could play a key role in this regard, especially as its use of AI-based health technologies increases. Research funders, the HRA and the MHRA also have an important role in encouraging user-led design and co-creation of new AI-based medical devices.[29],[30]

Two-way communication between developers/researchers and end-users (including healthcare professionals, patients, carers and the public) is key to ensuring that AI-based technologies are developed to address unmet clinical needs in a way that is transparent and has acceptable levels of explainability (see question 2).[31] The UK demonstrates growing strength in this area, supported by the release of standards and guidance for patient and public involvement in research by the NIHR in 2019.[32] However, further progress can still be made in the uptake and uniform application of this guidance, especially to guard against tokenistic approaches that undermine good practice. Adequate resourcing and capacity in the healthcare system will be important to enable successful engagement of end-users with developers/researchers.

1.3            Medical device regulation and approval

Many AI-based health technologies and systems qualify as medical devices, broadly defined by the MHRA as products (including software) used for diagnostic and/or therapeutic purposes.[33] The MHRA works closely with the HRA to ensure regulatory frameworks are coherent, streamlined, and communicated clearly to the research community.[34] In 2021, the MHRA announced the ‘Software and AI as a Medical Device Change Programme’, or roadmap,[35] a programme of work to ensure that regulatory requirements for software and AI are clear and that patients are protected. This roadmap builds upon wider reforms of medical devices regulation.[36] The Academy strongly supports the MHRA’s proposal to update the existing regulatory frameworks to ensure that they are fit for purpose for medical devices using AI. In June 2022, the MHRA announced a set of revised commitments following a consultation with the sector, and we were pleased to see the:

We have heard that the MHRA’s openness to working with developers during development and evaluation of AI-based medical devices has facilitated the successful fulfilment of regulatory criteria. However, as highlighted above (section 1), sufficient resourcing will be needed to allow the MHRA and other governing bodies to meet increasing demand in a timely manner.

In future, the regulation of AI-based medical devices needs to take into consideration several challenges in the field of AI, including:

In particular, the use of AI would benefit from further or more specific regulation and guidance from the MHRA in the following areas:


1.4            Training

It is important that end-users of an AI-based health technology are trained, both to ensure that the potential benefits are realised and to guard against potential harm. We have heard that there is a gap in the training and understanding of AI-based health technologies among end-users (including patients, carers and healthcare professionals). Guidance on the nature of such training for healthcare professionals has recently been published by the NHS AI Lab and Health Education England (HEE), and guidance for users of AI-based health technology (healthcare professionals and patients) by the NHS England Transformation Directorate.[39],[40] Training is explored further in response to question 2.

1.5            A gap in governance of commercially available AI-based health technologies

Many commercially available AI-based health technologies are not classified as medical devices (such as many health apps or wearable sensors) and so are not regulated by the MHRA, only by general consumer protections. The inquiry should pay particular attention to health technologies not classified as medical devices (including those using AI), which may pose risks to data security and privacy, as well as risks to health due to poor or misleading outputs.[41] These risks were discussed at a workshop on ‘Health apps: regulation and quality control’ hosted jointly by the Academy’s FORUM and the Royal Academy of Engineering in 2014.[42] A challenge highlighted by participants at the meeting was the sheer number of health apps available and the rate at which they are developed and/or updated, which makes timely evaluation particularly challenging. Participants therefore emphasised the importance of empowering users to make informed choices. These observations from 2014 have since been echoed at a more recent workshop on ‘data-driven health’ and in comments from Academy Fellows.[43] Some organisations, such as ORCHA, seek to provide guidance and a systematic approach to assessing health apps, which may or may not be classified as medical devices, working closely with UK regulatory agencies.[44] However, assessment by such organisations covers only a subsection of AI-based technologies. Given the potential risk to health, the inquiry should consider whether further governance and regulation of such currently minimally regulated health technologies (including those using AI) may be appropriate; any such measures would need to be proportionate, safeguarding users’ health while also enabling innovation.


  2. What measures could make the use of AI more transparent and explainable to the public?

Using explainable AI in a transparent way is especially salient in the healthcare context, where understanding both how an AI-based health technology was developed and how it works can be key to making informed healthcare decisions and mitigating possible risks of harm (e.g. due to errors, biases, or inappropriate use).[45] This is also important for healthcare commissioners, regulators and payers, who need to evaluate the effectiveness, value and suitability of AI-based health technologies. Transparency and explainability also underpin trust in, and understanding of, AI-based health technologies by patients, carers and healthcare professionals. Developers should strive to make AI-based health technologies as explainable as possible and be transparent about their approaches. The MHRA roadmap highlights the importance of transparency of AI-based medical devices, requiring scrutiny of scientific validity and clinical performance.[46] Guidelines for developers are needed to ensure these plans are enacted, and these should be produced by the MHRA.
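By way of illustration of what ‘explainability’ can mean in practice (see also footnote [6]), the minimal sketch below decomposes the output of a simple linear risk model into per-feature contributions that an end-user could inspect. The feature names and weights are invented for illustration only; real AI-based medical devices are typically far more complex, and many model classes cannot be explained this directly.

```python
# An illustrative sketch of one simple form of explainability: a linear
# risk model whose prediction decomposes into per-feature contributions an
# end-user can inspect. Feature names and weights are invented; real
# devices would need clinically validated models, and many model classes
# cannot be explained this directly.
from math import exp

FEATURE_WEIGHTS = {  # hypothetical weights, assumed trained offline
    "age_over_65": 0.9,
    "smoker": 1.2,
    "high_systolic_bp": 0.7,
}
BIAS = -1.5  # hypothetical intercept

def sigmoid(x: float) -> float:
    """Map a raw score onto a 0-1 risk scale."""
    return 1.0 / (1.0 + exp(-x))

def predict_with_explanation(patient: dict[str, int]) -> tuple[float, list[tuple[str, float]]]:
    """Return a risk estimate plus each feature's additive contribution."""
    contributions = [(name, weight * patient.get(name, 0))
                     for name, weight in FEATURE_WEIGHTS.items()]
    score = BIAS + sum(value for _, value in contributions)
    return sigmoid(score), sorted(contributions, key=lambda c: -abs(c[1]))

risk, explanation = predict_with_explanation(
    {"age_over_65": 1, "smoker": 1, "high_systolic_bp": 0})
print(f"Estimated risk: {risk:.2f}")
for feature, contribution in explanation:
    print(f"  {feature}: {contribution:+.2f}")
```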

The inquiry should consider carefully to what extent it is appropriate to require AI-based health technologies to be made explainable as there are complex technical, legal and ethical implications.[47] Some of the experts that we consulted felt that a lack of explainability should not prevent the use of AI-based health technologies where there is significant and robust evidence of benefit to health outcomes, because this could prevent or delay realisation of potential health benefits to patients. However, some respondents to the MHRA consultation on the future of medical devices suggested legislation should mandate explainable AI.[48] Going forward, measures should focus on:

Enhancing the wider public’s awareness and understanding of AI is important to allow the appropriate development, deployment and use of AI-based health technologies, to build and maintain public trust, and to realise potential health benefits.[54] General awareness could be improved by targeting education in schools. One example shared with us was the integration of digital literacy into school curricula in the Netherlands, through a programme focusing on ‘media literacy, information literacy and computational thinking’.[55]

Increasing the explainability and transparency of AI-based health technologies to the end-users requires a multi-faceted approach, linking efforts across sectors and governing bodies. The trustworthiness of AI-based technologies will be enhanced by researchers and developers following best practice, striving to develop explainable AI algorithms, and reporting methods and results transparently. Education and training in the use of AI for patients, carers, the public, healthcare professionals and decision makers will also help to build and maintain trust in the use of AI-based technologies in healthcare.


  3. How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

In the context of healthcare, decisions involving AI occur at several levels: from day-to-day decisions by patients, carers and healthcare professionals that are influenced by AI-based health technologies (e.g., diagnostics), to high-level organisational decisions to deploy AI, and even governmental policies on the use of AI in the healthcare system. The appropriate review and scrutiny will vary according to the nature and purpose of these decisions. Scrutiny of decisions involving AI, and of decisions to use AI, requires transparency in the development and performance of AI-based health technologies and, ideally, explainability in how they reach their results, as discussed in question 2.

At the level of health organisations, there should be a move towards publicly available, evidence-based decisions on the use of AI in healthcare. Ideally this would include evidence for effectiveness and lack of harm, and assurance that the technology does not discriminate against marginalised groups and those with protected characteristics.

The MHRA has oversight of regulation of AI-based medical devices. As outlined in answer to question 1, increased post-marketing surveillance and a standard quantified error recording system would allow improved evidence-based scrutiny of AI-based medical device performance and facilitate the review (and challenge) of decisions on use of AI in healthcare going forwards.
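As a purely illustrative sketch of the kind of quantified error recording system described above, the code below logs each deployed prediction against the outcome eventually confirmed, maintains a rolling error rate, and raises an alert for review when that rate exceeds a pre-specified threshold. The window size, threshold and field names are assumptions made for illustration, not regulatory requirements.

```python
# A purely illustrative sketch of quantified error recording for post-market
# surveillance: each deployed prediction is logged against the outcome that
# is later confirmed, and a rolling error rate is compared with a
# pre-specified threshold so that drift triggers review. Window size,
# threshold and field names are assumptions, not regulatory requirements.
from collections import deque
from dataclasses import dataclass

@dataclass
class PredictionRecord:
    case_id: str
    predicted: bool  # device output, e.g. "condition present"
    confirmed: bool  # ground truth established after follow-up

class SurveillanceLog:
    def __init__(self, window: int = 100, alert_error_rate: float = 0.10):
        self.errors = deque(maxlen=window)  # rolling record of mismatches
        self.alert_error_rate = alert_error_rate

    def error_rate(self) -> float:
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def record(self, rec: PredictionRecord) -> None:
        """Log one prediction/outcome pair and flag sustained drift."""
        self.errors.append(rec.predicted != rec.confirmed)
        window_full = len(self.errors) == self.errors.maxlen
        if window_full and self.error_rate() > self.alert_error_rate:
            print(f"ALERT after {rec.case_id}: rolling error rate "
                  f"{self.error_rate():.1%} exceeds {self.alert_error_rate:.0%}")

log = SurveillanceLog(window=50, alert_error_rate=0.10)
log.record(PredictionRecord("case-001", predicted=True, confirmed=True))
log.record(PredictionRecord("case-002", predicted=True, confirmed=False))
print(f"Rolling error rate so far: {log.error_rate():.1%}")
```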

Day-to-day decisions influenced by AI-based medical devices (e.g., individual diagnoses) should be open to review and challenge by healthcare professionals, patients and carers based on awareness of the benefits and limitations of the relevant technology. To facilitate this, training the healthcare workforce is critical (see section 1.4),[56],[57] as healthcare professionals are crucial end-users of many AI-based technologies with the opportunity to review and scrutinise them day-to-day. They will also play a key role in explaining their use to patients, carers, and the public.


  4. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

Medical devices, including those that use AI, are, and should continue to be, regulated by the MHRA. As described in previous questions, over the past two years the MHRA has outlined a roadmap for regulating AI-based medical devices, which we welcome.[58] The MHRA should continue to work jointly with organisations such as the HRA in regulating the research and development of AI-based medical devices for healthcare.[59] Alongside the MHRA roadmap, NICE is playing a leading role in the development of a Multi-Agency Advisory Service for AI regulation, a service to help developers navigate the medical device regulation framework.[60]

Below, we outline specific considerations for: 1) the development, deployment and use of AI-based health technologies as medical devices; and 2) the use of AI-based health technologies, which may not be classified as medical devices, in wider society.

4.1            Use of AI in medical devices

AI-based technologies represent a special class of medical devices, and there is currently significant international debate about the practical and ethical issues surrounding their use and deployment.[61] Strong regulatory frameworks will be important in enabling the UK to navigate the successful deployment of AI in healthcare.

The MHRA’s roadmap provides a strong basis for the regulation of AI-based medical devices.[62] As explored in question 1 (see 1.3), many of the Academy’s recommendations have already been incorporated into this roadmap.[63] In particular, the roadmap outlines robust measures for data privacy, protection and security. There is also a useful focus on scientific validation and clinical performance evaluation, which are key regulatory concerns of AI-based health technologies (as discussed in question 1). Further steps should be taken by the MHRA to ensure post-marketing surveillance of AI-based medical devices, to allow quantitative monitoring of AI healthcare algorithm errors and utilisation of updated clinical evidence.

4.2            Use of AI outside of medical devices

As mentioned in the answer to question 1 (see 1.5), not all AI-based health technologies are classed as medical devices and regulated by the MHRA. An ever-increasing number of commercially available health apps and wearable sensors (including those using AI) pose a risk of harm to the public.[64] However, the question of how they should be regulated is a difficult one, as it would be a significant challenge to assess them all.[65] Organisations such as ORCHA assess a subsection of health apps (which may or may not be classified as medical devices) and provide guidance to both developers and end-users, working closely with UK regulatory agencies such as NICE.[66] Building on and reinforcing such mechanisms for assessment and advice will be essential to safeguard the public from potential harm from commercially available health apps and wearable technologies, including those that are AI-based. This might include ensuring that organisations like ORCHA have the necessary in-house expertise to assess AI-based health technologies.


  5. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

Any further legislation on AI in health should be light touch and should focus on enduring core principles, to avoid stifling research and innovation in this fast-moving field. This was also highlighted by responses to the MHRA consultation on the future of medical devices regulation.[67] Legislation should be supported with detailed regulatory standards and guidance that are systematically reviewed and updated as data-driven, AI-based health technologies evolve. As recognised in the MHRA roadmap, legislation and regulatory guidance should also be harmonised with international regulation as far as possible, to reduce divergence and ensure relevance to global markets.[68] Regulatory harmonisation provides a strong platform for international collaboration and commercialisation in health research, and it is crucial that our regulatory systems continue to enable this collaboration and avoid creating unnecessary barriers to innovation.[69] One relevant international initiative is ‘STANDING Together’, which is developing recommendations for ensuring data inclusivity and AI generalisability.[70] The UK’s departure from the EU may offer some opportunities for the MHRA to be innovative and flexible in the way it regulates medical devices: to streamline and improve processes and to reduce the time taken for patients to access safe and effective AI-based health technologies, while maintaining strong regulatory standards in the UK.[71] These potential benefits will need to be balanced against the importance of harmonisation with international regulations, which provides continuity for, and alleviates further regulatory burdens on, academia and business.

Finally, as mentioned in previous sections (1.5, 4.2), many commercially available AI-based health technologies (those not classed as medical devices) are currently minimally regulated. The inquiry should consider how current governance and guidance might be improved to ensure the safe use of commercial AI-based health technologies. Beyond legislation, app stores and online repositories can be useful tools for appraisal and for raising awareness.[72]


  6. What lessons, if any, can the UK learn from other countries on AI governance?

As highlighted in question 5, it will be important to harmonise the governance of AI internationally where possible. The MHRA has established communication with organisations developing AI governance in other countries, such as the FDA and Health Canada.[73] Other examples of governance that the UK should consider include those of the US and the EU.[74],[75],[76]


(November 2022)



[1] https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency/about/our-governance

[2] The Academy of Medical Sciences (2021). Response to All Party Parliamentary Group on Access to Medicines and Medical Devices

[3] The Academy of Medical Sciences (2021). Response to All Party Parliamentary Group on Access to Medicines and Medical Devices

[4] The Academy of Medical Sciences (2022). Response to the MHRA’s consultation on legislative changes for clinical trials

[5] Medicines and Healthcare products Regulatory Agency (2022). Software and AI as a Medical Device Change Programme - Roadmap

[6] AI algorithms often are perceived as black boxes making inexplicable decisions. Explainability is the concept that a machine learning model and its output can be explained in a way that “makes sense” to a human being at an acceptable level. Certain classes of algorithms, including more traditional machine learning algorithms, tend to be more readily explainable, while potentially performing worse. Others, such as deep learning systems, while performing better, remain much harder to explain. Improving the ability to explain AI systems remains an area of active research.

[7] https://www.clinical-trials.ai/

[8] The Academy of Medical Sciences. FLIER scheme. https://acmedsci.ac.uk/grants-and-schemes/mentoring-and-other-schemes/FLIER

[9] The Academy of Medical Sciences (2019). Artificial intelligence and health

[10] The Academy of Medical Sciences (2019). Artificial intelligence and health

[11] The Academy of Medical Sciences (2019). Artificial intelligence and health

[12] Melo MCR, Maasch JRMA & de la Fuente-Nunez C (2021). Accelerating antibiotic discovery through artificial intelligence. Commun Biol 4, 1050. https://doi.org/10.1038/s42003-021-02586-0

[13] The Academy of Medical Sciences (2017). Response to the House of Lords’ Artificial Intelligence Committee call for evidence

[14] The Academy of Medical Sciences (2018). Our data-driven future in healthcare

[15] Medicines and Healthcare products Regulatory Agency (2020). Medical devices: how to comply with the legal requirements in Great Britain

[16] HM Treasury (2022). Autumn Statement 2022

[17] Medicines and Healthcare products Regulatory Agency (2022). Software and AI as a Medical Device Change Programme - Roadmap

[18] National Institute for Health and Care Excellence (2022). Multi-agency advisory service (MAAS) for artificial intelligence (AI) and data-driven technologies

[19] Medicines and Healthcare products Regulatory Agency (2020). Medical devices: how to comply with the legal requirements in Great Britain

[20] Regulatory Horizons Council (2021). Report on Medical Devices

[21] The Academy of Medical Sciences (2014). Data in Safe Havens

[22] The Academy of Medical Sciences (2019). Artificial intelligence and health

[23] The Academy of Medical Sciences (2019). Artificial intelligence and health

[24] The Academy of Medical Sciences (2018). Our data-driven future in healthcare

[25] https://www.clinical-trials.ai/

[26] The Academy of Medical Sciences (2019). Artificial intelligence and health

[27] The Academy of Medical Sciences (2017). Response to the House of Lords’ Artificial Intelligence Committee call for evidence

[28] The Academy of Medical Sciences (2021). Response to the MHRA consultation on the future regulation of medical devices in the United Kingdom

[29] The Academy of Medical Sciences (2019). Artificial intelligence and health

[30] The Academy of Medical Sciences (2021). Response to the MHRA consultation on the future regulation of medical devices in the United Kingdom

[31] The Academy of Medical Sciences (2019). Artificial intelligence and health

[32] National Institute for Health and Care Research (2019). PPI (Patient and Public Involvement) resources for applicants to NIHR research programmes

[33] Medicines and Healthcare products Regulatory Agency (2020). Medical devices: how to comply with the legal requirements in Great Britain

[34] https://www.hra.nhs.uk/about-us/partnerships/

[35] Medicines and Healthcare products Regulatory Agency (2022). Software and AI as a Medical Device Change Programme - Roadmap

[36] Medicines and Healthcare products Regulatory Agency (2022). Consultation on the future regulation of medical devices in the United Kingdom

[37] The Academy of Medical Sciences (2019). Artificial intelligence and health

[38] Leslie D, The Alan Turing Institute (2019). Understanding artificial intelligence ethics and safety

[39] NHS AI Lab and Health Education England (2022). Developing healthcare workers’ confidence in AI

[40] https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence/

[41] Magrabi F, et al. (2019). Why is it so difficult to govern mobile apps in healthcare? doi: 10.1136/bmjhci-2019-100006

[42] The Academy of Medical Sciences (2014). Health apps: regulation and quality control

[43] Japan Agency for Medical Research and Development & The Academy of Medical Sciences (2020). UK-Japan Symposium on Data-Driven Health: Data strategies to predict risk, prevent and manage disease in individuals and populations

[44] https://www.orcha.co.uk/

[45] The Academy of Medical Sciences (2019). Artificial intelligence and health

[46] Medicines and Healthcare products Regulatory Agency (2022). Software and AI as a Medical Device Change Programme - Roadmap

[47] Amann J, et al. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making. doi: 10.1186/s12911-020-01332-6

[48] Medicines and Healthcare products Regulatory Agency (2022). Government response to consultation on the future regulation of medical devices in the United Kingdom

[49] Collins G, et al. (2021). Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open 11:e048008. doi: 10.1136/bmjopen-2020-048008

[50] https://www.clinical-trials.ai/

[51] The Academy of Medical Sciences (2019). Artificial intelligence and health

[52] The Academy of Medical Sciences (2017). Response to the House of Lords’ Artificial Intelligence Committee call for evidence

[53] NHS AI Lab and Health Education England (2022). Developing healthcare workers’ confidence in AI

[54] The Academy of Medical Sciences (2017). Response to the House of Lords’ Artificial Intelligence Committee call for evidence

[55] Fisser P & Strijker A (2019). Digital Literacy as Part of a New Curriculum for the Netherlands. In: Handbook of Research on Media Literacy Research and Applications Across Disciplines

[56] The Academy of Medical Sciences (2019). Artificial intelligence and health

[57] The Academy of Medical Sciences (2018). New technologies that use patient data

[58] Medicines and Healthcare products Regulatory Agency (2022). Software and AI as a Medical Device Change Programme - Roadmap

[59] https://www.hra.nhs.uk/about-us/partnerships/

[60] National Institute for Health and Care Excellence (2021). Multi-agency advisory service (MAAS) for artificial intelligence (AI) and data-driven technologies

[61] The European Commission (2021). Regulation on Artificial Intelligence (the EU AI Act)

[62] The Academy of Medical Sciences (2021). Response to the MHRA consultation on the future regulation of medical devices in the United Kingdom

[63] The Academy of Medical Sciences (2021). Response to the MHRA consultation on the future regulation of medical devices in the United Kingdom

[64] Magrabi F, et al. (2019). Why is it so difficult to govern mobile apps in healthcare? doi: 10.1136/bmjhci-2019-100006

[65] The Academy of Medical Sciences (2014). Health apps: regulation and quality control

[66] https://www.orcha.co.uk/

[67] Medicines and Healthcare products Regulatory Agency (2022). Government response to consultation on the future regulation of medical devices in the United Kingdom

[68] The Academy of Medical Sciences (2021). Response to All Party Parliamentary Group on Access to Medicines and Medical Devices inquiry

[69] The Academy of Medical Sciences (2022). Response to the MHRA’s consultation on legislative changes for clinical trials

[70] https://www.datadiversity.org/

[71] The Academy of Medical Sciences (2021). Response to APPG on Access to Medicines and Medical Devices inquiry

[72] The Academy of Medical Sciences (2014). Health apps: regulation and quality control

[73] U.S. Food and Drug Administration (2021). Good Machine Learning Practice for Medical Device Development: Guiding Principles

[74] https://www.state.gov/artificial-intelligence/

[75] https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[76] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai