Written Evidence Submitted by UK BioIndustry Association

(GAI0026)


Introduction to the use of AI in the life science sector

The UK’s R&D-intensive life sciences sector is widely recognised as world-leading: it delivers substantial benefits to the economy and to the health of the nation, and it is key to the Government’s net-zero agenda. It is a growing sector that presents a unique opportunity for the UK. The life sciences industry employs 268,000 people across the UK. There are 6,330 life sciences businesses, 85% of which are SMEs, and combined they generate a turnover of £88.9bn.[1] From improving patients’ lives through new treatments and digital healthcare, to the development of environmentally sustainable technologies such as biological fossil-fuel substitutes and biodegradable bioplastics, the sector’s deep understanding of biology is helping to address humankind’s greatest challenges.

AI technology has the potential to revolutionise the healthcare industry and many other biology-based technology sectors. It is becoming increasingly important in speeding up the development of much-needed medicines, medical devices and diagnostics; lowering the cost of R&D in the sector; selecting the patients who will benefit most from a treatment; and making diagnostics more accurate. It is SMEs that are at the forefront of AI innovation, developing technologies that can be harnessed by the NHS and traditional pharmaceutical companies.

Different uses of AI across the life sciences sector come with different applications and therefore different risk profiles. The life sciences industry is therefore in favour of a sector-based, risk-based approach to regulation, which allows for innovation while protecting the rights of the patients and consumers most at risk.

 

Introduction to the current regulation of AI in the life sciences

Medical devices

Uses of AI in the life sciences which have the greatest impact on the public are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA). The MHRA regulates medical AI as ‘software as a medical device’ (SaMD). The MHRA is continuing to publish details on how SaMD will be regulated in the UK post-Brexit. It has consulted on changes to the regulatory framework as part of its Software and AI as a Medical Device Change Programme and the new legislative framework for medical devices in the UK. The response to the consultation on medical devices outlined the Government’s decision not to include a specific definition of AI as a medical device in the new regulations. The MHRA has also decided not to set specific legal requirements for AI beyond those being considered for SaMD, as it does not want to be overly prescriptive; instead, it intends to publish relevant guidance. AI used in the life sciences sector which is not currently regulated by the MHRA, for example in early drug discovery that does not fall within the definition of a medical device[2], is likely to remain unregulated by it. As discussed later, we believe that this is the right approach, subject to adequate guidance.

Software that is regulated as SaMD by the MHRA must have a medical purpose, such as the prevention, diagnosis, monitoring or treatment of disease or injury. More specifically, the MHRA is proposing to use the following definition of ‘software’ in the UK medical devices regulations (which would include AI): “A set of instructions that processes input data and creates output data”. Given this broad definition, many applications used in healthcare, including an Excel spreadsheet, could be considered SaMD.

Data protection and privacy

The use of personal data in research is currently subject to several layers of governance and regulation, including but not limited to ethical review, data protection impact assessments, the common law duty of confidentiality and the UK General Data Protection Regulation (UK GDPR). There are also existing provisions for automated decision-making under the UK GDPR (Article 22), which gives people the right not to be subject to solely automated decision-making[3]. A range of governance therefore already exists to protect individuals in relation to the use of personal data and automated decision-making, and these measures ensure that the public interest is protected through robust review of ethics and privacy. Innovators that work with AI trained on personal data already need to comply with these different layers of regulation. These regulatory requirements represent a significant burden for SMEs, as the data governance framework is complex and time-consuming to navigate; any additional regulation would therefore stifle innovation in small and medium-sized companies.

Professional and product liability


The UK has a strong consumer protection framework, which protects consumers from harm caused by products. Any software product sold in the UK, including those using AI, would be covered by this law. In addition, any professional using or applying AI would be subject to professional liability should professional negligence be found. Therefore, although not specific to AI, a variety of frameworks exists to protect the interests of patients, the public and consumers from the risks associated with AI.

 

The effectiveness of current AI governance in the UK

The key benefit of the current AI governance framework is that innovative uses of AI to support basic biomedical research can operate without additional regulation. For example, uses of AI to simulate laboratory experiments or in early drug discovery (case study 1), which pose a low risk to the public, face regulation proportionate to that risk. Any research using personal data will be subject to the data protection rules outlined above. This pro-innovation approach means that innovators can develop and experiment with AI without needing to navigate unnecessary regulation where there will be no direct impact on people. Conversely, uses of AI which pose more risk to the public, such as in the diagnosis of cancer (case study 2), are regulated robustly by an existing regulator, the MHRA. This application-based approach means that uses of AI in the life sciences are robustly regulated without impeding research and innovation; we are therefore in favour of this approach.

One of the main drawbacks of the current approach to regulating AI is the lack of clarity about what falls under the definition of SaMD. The extent to which UK laws apply to AI is often a matter of interpretation, making them hard to navigate. This is a particular issue for smaller businesses, which may not have adequate legal support. To address this, clearer guidance on what does and does not qualify as SaMD should be made available, with support for innovators who need to navigate this framework. Industry needs robust regulatory approval to bring products to market safely and successfully.

The current approach also carries the risk that new, innovative techniques fall outside this regulation. As this is a fast-evolving field, technologies may emerge in the future that pose a risk to the public yet fall outside current regulation. This demonstrates the need for clear guidance that is reviewed periodically, or that covers technologies based on their risk to the public rather than on specific or limiting definitions.

There are fears that the current regulators do not have the expertise to review and scrutinise the use of AI. Therefore, if regulation is to be kept at a sector level, regulators should be given adequate resources to robustly review these new technologies. In addition, sector-level regulators should work together to solve common challenges and share expertise; this exchange of experience and knowledge should facilitate some standardisation across sector-level regulation.

Another drawback is that the perceived lack of regulation undermines trust among the wider public and other stakeholders. A perception that AI technologies are unregulated, and therefore a risk to the safety and well-being of the public, may lead to future resistance and concern. We would therefore advocate that any efforts to improve the governance of AI come with guidance on communication and public engagement. As discussed below, appropriate human oversight should always be built into the application of AI, and this human oversight, together with the explainability of AI, should be communicated appropriately.

Finally, regulators need to ensure that any regulation or guidance accounts for continuously training models. These models are coded to keep developing and learning from new data after deployment, which makes them very difficult to regulate. Conversely, other AI algorithms can be trained on a fixed training set and then ‘locked down’ to make decisions based solely on that training; such models can be put forward for approval on the basis of the efficacy and accuracy demonstrated after training. Special consideration and guidance on the use of continuously training models should therefore be generated.
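As an illustration only, the following minimal sketch (our own, using the open-source scikit-learn library and synthetic data; all variable names are hypothetical) shows why the two kinds of model differ for approval purposes: a locked model’s behaviour is fixed once validated, whereas a continuously training model’s behaviour keeps changing after deployment.

    # Minimal sketch: a 'locked' model versus a continuously training one.
    # Illustrative only; uses scikit-learn with synthetic data.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 5))        # hypothetical training features
    y_train = (X_train[:, 0] > 0).astype(int)  # hypothetical labels

    # Locked model: trained once on a fixed dataset, then frozen.
    # Its validated behaviour does not change once in use.
    locked = SGDClassifier(loss="log_loss").fit(X_train, y_train)

    # Continuously training model: updated with each new batch of data
    # encountered after deployment, so its decision boundary keeps moving.
    continuous = SGDClassifier(loss="log_loss")
    continuous.partial_fit(X_train, y_train, classes=[0, 1])

    X_new = rng.normal(size=(20, 5))           # data arriving post-deployment
    y_new = (X_new[:, 0] > 0).astype(int)
    continuous.partial_fit(X_new, y_new)       # no longer the version assessed

Because the continuously training model no longer matches the version originally assessed, any approval based on a fixed snapshot of its behaviour becomes stale; that is the regulatory difficulty highlighted above.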

 

The future of AI regulation in the UK

Given its potential and ubiquity in life science innovation, we believe the use of AI should be regulated based on its application. Within this framework, uses of AI which pose any risk to the public should be appropriately regulated, while those which are unlikely to have any impact on an individual’s health, safety or freedom should remain unregulated. For example, an AI algorithm which influences clinical decision making or recruitment onto clinical trials should be appropriately regulated at a sector level. For all other uses of AI within the life sciences sector, specific guidance will provide sufficient assurance. This approach will ensure that innovation is not stifled, while protecting the public interest. The current legal framework provides adequate assurance of privacy and safety within the life sciences.

Given the standing, experience and respect the MHRA has within the sector and internationally, we would also advocate that regulation remain within its auspices. However, we do see a lack of guidance on which technologies are covered by existing regulations. We would therefore welcome revised guidance that provides innovators with clarity on what they need to do. This guidance should take account of the evolving nature of these technologies and should therefore avoid overly narrow or restrictive definitions. We would also call for adequate resources and expertise to ensure AI technologies are given the robust safety review they require.

 

The review and scrutiny of AI decision making

To accurately review or scrutinise the outputs of an AI algorithm, it is important to understand how the algorithm arrived at those outputs. This means that the developers or deployers of the code can explain which parts of the input data have led to which aspects of the outputs or decisions. AI algorithms do not make decisions in the way a human would, using common sense or rational reasoning. In the example of Jiva.ai (case study 2), the innovators can explain how the algorithm reaches its decision based on parts of the radiology image and on the training data set used. This means that any spurious results can be dissected by comparing input data with output data. However, some AI algorithms may be developed and deployed with less understanding of how their outputs were generated. In all instances, a risk-based approach to review and scrutiny should be used: any uses that carry a high risk to the public should come with an in-built human review process, while low-risk uses of AI, such as in experimental design, need only have human review built in at the user’s discretion. Many organisations will plan human oversight as standard to ensure that the outputs of AI do not negatively impact their work.
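To make the risk-based review process concrete, the following minimal sketch (our own illustration; the risk tiers, confidence threshold and function names are hypothetical assumptions, not a prescribed design) shows how an organisation might route AI outputs to a human reviewer:

    # Illustrative sketch of a risk-based human review gate.
    # The risk tiers and threshold below are assumptions for illustration.
    from dataclasses import dataclass

    HIGH_RISK_USES = {"diagnosis", "clinical_decision_support", "trial_recruitment"}
    CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off for automatic acceptance

    @dataclass
    class Prediction:
        label: str
        confidence: float  # the model's reported probability for the label
        use_case: str      # the application the output feeds into

    def needs_human_review(pred: Prediction) -> bool:
        """Route high-risk or low-confidence outputs to a human reviewer."""
        if pred.use_case in HIGH_RISK_USES:
            return True  # high public risk: always reviewed by a human
        return pred.confidence < CONFIDENCE_THRESHOLD

    # A low-risk experimental-design output passes automatically...
    print(needs_human_review(Prediction("plate_layout_A", 0.97, "experimental_design")))  # False
    # ...while a diagnostic output is always escalated to a clinician.
    print(needs_human_review(Prediction("malignant", 0.97, "diagnosis")))                 # True

Under a pattern like this, human oversight is guaranteed for high-risk applications and remains at the organisation’s discretion for low-risk ones, mirroring the approach we advocate above.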

 

The BioIndustry Association

  1. The BioIndustry Association (BIA) is the trade association for innovative life sciences in the UK. Our goal is to secure the UK's position as a global hub and as the best location for innovative research and commercialisation, enabling our world-leading research base to deliver healthcare solutions that can truly make a difference to people's lives.
  2. Our members include:
     a. The BIA’s members are at the forefront of innovative scientific developments targeting areas of unmet medical need. This innovation leads to better outcomes for patients, to the development of the knowledge-based economy and to economic growth. Many of our members are small, pre-revenue companies operating at the translation interface between academia and commercialisation.
     b. We have a growing number of members working at the interface of patient and health data and innovation, including those using analytic models, machine learning and AI. The ability to work on cutting-edge innovations with minimal bureaucracy is of real importance to these companies.

 

(November 2022)


[1] Office for Life Sciences (2021), Bioscience and health technology sector statistics 2020: https://www.gov.uk/government/statistics/bioscience-and-health-technology-sector-statistics-2020

[2] MHRA, Medical devices: software applications (apps): https://www.gov.uk/government/publications/medical-devices-software-applications-apps

[3] Information Commissioner’s Office, What does the UK GDPR say about automated decision-making and profiling?: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-does-the-uk-gdpr-say-about-automated-decision-making-and-profiling/