Written Evidence Submitted by GSK

(GAI0067)

About GSK

GSK is one of the UK’s leading biopharmaceutical companies, supplying 60m packs of our medicines and vaccines in the UK and investing £1bn in R&D here each year. Our strategy is focused on the science of the immune system, human genetics and advanced technologies like AI to prevent and treat disease with vaccines and medicines.

The development, use and deployment of AI technologies form a key element of GSK’s R&D strategy. AI is central to interpreting large genetic datasets, biological pathways and networks, allowing us to understand the function of genes and develop transformative medicines. AI also has applications closer to the clinic, in areas such as computational pathology. The growth of data in biology and medicine, in both volume and modality, requires AI to combine and harness this information to improve healthcare. Ultimately, AI will increase the probability that the discovery and development of new medicines will be successful.

We are pleased to respond to the House of Commons Science and Technology Committee’s inquiry into the governance of artificial intelligence:

1.     How effective is current governance of AI in the UK? What are the current strengths and weaknesses of current arrangements, including for research?

The governance landscape for AI is still evolving in the UK. We welcome and support recent developments such as the National AI Strategy, the work of the Office for Artificial Intelligence in developing an AI governance roadmap and the recent DCMS policy paper on AI regulation. We look forward to continuing to work with the government and other stakeholders to create an effective, pro-innovation governance environment for AI/ML.

 

2.     What measures could make the use of AI more transparent and explainable to the public?

To ensure public trust and safety, it is critical that AI technologies that will affect the delivery of clinical care undergo testing to demonstrate their safety, reliability and effectiveness, as is successfully done with medicines.

 

As set out in the DCMS policy paper on AI regulation, GSK would support the development of cross-sectoral principles to create a governance and regulatory framework for AI. If applied in a risk-based manner by sector-specific regulators, such principles could help to build transparency around the governance of AI.

 

While we understand the need to include ‘appropriate explainability’ as a principle in regulatory approaches, it will be important to ensure this is not overly prescriptive and that sector-specific regulators can determine how best to implement this principle in practice. Being overly prescriptive on ‘explainability’ may even conflict with the desired outcome for the technology, such as diagnostic accuracy.

3.     How should decisions involving AI be reviewed and scrutinised in both public and private sectors? Are current options for challenging the use of AI adequate and, if not, how can they be improved?

How AI/ML is used is context-specific and, even though AI/ML in the healthcare sector is built around health data, not every application should be deemed high-risk. Regulation must be able to distinguish between technologies that would directly impact patient care (potentially higher risk) and those used early in R&D to inform the development of future products which, in the case of medicines or vaccines, would go on to be rigorously tested in clinical trials (potentially lower risk).

 

Where AI/ML technologies and products have an impact on the public or patients, sector-specific regulators such as the MHRA should lead the review and scrutiny of those technologies. Where AI technology is deployed to support internal business decision-making, such as in the early stages of the drug discovery process, businesses should have their own governance in place to provide assurance.

 

4.     How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

We welcome the UK Government’s proposed pro-innovation approach as set out in the recent DCMS policy paper on AI regulation. We agree that developing cross-sectoral principles would be constructive in shaping an overarching governance framework, but we recommend a context-driven approach in which sector-specific regulators, such as the MHRA, are empowered to weigh risks across the cross-sectoral principles and determine how these are implemented. This would ensure a regulatory approach that balances the essential need for safety with the need to foster innovation. For example, in the healthcare context, for AI as with medicines, the requirement to demonstrate safety and effectiveness is a more stringent safeguard than the requirement to demonstrate mechanism of action or ‘explainability’.

 

To keep pace with and support responsible innovation, and not inhibit it, regulatory mechanisms should focus on safety and on the outcomes a technology is trying to achieve, rather than on prescribing the specific processes, techniques or methodologies to be followed. Both the development and implementation of such AI regulation should be supported by significant opportunities for interactive discussion with industry.

 

Regulations should be underpinned by common standards. There remains a lack of global standards for the development of AI/ML technologies, for example Good Machine Learning Practice (GMLP) standards for health-related AI/ML technologies (akin to Good Clinical Practice for clinical trials, or Good Manufacturing Practice for manufacturing). Technical standards establish best practice for how new technologies should be developed, and in turn inform regulation and support harmonisation. In 2021, the MHRA, FDA and Health Canada published guiding principles for GMLP[1]. While these efforts set a positive direction, they remain at the level of principles. We would encourage the development of detailed GMLP standards in collaboration with other leading regulators.

 

We would encourage investment in the development of AI expertise among health regulators to ensure they have the required capability to regulate in this space effectively and innovatively.

 

5.     To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose? Is more legislation or better guidance required?

The development and deployment of AI is a highly iterative and fast-moving field. As set out in the DCMS policy paper on AI regulation, we welcome moves to issue regulatory guidance and not jump straight to legislation. Regulatory sandboxes are also an important way for regulators and industry to come together in controlled environments to test and learn how best to harness these innovations, with a view to shaping future regulatory frameworks.

Regarding intellectual property, further clarity and legal certainty are needed on the ability to patent biopharmaceutical inventions created with AI, regardless of the role the AI played, in order to incentivise the continued use of AI and the development of AI-devised inventions. In GSK’s view, whether a patent is granted for an invention created through the use of AI should be based solely on the normal substantive tests (e.g., novelty and non-obviousness), regardless of the AI’s contribution to the invention. GSK notes that under UK copyright law a computer-generated work can attract copyright because the author is deemed to be the person(s) who put in place the arrangements for the work to be created. If there is a need or desire to retain the requirement to name humans as inventors for patenting purposes, a similar concept could be considered for AI-devised inventions. We suggest that the UK Government continue to work with stakeholders to inform how legal frameworks are developed and implemented.

6.     What lessons, if any, can the UK learn from other countries on AI governance?

We support the UK’s approach of ensuring that the governance of AI/ML is pro-innovation; this is not necessarily the case elsewhere. GSK would support an approach that seeks to harmonise appropriate and measured global regulatory standards for AI technologies, such as the development of Good Machine Learning Practice. Common standards to work to would allow global companies to maximise their impact for patients.

 

(November 2022)


[1] MHRA, FDA and Health Canada, Good Machine Learning Practice for Medical Device Development: Guiding Principles (2021), available on GOV.UK (www.gov.uk)