Written Evidence Submitted by the Multi-Agency Advice Service (MAAS)




The Multi-Agency Advice Service (MAAS) is a project overseen and funded by the NHS AI Lab, part of the NHS Transformation Directorate. The MAAS is a collaboration between arm's length bodies within the health and social care sector:

the Health Research Authority (HRA);
the Medicines and Healthcare products Regulatory Agency (MHRA);
the Care Quality Commission (CQC); and
the National Institute for Health and Care Excellence (NICE).



The MAAS partners are building a cross-regulatory advice service for developers and adopters of digital, data-driven and AI technologies (henceforth referred to as AI) in health and care settings. The service provides a single platform for advice and guidance to support developers and adopters to navigate regulatory and evaluation pathways, acting as a ‘single front door’. The collaboration also brings together the four agencies to address shared AI policy challenges, with the intention of supporting high quality AI use in health and care and making the UK attractive to innovative technologies. The digital platform is currently in private beta testing, with public launch scheduled for early 2023. Access to the platform is available to the committee upon request.

This is a joint response developed by the MAAS partners to share information and insights gathered through the development of the MAAS to date. Please note that while each partner organisation may have a broader remit and scope, this response focuses on those areas of regulation of AI in health and social care settings that are within the remit of the MAAS.


How effective is current governance of AI in the UK? 


Current governance is broadly effective for the use of AI in health and social care today, but the risks and challenges that AI can pose cut across multiple regulators and require a diverse set of skills to address, meaning coordination is key to getting regulation of AI right in health and social care. This ecosystem of regulators and regulation, along with the MAAS, allows us to effectively manage the risks AI can pose and realise its benefits.


Similar to the regulation of medicines, governance of AI within health and care settings falls across multiple bodies. The HRA oversees health research approval for patient data used in AI products, the MHRA regulates the manufacturers and AI products placed on the market (where they qualify as medical devices), the CQC oversees regulated activities, and NICE evaluates products and services on clinical and cost-effectiveness to produce recommendations and guidance for the NHS and social care. Multiple pieces of legislation apply to AI in health and social care, including the Medical Devices Regulations 2002 (as amended) and the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014. The spread of AI governance across multiple bodies and pieces of legislation is warranted, allowing specialisation and the tailoring of regulation to address key risks. The MAAS addresses the coordination challenges that this might pose.


The MAAS project was established and funded by the NHS AI Lab in 2020 to facilitate such coordination, ensure that risks AI can pose to the safety and quality of care are addressed, improve public understanding of relevant regulation, and address the wider difficulties in bringing innovative and effective products to the market to improve health and social care. Research undertaken by the MAAS showed that developers and adopters were concerned about a lack of clarity and complexity in the system. The MAAS digital platform supports adopters and developers in understanding requirements and navigating the system, and early evidence from our independent evaluation shows positive feedback from beta testers.


Bodies involved in regulating use of AI in health and social care have different geographical remits, e.g. CQC only regulates in England, and the Health Research Authority only covers England and Wales. Therefore, regulatory pathways differ depending on where the AI is being developed or used, with developers and adopters sometimes having to address different requirements as they work across borders and oversight bodies needing to collaborate to manage risk effectively.


Sub-question: What are the current strengths and weaknesses of current arrangements, including for research? 


Strengths of current arrangements for the use of AI in health and care settings:


a)      In most cases, use of AI within health and care is regulated within existing frameworks for medical technologies;

b)      For research, the Health Research Authority already accepts and reviews research applications involving AI and digital technologies within the NHS;

c)       The MAAS digital platform (currently in beta testing) provides a single source of guidance on regulations and evaluation of AI for developers and adopters. This brings together guidance on requirements from more than just the core four MAAS partners, including: Information Commissioner’s Office (ICO); National Data Guardian; NHS England; NHS Digital; UK National Screening Committee (based in the UK Health Security Agency); Equality and Human Rights Commission (EHRC);

d)      The MAAS partnership has accelerated collaboration to allow regulators to identify, communicate about and address risks of AI rapidly and effectively, and to identify and address challenges in the regulatory pathway, thereby fostering responsible innovation;

e)      The MHRA and medical device regulation already apply to AI medical devices, primarily concerning the safety of those products. Secondary legislation to modernise the UK Medical Devices Regulations 2002 for software and AI is currently being drafted. Changes requiring legislation are minimal and are detailed in the Government response to the consultation on the future regulation of medical devices in the United Kingdom. In tandem, the Software and AI as a Medical Device Change Programme will bring forward 33 deliverables across 11 work packages to ensure such legislative change is supported with robust guidance and streamlined processes.


Weaknesses in current arrangements for the use of AI in health and social care settings:


f)        While the involvement of multiple regulatory and oversight bodies has many strengths, it also makes it harder for new entrants to the sector, and for health and care staff, to understand requirements; the MAAS addresses these challenges;

g)       Where there are multiple oversight bodies, differing perspectives can arise on matters that cut across remits. The MAAS is working to facilitate discussions between bodies, identify workable solutions and provide clarity, leveraging the specialist expertise of each regulatory body;

h)      There are challenges in ensuring that the right information can be, and is, shared between oversight bodies at the right time, so that risks are identified and responded to and people are kept safe;

i)        The methodologies and standards necessary to assure AI for safety and to ensure that a product meets its intended purpose are yet to settle, with open data science and regulatory science questions that require resolution. This limits progress in creating regulatory guidance and wider efforts at standardisation. Note that this is not limited to the health and social care sector, although regulators within the sector are collaborating with standards bodies, industry and Royal Colleges to ensure progress is made;

j)        Shortcomings in the quality, completeness, interoperability and accessibility of health and care data sets limit the effective development and improvement of AI technologies.



How should decisions involving AI be reviewed and scrutinised in both public and private sectors? 


Scrutiny of decisions involving AI in health and social care should include consideration of:








How should the use of AI be regulated, and which body or bodies should provide regulatory oversight? To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose? Is more legislation or better guidance required?


Existing legal and regulatory frameworks, incorporating the aforementioned planned changes, should permit the safe and ethical scaling of AI in health and social care. It is important for oversight of AI use to be integrated with broader health and social care oversight, so that AI-enabled health and social care is held to the same standards as other delivery methods. Continued support for programmes like the MAAS will help developers and adopters deliver the best possible impact on health and care and position the UK as a supportive environment for innovation. Other regulators with relevant cross-sector remits should also continue to be involved, including the EHRC and ICO.


Details on the current and future governance landscapes have been provided in other sections. It is the position of the MAAS partners that additional legal frameworks are not necessary for AI in health and social care, as the current framework and in-flight changes are sufficient at this time. Furthermore, additional legislative oversight risks a detrimental impact on innovation by creating further complexity and alignment issues. This would work against broader government plans to make the UK innovation friendly and would limit UK patients' and the public's access to medical products and services.


What lessons, if any, can the UK learn from other countries on AI governance? 


Medical device regulation has established systems in place to promote global alignment and information sharing between national regulatory bodies. Prior to Brexit, the UK interacted internationally through the European Commission; post-Brexit, the MHRA has become an independent full member of the International Medical Device Regulators Forum (IMDRF). The purpose of this forum is to accelerate international medical device regulatory convergence and to publish consensus guidance on regulatory requirements in order to facilitate access to safe products in a globalised market. The MHRA takes an active role in both the Software as a Medical Device and AI Medical Device Working Groups of the IMDRF.

Underpinning this global healthcare regulatory forum is an established system for generating technical standards (e.g. the International Organization for Standardization (ISO)). The UK makes significant contributions via its national standards body, the British Standards Institution (BSI). With respect to standards for medical devices, the BSI works closely with the MHRA and BEIS to spot opportunities for standards development and to align standards with UK legislation so that manufacturers have the tools to demonstrate conformity.

Additionally, the MHRA is a member of other multilateral international projects run by the World Health Organization (WHO) and has strong bilateral links with other regulators such as the US Food and Drug Administration (FDA) and Health Canada, capitalising on broad agreement to jointly publish Good Machine Learning Practice for Medical Device Development: Guiding Principles.


Other jurisdictions are exploring legislation covering AI across all sectors, such as the EU AI Act, or seeking to enshrine principles in law, as with the AI Bill of Rights proposed in the US. The UK is learning from these approaches, including the difficulties and concerns being raised. Most notably, there is the risk of duplication and additional burden on developers of medical device products, which would delay market access and increase the cost of innovative products. The capacity of the assessment bodies within the UK and EU systems is already strained, and additional legislative requirements risk further exacerbating these issues and leading to longer waiting times.


MAAS partners are aware of a number of initiatives and approaches to AI governance in other countries. We do not have a robust assessment of their effectiveness, but the Committee may wish to consider:







(November 2022)