WRITTEN EVIDENCE SUBMITTED BY WYSA

GAI0093

About us

WYSA is the leading UK AI-enabled mental health app, holding the highest ORCHA rating for a NICE ESF Tier 3a AI mental health app. It is one of only a small number of AI technologies designed to support mental health needs.

 

Summary

        We have experienced recent improvements to the governance of AI digital healthcare technology. The introduction of accreditations such as the Digital Technology Assessment Criteria (DTAC)[1] and the NICE Evidence Standards Framework for Digital Health Technologies (ESF)[2] has brought welcome clarity to the evidence of efficacy and safety of AI in mental health.

        Governance and ethics are vital areas for mental health technologies. Work is urgently needed to address important issues, including the grey area that exists around AI mental health apps intended for “well-being” purposes.

        It is still early days in terms of standards creation for AI mental health technologies. Government, regulatory bodies, and NHS policymakers and operational leads need to continue to engage in open discussion with AI health-tech companies to design appropriately evidenced standards that will build public trust and accelerate the adoption of AI in mental health services.

 

  1. How effective is current governance of AI in the UK?

As a healthcare and software service provider, our experience of AI governance falls within two areas:

        Governance applicable to mobile healthcare apps (e.g. the Medicines and Healthcare products Regulatory Agency (MHRA))[3]

        Governance applicable to the delivery of mental health services (e.g. the NICE Evidence Standards Framework)

 

We have actively engaged in current governance processes to ensure our AI app is as compliant with current law and best practice as possible. We have seen recent improvements in the governance process for healthcare apps through the addition of accreditations such as the NHS DTAC and the NICE ESF. These have helped to clarify key areas such as the evidence of efficacy for our AI app and other available products. Evidencing efficacy, whilst adhering to the most rigorous safety standards available, is key to building trust with the organisations that commission mobile AI products and with the people who use services. These standards will go some way towards enabling continued uptake of AI solutions and realising the benefits and impact that Wysa’s AI can have on people’s wellbeing. It is positive to see further robust and sector-specific assessment planned, and to be consulted on the development of assessment criteria for digitally enabled therapies (DET)[4]. For us, this shows a growing willingness of governing bodies to collaborate with AI providers and shape the regulatory discussion, ensuring the safest and most effective application of technology possible.

 

One of the weaknesses we have experienced within current arrangements is the use of jargon and the broad application of definitions in governance. This has been the case with medical device licensing, where the current definitions of what can be considered ‘treatment’ and what can be considered ‘self-help’ are not always clear-cut. Patients under the care of NHS services often use a combination of self-help resources alongside prescribable treatment; it is important that governance is clear about the way people use and deliver services. The current arrangement means that many AI app companies become confused by the regulation and inadvertently go unregulated. In addition, some technologies seem able to avoid the governance process altogether, which can occur when the provision is via a website rather than an app. This definitional grey area creates a regulatory burden of supplying and submitting evidence for small product changes, because the criteria required for compliance in some areas are unclear. It also undermines the confidence and consistency of understanding among the decision makers and boards responsible for implementing AI solutions, resulting in slower adoption. For current governance to be effective, resources need to be in place to give providers clarity and confidence that they are compliant with current legislation and best practice, and to make the interpretation of these requirements more consistent.

 

Research into the use of AI in the mental health care pathway is relatively new, with WYSA recently receiving the first NIHR AI Award[5] to explore this. AI is often a tool for delivering pre-existing, well-evidenced mental health interventions such as computerised cognitive behavioural therapy (cCBT), so research often needs to focus on the safety and efficacy of the AI itself as a delivery mechanism, rather than on the efficacy of the intervention.

Our experience is that designing appropriate and timely evaluation studies has been beset by long delays caused by governance and by difficulty in acquiring Clinical Investigation approvals. This is because much of the guidance, and many of the worked examples, follow those of physical health interventions, which often have a much clearer treatment outcome. Research delays have a knock-on effect on the speed of acquiring MHRA/UKCA approvals to ensure products are meeting the right standards. This makes compliance resource-intensive for AI products that want to ensure high standards, and encourages other products to label themselves as ‘well-being’ technology, as opposed to ‘treatment’ technology, in order to avoid the process altogether.

 

At Wysa we are playing an active role in seeking joint working with regulatory bodies to update governance frameworks to include the use of AI in our adult clinical work. We are engaging directly with the national NHS England Improving Access to Psychological Therapies (IAPT) team to examine current guidance and best-practice manuals for IAPT, and working with legislators and governance bodies to update content via open and transparent working groups.

 

  2. What measures could make the use of AI more transparent and explainable to the public?

 

Publicly promote a clear message that Government and public bodies are providing leadership on ensuring ethical AI use within mental health, and are investing in the skills required across all sectors to deliver its ambitions, increase public confidence and ultimately improve adoption of AI in the sector[6]. This also involves leading a positive narrative around AI, what it means and what it does, especially in healthcare settings where people can often worry that AI is replacing, rather than supporting, clinical care.

 

Explain, in a way that is accessible to the public, what regulatory frameworks are trying to achieve, the measures and standards they use to assess AI, and which products meet those standards (and what functions those products provide). Ensure that people are able to understand and make informed choices between products that have equal functionality.

 

Provide forums for open discussion: further working groups hosted by legislators and governing bodies, and including health systems, providers, patients and clinicians, would go a long way toward improving dialogue and understanding in this area.

 

  3. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

From our perspective, it is more important to focus on how bodies provide regulatory oversight and how they align with one another. This can be strengthened by more stringent evidence requirements for the use of AI within a medical setting. The NICE ESF is a good starting point for the evidence requirements for AI within our field, alongside the UKCA and MHRA medical device licensing tools. However, greater scope needs to be given within the documentation and regulation processes to include alternative uses of AI in healthcare, including patient-facing mental health.

 

  4. What lessons, if any, can the UK learn from other countries on AI governance?

 

For AI-enabled digital mental health technologies it is very early days in terms of standards creation and regulation globally, so it is difficult to discern what lessons can be learned just yet; however, there has been some progress (and there are some pitfalls) that can be highlighted. The sensitivity of information collected via e-Mental Health technologies presents ethical, data protection and privacy challenges, with sector regulation remaining unclear globally. Within mental health and wellness specifically, advances are being made in jurisdictions including Australia and Canada, where novel e-Mental Health standards and application assessment frameworks may point the way to safer, more culturally equitable, evidence-informed innovation. For example, the Australian Commission on Safety and Quality in Health Care has published the National Safety and Quality Digital Mental Health (NSQDMH) Standards (ISBN: 978-1-925948-74-5), a quality assurance mechanism that includes a Clinical and Technical Governance Standard, a Partnering with Consumers Standard, and a Model of Care Standard.[7] However, that documentation makes no specific mention of AI-enabled technologies.

 

The Computers, Privacy and Data Protection (CPDP) conference has not hosted panels on AI-enabled technologies in mental health and wellbeing in previous years.[8] However, a submission for next year’s conference has been made by WYSA, the AI-enabled mental health and wellbeing app with headquarters in India, the USA and London, UK,[9] working in partnership with the Australian Commission on Safety and Quality in Health Care and the Mental Health Commission of Canada (Access to Quality Mental Health Services). The aim is to bring industry and standards bodies together to ensure that products are built and delivered safely, to ensure that over-regulation does not become a barrier to innovation, and to discuss what engagement strategies are required to bring industry on board.

 

AI ethics and governance is an extremely important area for mental health technologies, and much more work is urgently needed in this space. Important points that need addressing globally within digital mental health include the grey area that exists for mobile AI-based digital mental health systems intended for “well-being” purposes. No clear governance has been established here yet; when it does arrive, which framework should be adopted to provide ‘global credence’? In the psychiatric illness and medical device world, where products are regulated as Software as a Medical Device (SaMD), different countries and standards bodies are developing multiple guidelines and standards for AI-based systems (e.g., the US FDA’s AI guidance for SaMD; the upcoming BSI 30440, a UK standard for AI validation for the MHRA, currently out for public comment; and ISO 23053, a framework for AI systems using machine learning). These standards and guidelines need to be harmonised and aligned across countries; otherwise, the operational burden is enormous, and providers face the challenge of knowing each of them and compiling a superset of everything that needs to be done.

 

Looking beyond mental health, the countries arguably at the forefront of AI ethics and governance include Switzerland (Geneva hosts the AI for Good Global Summit, and the country offers high investment security for research), while other European countries (the Netherlands, Sweden, Finland, Germany and Ireland) dominate the rest of the Index’s top ten.[10] The 2021 Government AI Readiness Index, which ranked 160 countries by how prepared their governments are to use AI in public services, placed the USA first, Singapore second and the UK third.[11] To what extent this can be understood within the specific context of mental health is unclear.

For the governance of AI-enabled mental health technologies, it will be important to strive towards standards that are as globally minded as possible and can then be adapted nationally. Any divergence in standards and regulations will make it very challenging for digital mental health companies to proceed efficiently, safely, ethically and legally. There is already divergence beyond the mental health space,[12] and there are many post-GDPR lessons in AI governance that digital mental health will have to learn from (e.g., [13]).

An additional challenge related to AI governance in the UK, albeit not directly, is highlighted by a study from IBM, which suggests that “far from being in a leadership pack dominated by the US and China, the UK is falling behind its peers in actively deploying AI”. The main reason given for the UK’s poor performance is a lack of AI skills and associated business knowledge: a survey of over 7,500 IT decision-makers, conducted by Morning Consult in April for the IBM Global AI Adoption Index, found 38% of UK respondents citing this skills gap as the main inhibitor to AI adoption, compared with an average of 28% across Europe and just 24% in France. Although there is now a stronger trend towards upskilling, with three in five firms (60%) reporting that employees had undertaken internal or external AI training in the last year, only a quarter of those firms offer training on ethics and AI.[14]

 


[1] https://transform.england.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/

[2] https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies

[3] https://www.gov.uk/guidance/regulating-medical-devices-in-the-uk

[4] https://www.abhi.org.uk/resource-hub/file/12878

[5] https://www.nihr.ac.uk/news/new-wave-of-ai-technologies-in-36-million-funding-boost/27867

[6] https://diginomica.com/ibm-poor-ai-skills-undermining-uk-leadership-ambitions#:~:text=The%20UK%20is%20ranked%20third,in%20FinTech%20and%20other%20areas.

[7] https://www.safetyandquality.gov.au/standards/national-safety-and-quality-digital-mental-health-standards

[8] https://www.cpdpconferences.net/CPDP2022.pdf

[9] https://techcrunch.com/2021/05/21/mental-health-app-wysa-raises-5-5m-for-emotionally-intelligent-ai/?guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAM4L0-Vxv3PO_qycGJ6niP79bz79VFzOYuHXJlJIzlht4xjR0bgHoNJLbDqD22om_ARh2gZ1nEIKWEvpPapyjiFmRvomYTA2MNOSZjQ4DbuQ5dLCWfUpW6CKcy1PMsirsFXV5OsunkUSUsAawyrZEnnC0_GC9RuNBkk0u3mGyxHD&_guc_consent_skip=1669381965

 

[10] https://www.investmentmonitor.ai/ai/ai-index-us-china-artificial-intelligence

[11] https://www.oxfordinsights.com/government-ai-readiness-index2021

[12] https://www.oii.ox.ac.uk/news-events/news/the-eu-and-the-us-two-different-approaches-to-ai-governance/

[13] https://cadmus.eui.eu/handle/1814/64146

[14] https://diginomica.com/ibm-poor-ai-skills-undermining-uk-leadership-ambitions#:~:text=The%20UK%20is%20ranked%20third,in%20FinTech%20and%20other%20areas

 

 

(November 2022)