Written Evidence Submitted by medConfidential

(GAI0011)

 

  1. medConfidential has seen many examples across the NHS of decision-making around AI. This has involved very little active malice, but a great deal of well-meaning incompetence, as well as some active greed,[1] and a lot of ego.[2] In the main, it has been a case of good people, believing they are doing a good thing, who – as some technical types are wont to do – don’t understand that things may be a little more complicated than they initially think,[3] and that some existing rules do apply.

 

  2. As with many things, the problems largely boil down to incentives – and that often means money.

 

  3. The approach we have therefore come to recommend is to focus on procurement: anyone offering an AI system for purchase by the public sector or the NHS should be required to state and evidence the datasets upon which they trained their system; the approvals they had for that training, from data providers, ethics boards, etc.; and the mitigations they have implemented (if any) with regard to bias, to ensure good governance, etc. A simple sketch of what such a disclosure might contain is given below.
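By way of illustration only, and not as a prescription of any particular schema, the following minimal Python sketch shows one hypothetical shape such a supplier disclosure could take, together with a simple check that a procurement process could use to flag lack of disclosure. All field and function names here are our own illustrative assumptions, not an existing NHS or procurement standard.

from dataclasses import dataclass

# Illustrative only: a hypothetical record of the disclosures a supplier
# might be required to make at procurement time. Field names are invented
# for this sketch, not drawn from any existing schema.
@dataclass
class TrainingDisclosure:
    datasets: list          # datasets on which the system was trained
    approvals: list         # approvals from data providers, ethics boards, etc.
    bias_mitigations: list  # mitigations implemented (if any) with regard to bias
    governance_notes: str   # how good governance was ensured

def missing_disclosures(d: TrainingDisclosure) -> list:
    """Return the names of any required disclosure fields left empty,
    so that a procurement process can penalise lack of disclosure."""
    required = {
        "datasets": d.datasets,
        "approvals": d.approvals,
        "bias_mitigations": d.bias_mitigations,
    }
    return [name for name, value in required.items() if not value]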

 

  4. These statements, as part of a procurement process that penalises lack of disclosure, will encourage legitimate organisations to respect best practices – noting that such practices will change over time – and should discourage companies from taking shortcuts, or promising in press releases what they cannot then deliver under contract.[4]

 

  5. Almost all of the problems we have seen with AI data projects have been where people’s strongest incentives were to be first, rather than to ‘tick boxes’ – with the belief that being first would bring them glory.

 

  6. Google DeepMind and the Royal Free together got themselves into a place where their project team seemed to believe there were no rules around creating a medical device for active use in a hospital.[5] (There are.) Eventually, Google DeepMind closed that business division as it didn’t make money. It is still unclear whether they deleted all the data.

 

  7. Another instructive example is the company Orth.AI – since renamed Naitive Technologies – whose Directors,[6] in 2018, claimed to have a large cache of NHS data.[7] A Freedom of Information request to the relevant hospital showed it had no data agreement in place.[8] The company appeared to cease operating shortly after that FoI request went in, but it was subsequently renamed, and has appointed new Directors. Perhaps more disturbingly, NHS Digital’s 2022 audit of the Royal National Orthopaedic Hospital NHS Trust [9] shows that the data handling practices which permitted a doctor to copy gigabytes of patients’ data to a hard drive and walk out of the hospital with it remain sufficiently unchanged to make this still possible.

 

  8. The safeguard against such companies and activities is not “more governance” or “ethics”. Rather, there must be no way that such companies will be able to make any money if they have taken shortcuts. If companies are allowed to (continue to) believe that it is more profitable to be first than it is to be ethical and to follow proper process, then there will be an endless parade of disasters – because that is what the money will incentivise.

 

  9. It is worth noting that as soon as Orth.AI’s funders – which at the time appeared to be Microsoft UK [10] – realised that the company’s IP had been polluted by what amounted to digital theft, they walked away.

 

  10. Whatever legal frameworks, transparency / explanation, scrutiny and governance are deemed necessary for AI, the approach to data use that procurement rules should incentivise must be: shortcuts aren’t profitable.

 

  11. We have written previously on what public bodies (and indeed everyone) need to know when procuring AI and Machine Learning products – the ‘ethical shorthand’ for which could be thought of as the AI equivalent of “This was not tested on animals” for food or cosmetics, i.e. “No data subject was harmed in the making of this AI”.

 

  12. In an attempt to show ‘what good might look like’ or, at a bare minimum, what adequate might look like, in 2020 medConfidential produced a mockup of what we call an ‘Analysis and Inputs’ report [11] – in essence, a certificate showing the data / datasets and procedures used to train a health ML model in an NHS data TRE. A simplified illustration of the kind of information such a certificate might record follows below.
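To make the idea concrete, the minimal Python sketch below shows the kind of information such a certificate might record for a model trained inside a TRE. It is a simplified assumption of the shape of such a record, not a reproduction of the 2020 mockup; all names and values are hypothetical.

import json
from datetime import date

# Illustrative only: a minimal, hypothetical 'Analysis and Inputs'-style
# record for a model trained inside an NHS data TRE. All names and values
# below are invented for this sketch.
def analysis_and_inputs_report(model_name, datasets, approvals, procedures):
    return json.dumps({
        "model": model_name,
        "trained_on": [
            {"dataset": ds, "approval": approvals.get(ds, "NONE RECORDED")}
            for ds in datasets
        ],
        "procedures": procedures,  # e.g. de-identification, validation steps
        "generated": date.today().isoformat(),
    }, indent=2)

# Hypothetical example: a certificate for an imaginary imaging model.
print(analysis_and_inputs_report(
    model_name="example-fracture-detector",
    datasets=["Trust imaging extract 2019-2021"],
    approvals={"Trust imaging extract 2019-2021": "Data sharing agreement (illustrative reference)"},
    procedures=["de-identification within the TRE", "5-fold cross-validation"],
))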

 

  13. Principles are fine and necessary, but in practice good governance requires evidence – and evidence can and must be written down.

 

(November 2022)


[1] See the Orth.AI example in paragraph 7, below.

[2] e.g. Royal Free Hospital and Google DeepMind.

[3] cf. https://www.phc.ox.ac.uk/publications/512929

[4] We look forward to reading the submissions the Committee receives about the “successes” of AI / ML modelling in the pandemic, given claims made by some at the time about things ‘only big tech could do’. See also: https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

[5] We provide a detailed timeline of the Google DeepMind / RFH debacle here: https://medconfidential.org/whats-the-story/health-data-ai-and-google-deepmind/

[6] https://find-and-update.company-information.service.gov.uk/company/11145951/officers

[7] From a hospital at which several of Orth.AI’s Directors worked, one of whom gave the job title “NHS Trust Chief Executive”.

[8] https://www.whatdotheyknow.com/request/ai_agreements_with_orthai

[9] https://digital.nhs.uk/services/data-access-request-service-dars/data-sharing-audits/2022/post-audit-review-rnoht

[10] The staff involved removed Microsoft UK references from their LinkedIn pages shortly after the company ceased operating, but at least one of them forgot to edit their Twitter bio...

[11] https://medconfidential.org/2020/analysis-and-inputs-reporting/