Written Evidence Submitted by The Nutrition Society



The Nutrition Society is the largest learned society for nutrition in Europe, dedicated to its mission of ‘advancing the scientific study of nutrition and its application to the maintenance of human and animal health’. Given its diverse community with the independence and courage to challenge, question and advance the field of nutrition, the Society welcomes the launch of an inquiry into the governance of artificial intelligence (AI).


  1. How effective is current governance of AI in the UK?
    a. What are the strengths and weaknesses of current arrangements, including for research?

Answer: It is difficult to gauge how effective current arrangements are for governance of AI in the UK because i) there is no specific legislation regulating AI, ii) current arrangements involve “a patchwork of legal and regulatory requirements built for other purposes which now also capture uses of AI technologies” 1 and iii) there has been no formal evaluation of current arrangements. The regulatory framework proposed by the UK Government is designed to be “proportionate, light-touch and forward-looking”. This may help to achieve the objective of keeping pace with the speed of developments in AI-based technologies (see also answer to Q4 below).

In research, AI is used increasingly at all stages of the research cycle, from study design to publication, and across a very diverse range of disciplines. This is bringing many advantages in respect of speed and cost-effectiveness of research. The weaknesses of current arrangements for governance of AI in research are similar to those in other areas of business and society. However, the issue of transparency (see below) has even greater potency in scientific research, where evidence of robustness of findings and repeatability are crucial to making advances.

  2. What measures could make the use of AI more transparent and explainable to the public?

Answer: This is a key issue for the public, for businesses and for civil society because some uses of AI, in particular machine learning, make it hard or impossible to know the reasons why an algorithm reached a particular decision. This is exacerbated when the algorithms in question are proprietary. Consequently, there is uncertainty about who is responsible for any undesirable outcomes from such decisions. One possible solution would be to emulate the EU General Data Protection Regulation, which is intended to provide a “right to explanation” of the reasons for AI-based decisions2.

  3. How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
    a. Are current options for challenging the use of AI adequate and, if not, how can they be improved?

Answer: The UK Government’s proposals outlined in the policy paper “Establishing a pro-innovation approach to regulating AI” do not offer any specific guidance on how the public, businesses or other organisations can challenge the use of AI. With the decentralised model of governance oversight to be provided by existing regulators such as the ICO and MHRA, and the lack of new legislation (at least for the time being), it seems that any such challenges would need to employ existing pathways for making complaints. It remains to be seen whether these will be fit for purpose.

  4. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

Answer: The Government’s proposed pro-innovation framework for regulating AI is underpinned by a set of cross-sectoral principles including:

1. Ensure that AI is used safely

2. Ensure that AI is technically secure, and functions as designed

3. Make sure that AI is appropriately transparent and explainable

4. Embed considerations of fairness into AI

5. Define legal persons’ responsibility for AI governance

6. Clarify routes to redress or contestability

Appropriate implementation of these principles has considerable potential to ensure that AI is used for societal benefit, but whether that is achieved will depend on the activities of the key regulators. Unlike proposals in some other jurisdictions, the UK does not propose to have a central regulator for AI. Instead, the UK proposes that a number of existing regulators, including the Information Commissioner's Office (ICO), Competition and Markets Authority (CMA), Ofcom, Medicines and Healthcare products Regulatory Agency (MHRA), and Equality and Human Rights Commission (EHRC), will act as key regulators for AI in different sectors.

Having five (or more) regulators responsible for AI has the potential advantage of flexibility and responsiveness, allowing individual regulators to anticipate, and to respond to, AI-related developments and issues in their respective sectors. However, this approach carries significant risk because applications of AI may not “fit” neatly within the remit of any one regulator, so some areas of activity may lack due oversight. In addition, there is a significant risk that processes, guidelines etc. developed by any one regulator may become misaligned with those of other regulators. This may be exacerbated by the Government’s proposal that “regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance”.1 The proposed distributed model for AI governance may also disadvantage businesses, universities and other organisations that operate across multiple sectors and may face conflicting regulation. As with other areas of new technology, having guidelines and codes of practice for regulation of AI that are similar to (and mutually accepted by) those of our major trading partners will be essential to achieving the hoped-for benefits for society.

In short, it is far from clear that the advantages of the distributed model of (de-centralised approach to) governance of AI outweigh the risks.

  5. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
    a. Is more legislation or better guidance required?

Answer: Given i) the great novelty and pervasive reach of AI applications across business and society, ii) the lack of transparency about outcomes from some uses of AI and iii) the considerable potential for unintended consequences, or indeed harms, from use (or deliberate misuse) of AI, reliance on a patchwork of existing governance bodies seems very risky. The public, businesses and other organisations would be more reassured by much greater coordination of AI governance and by appropriate legal underpinning.

  6. What lessons, if any, can the UK learn from other countries on AI governance?

Answer: Arrangements for governance of AI are under development in several jurisdictions including those of the UK’s major trading partners. In most cases it is too early to know whether they offer lessons for the UK. However, it is clear already that the UK’s proposals for governance of AI differ significantly from those proposed by the EU3. Unlike the UK’s decentralised, light-touch approach, the EU plans to create a single regulator responsible for regulating AI across all sectors. In addition, this new regulator will have a new mandate and powers of enforcement. In the UK, there will be no a priori bans on any use of AI. In contrast, the proposed EU AI Act will prohibit certain AI practices (that are considered unacceptable in all circumstances) and identify high-risk AI systems that will be monitored closely.

As awareness of the uses, and especially the misuses, of AI becomes more widespread among the public, businesses and other organisations, the apparently more rigorous governance arrangements being developed by the EU may provide greater reassurance than those proposed in the UK.

(November 2022)


1. UK Government policy paper, “Establishing a pro-innovation approach to regulating AI”, updated 20 July 2022.

2. https://futureoflife.org/resource/ai-policy-challenges-and-recommendations/#Accountability (accessed 6 November 2022)

3. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence (accessed 6 November 2022)