Microsoft – written evidence (LLM0087)


House of Lords Communications and Digital Select Committee inquiry: Large language models


The advance in AI may be the most important technological development of our lifetimes. From the UN Sustainable Development Goals to backlogs in public services, it can help solve some of the most pressing challenges in public policy. In the NHS, AI is already helping to tackle backlogs by cutting waiting times for cancer patients and preparation time for clinicians by up to 90%. In classrooms, it can reduce the time teachers need to spend on marking homework. In the race to reach net zero emissions, AI will make a crucial difference: AI and high-performance computing are already being used in wind turbine manufacturing to generate more energy from turbines, where a single percentage point increase in efficiency can mean billions of dollars in savings and greater renewable power output. AI can be a partner in everyday life, boosting productivity in the same way that mobile phones boosted communication. In shops, it can track and optimise stock levels – in one leading UK supermarket, AI tools have helped reduce in-store shelf gaps by almost a third. AI is helping specialists to discover and develop new medicines, and has the potential to help us prepare for future emergencies, such as pandemics.


In order to realise the full potential of the benefits and opportunities presented by AI, we must also identify and find effective ways to collectively mitigate the possible risks. Governments around the world are rightly taking steps to consider and address this. The question is no longer ‘when’ or ‘if’ but ‘how’ AI will affect citizens, businesses and consumers, and how to effectively regulate the technology without giving a false sense of security. This starts with a foundation built on trust in our approach to responsible AI, as well as the implications of AI for privacy, security, and digital safety. It quickly extends more broadly, encompassing the impact of AI on inclusive growth and our natural environment. Almost every technological advancement – from the typewriter to the mobile phone and the internet – has led to changes in people’s lifestyles as well as the wider economy. To realise the potential of AI, we believe that governments should take two steps.



Capabilities and trends


1.              How will large language models develop over the next three years?

The coming years will likely see continued technological developments alongside ongoing adoption of AI across society to advance productivity and address major challenges. Much of this will be underpinned by continued investment in and utilisation of large models that can provide a platform for developers to build AI systems, including those with generative capabilities, for use in a wide variety of scenarios. Some key trends in the medium-term development of large models may include:





  1. Given the inherent uncertainty of forecasts in this area, what can be done to improve understanding of and confidence in future trajectories?


Alongside the rapid development of LLM capabilities over the last few years has been a growing understanding of the risks AI can pose and the need to take steps so that AI is developed and deployed responsibly. Leading AI labs, including Microsoft, recently launched the Frontier Model Forum[4] with a view to accelerating research into how to develop and use LLMs responsibly, alongside developing and implementing related best practice. Further investment in this type of research will be an important part of deepening understanding of LLMs and their opportunities and risks, and the involvement of academia and civil society will be a critical element of success. Microsoft is supportive of greater investment in publicly available resources for researchers so that they can study large models more closely. To this end, Microsoft has launched the Accelerating Foundation Models Research program,[5] providing resources and partnership with researchers so that they can study AI models, including Microsoft’s, and deepen an understanding of how to build and use them for the benefit of society. Microsoft is also supportive of government investment in publicly available resources for researchers, for example the National AI Research Resource (NAIRR) that has been proposed in the US.[6] We are in discussions in the US about how we can support such an initiative and would also welcome and support an extension of the NAIRR to accommodate access by academic institutions in allied nations and blocs such as the United Kingdom, the European Union and Japan. A multilateral AI research resource would accelerate existing efforts to establish global norms and interoperable approaches to risk mitigation, including those underway in the U.S.-EU Trade and Technology Council and the G7.
Greater information sharing about the capabilities and limitations of AI systems and transparency around how and where high risk AI systems are being used will also be an important part of improving understanding of AI and its impacts.


2.              What are the greatest opportunities and risks over the next three years?


Integrating large language models into applications will help maximise the potential of AI to solve society’s biggest challenges. The AI advances we are seeing now represent a new industrial revolution that will offer huge opportunity for everyone across the UK. We have seen AI help save individuals’ eyesight, make progress on new cures for cancer, generate new insights about drugs and proteins, and provide predictions to protect people from hazardous weather. The question is not ‘when’ or ‘if’, but ‘how’ AI will affect citizens, businesses, and consumers—and how to ensure this happens safely.


In the UK specifically, AI is helping to tackle NHS backlogs by cutting waiting times for cancer patients and preparation time for clinicians. It is also improving productivity of small and large businesses up and down the country, with over 430,000 companies in the UK today having adopted at least one form of AI technology. This opportunity is in its early stages, and the UK has many ingredients of a global leadership role, particularly through its excellent research capabilities.


However, we acknowledge that not all actors are well-intentioned or well-equipped to address the challenges that highly capable models present. Some actors will use AI as a weapon, not a tool, and others will underestimate the safety challenges that lie ahead. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs. Regulatory frameworks will need to guard against the intentional misuse of capable models to inflict harm, for example by attempting to identify and exploit cyber vulnerabilities at scale or to develop biohazardous materials, as well as the risk of harm by accident, for example if AI is used to manage large-scale critical infrastructure without appropriate guardrails.


As multimodal models are used more broadly, there is also a particular risk around the use of AI to generate compelling audiovisual content, including content that appears to be real but did not in fact take place, often referred to as “deepfakes”. Requiring companies to deploy state-of-the-art provenance tools to help the public identify AI-generated audiovisual content should be part of making progress here. At our annual developer conference, Build, Microsoft announced new media provenance capabilities. Using the C2PA specification from the Coalition for Content Provenance and Authenticity, we will mark and sign AI-generated images from Microsoft Designer and Bing Image Creator with metadata about their origin, enabling users to verify that images from those services were generated by AI. This marking will happen automatically, as part of the image generation process. We expect that the engineering of this system will be complete by the end of 2023. Microsoft is also a co-founder of the Coalition for Content Provenance and Authenticity, which addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history of media content.
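To make the provenance idea concrete, the sketch below illustrates the general pattern behind standards such as C2PA: a generator attaches metadata (origin, content hash) to a piece of content and signs it, so that a verifier can later confirm both who produced the content and that it has not been altered. This is a deliberately simplified illustration using a shared HMAC secret; the real C2PA specification uses X.509 certificates and embeds signed assertion manifests in the media file itself, and the function and key names here are hypothetical.

```python
# Simplified illustration of signed provenance metadata. NOT the real C2PA
# manifest format, which uses asymmetric certificates and embedded manifests.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical key; C2PA uses certificate-based signing


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed manifest recording who generated the content."""
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())


image = b"\x89PNG...fake image bytes"  # stand-in for generated image data
m = attach_provenance(image, "ExampleImageCreator")
assert verify_provenance(image, m)             # untampered content verifies
assert not verify_provenance(image + b"x", m)  # altered content fails verification
```

The key property, shared with C2PA, is that tampering with either the content or the metadata invalidates verification, which is what allows the public to trust a “generated by AI” label.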


Large-scale AI models require access to data at scale in order to function correctly. Limiting the ability to use publicly available and otherwise legally accessed data for training AI models will lead to poorly performing AI, which is potentially unsafe, unethical and biased. It is therefore important that existing exceptions in copyright law clearly permit the training of AI systems, and that intellectual property laws do not develop to prevent text and data mining. Performing text and data mining is not a copyright infringement, and performing text and data mining on publicly available and legally accessed works should not require a licence. If licensing were required to train AI models on legally accessed data, this would be prohibitive and could shut down development of large-scale AI models in the UK.


  1. How should we think about risk in this context?


With the opportunity and the potential risks at hand, we believe we must share what we have learned and help all organizations apply responsible AI practices to their work. That is precisely what we at Microsoft are doing, and we hope to lead by example. Since 2017, we have been playing a leading role in developing principles for responsible AI. We are committed to creating responsible AI from the drawing board. Our work is guided by six core principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are focused on how the AI systems of today can help people solve real-world challenges – acting as a co-pilot throughout everyday life.


Part of seizing this opportunity involves establishing a regulatory framework that places responsibility at its core. Given the broad and varied nature of how AI can be used, regulation should adopt a risk-based approach, focused on mitigating the risks posed by systems used in high-risk scenarios, including for consequential decisions about an individual’s access to essential goods or services or decisions taken in a criminal justice context. AI systems that pose a risk of physical or psychological harm, for example AI used to control large-scale critical infrastructure, or that implicate an individual’s human rights, should also be classed as high-risk and subject to appropriate safeguards, so that the risks of the system are assessed and mitigated and the system remains under human control. Our recent blueprint for governing AI proposes that an AI regulatory architecture should mirror the AI technology stack. At the top of this stack is the application layer, where information and services are delivered to users. Attention is also required at the other layers of the technology stack: for the most powerful pre-trained AI models, and for the datacentre infrastructure that makes them possible.


As part of developing such a framework, it will be important to reflect the way in which AI risk is heavily shaped by use case and deployment context, and the fact that risks, and effective mitigations, at the model and application layers will be different. For most systems, potential risks will only become clear, and only be capable of being mitigated, once a model has been integrated into an application for a specific use case. A system that sees an LLM integrated into an application to analyse the sentiment of customer reviews on a website, for example, will pose different risks from a system in which the same LLM is used to assess job applications as part of shortlisting candidates. The risks of the latter system can only be effectively mitigated by the application developer and the deployer of the system, given the way in which they understand the use case and deployment environment. While the model developer should develop the model responsibly and share information about its capabilities and limitations, they will not be in a position to mitigate the risks of the many different downstream use cases into which they will have little visibility.


Domestic regulation


3.              How adequately does the AI White Paper (alongside other Government policy) deal with large language models? Is a tailored regulatory approach needed?


Microsoft welcomed the AI White Paper, which marked the first steps in detailing the UK’s regulatory approach in the era of AI. As the UK Government outlined, advances in AI provide the UK with the opportunity to take advantage of the many ways this technology can improve people’s lives. However, as the technology moves forward, it is equally important that we focus on developing a regulatory framework with accountability at its core that addresses the risks the technology can pose and is agile enough to iterate.


With reference to the three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer, we would argue that the UK’s AI White Paper is most focussed at the applications layer, where information and services are delivered to users and the safety and rights of people will be most impacted. We agree with the paper’s pragmatic focus on use cases and a risk-based approach to regulation where this is feasible at the applications layer. However, as outlined above, we also recommend the creation of a licensing regime for 1) developers of highly capable frontier models and 2) for operators of AI datacentre infrastructure on which these models are built.


One of the initial challenges will be to define the appropriate threshold for what constitutes a highly capable frontier model. The amount of compute used to train a model is one tractable proxy for model capabilities and is likely to be a sensible starting point for such definitions, but we know today that it is imperfect in several ways, especially as algorithmic improvements lead to compute efficiencies or to new architectures altogether. A more durable but unquestionably more complex proposition would be to define the capabilities that are indicative of high ability in areas that are consequential to safety and security, or that represent new breakthroughs that we need to better understand before proceeding further. Further research and discussion are needed to set such a capability-based threshold, and early efforts to define such capabilities must continue apace.
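As an illustration of why training compute is a tractable proxy, a widely used rule of thumb estimates the floating-point operations needed to train a dense transformer as roughly 6 × N × D, where N is the parameter count and D the number of training tokens. The figures below are hypothetical and serve only to show how a compute-based threshold could be evaluated in practice:

```python
# Illustrative sketch: the common "6ND" approximation for training compute.
# Model sizes and token counts below are hypothetical examples, not real models.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute (FLOPs) for a dense transformer: ~6*N*D."""
    return 6.0 * n_params * n_tokens


# A hypothetical 70-billion-parameter model trained on 1.4 trillion tokens:
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e} FLOPs")  # 5.88e+23

# A hypothetical regulatory threshold (e.g. 1e26 FLOPs) could then be applied:
THRESHOLD_FLOPS = 1e26  # hypothetical value for illustration
print("frontier" if flops >= THRESHOLD_FLOPS else "below threshold")
```

The same arithmetic also shows the proxy’s weakness noted above: an algorithmic improvement that reaches equal capability with fewer parameters or tokens would fall below any fixed FLOP threshold.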


We also encourage the government to define the requirements that must be met to obtain a license to develop or deploy a highly capable AI model. First and foremost, a licensing regime for highly capable AI models must ensure that safety and security objectives are achieved. Second, it must establish a framework for close coordination and information flows between licensees and their regulator, to ensure that developments material to the achievement of safety and security objectives are shared and acted on in a timely fashion. Third, it must provide a footing for international cooperation between countries with shared safety and security goals, as domestic initiatives alone will not be sufficient to secure the beneficial uses of highly capable AI models and guard against their misuse.


To achieve safety and security objectives, we envision licensing requirements such as advance notification of large training runs, comprehensive risk assessments focused on identifying dangerous or breakthrough capabilities, extensive pre-release testing by internal and external experts, and multiple checkpoints along the way. Deployments of models will need to be controlled based on the assessed level of risk and evaluations of how well-placed users, regulators, and other stakeholders are to manage risks. Ongoing monitoring post-release will be essential to ensuring that guardrails are functioning as intended and that deployed models remain under human control at all times.


In practice, we believe that the effective enforcement of such a regime will require us to go one layer deeper in the tech stack to the AI datacenters on which highly capable AI models are developed and deployed. Much like the regulatory model for telecommunications network operators and critical infrastructure providers, we see a role for licensing providers of AI datacenters to ensure that they play their role responsibly and effectively to ensure the safe and secure development and deployment of highly capable AI models.


We would submit that in order to obtain a license, an AI datacenter operator would need to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture and potentially export control compliance. As discussed below, the AI infrastructure operator will have a critical role and obligation in applying safety protocols and ensuring that effective AI safety brakes are in place for AI systems that manage or control critical infrastructure.


Lastly, we believe there is merit for the Government to explore new measures for systems designed to manage or control critical infrastructure, given the potential risks these systems could pose if they were not to perform securely and robustly. We suggest placing requirements on organisations using AI to control the operation of critical infrastructure to ensure that effective AI safety brakes are in place. (See question 5c for more detail).





a.              What are the implications of open-source models proliferating?


While focusing regulatory scrutiny on the frontier of AI capabilities, we also support open innovation in the less capable ecosystem. Microsoft has a longstanding commitment to open source software. As Satya Nadella said in July 2023, “Microsoft loves open source. … [W]e are one of the largest contributors to open source. When it comes to AI, it’s no different.” Testament to that commitment, Microsoft subsidiary GitHub hosts open source software projects from 100+ million developers around the world, including nearly 1.8 million AI projects and 11+ million repositories of open data. We have also partnered with Meta to expand Microsoft’s open model ecosystem by hosting the Llama 2 family of large language models.


The proliferation of open source models resembles the proliferation of open source software over the past thirty years. Today open source is ubiquitous: it is in nearly all software (96%) and generally makes up the majority of components in a given software stack (76%). By openly releasing AI components, including models, developers enable others to build on their work. Open-source development can increase the number and diversity of developers by giving anyone with an internet connection the ability to pick up, learn from, and modify AI models. This expanded talent pool can support AI development in the UK and facilitate government oversight of AI products and companies, and its members can go on to create their own innovative, competing AI start-ups. Openly available AI models support transparency, as developers are able to scrutinize model weights directly and identify security and other vulnerabilities in order to repair and improve them; documentation best practices for AI systems and models have been pioneered by the open source community. The environmental sustainability of running trained models can be enhanced via open source innovations that reduce the compute required to run inference. Ultimately, this supports more people to build AI applications. It may commodify some capabilities and focus attention on delivering competition to maximize real-world benefits from AI applications. This in a sense democratizes the technology, such that it is not in the hands of only a select few.


However, the proliferation of open-source models also poses risks. Direct access to models can enable bad actors to modify them and use them for any purpose. It is important to note, however, that dual-use risks have been present throughout the history of open source software. Particular attention is warranted at the frontier of AI capabilities, where the risks may be novel. Government efforts to increase capacity to understand frontier AI model capabilities and their potential for novel misuse are welcome steps. As described above, a licensing regime governing the development and deployment of frontier AI models could provide welcome guidance on prerequisite assessments and the limited circumstances in which public release of such models may be permitted.


Aside from the most highly capable models, we see an important role for regulation in governing the malicious use of AI systems. When considering new rules, it is important to be clear-eyed about the risks and opportunities. In many cases, existing law already outlaws criminal activities that may be informed by AI systems: whether or not instructions to make weapons may be software- or AI-generated, obtaining controlled substances and seeking to use them are activities well-established as illegal. Policymakers did not ban email because of spam, and widespread encryption has catalysed a global internet-based economy.


4.              Do the UK’s regulators have sufficient expertise and resources to respond to large language models? If not, what should be done to address this?


We welcome the UK Government’s proposal to empower sectoral regulators to apply existing laws and advance a framework for responsible development and use of AI applications that is effective and coherent across sectors. Such application-level regulation is critical to ensuring that people have protection under the law and the context-specific nature of AI risks is properly accounted for.


However, it is important that the regulatory regime has a clear and sufficiently robust mandate for regulators. An essential part of this will be a framework to create consistency across regulators as they apply the principles, to drive responsible practice. Examples could include: (1) consistent ways to define AI, foundation models and regulated actors, and to identify high-risk systems (e.g., identifying when AI has a consequential impact on an individual’s access to essential services, legal status, or life opportunities; presents the risk of physical or psychological injury; or restricts, infringes upon, or undermines the ability to realise an individual’s human rights); (2) clarity on how and in what circumstances a regulator can request additional details about a system when there is a suspicion of harm; or (3) the role and authorities of the central risk function.
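One way to see what consistent identification of high-risk systems could mean in practice is to express the criteria above as a simple triage rule. The sketch below is purely illustrative and hypothetical; the field names and logic encode the criteria listed in point (1), not any actual regulatory test:

```python
# Hypothetical sketch: encoding the high-risk criteria above as a triage rule.
# Field names and logic are illustrative only, not an actual regulatory test.
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    affects_essential_services: bool = False
    affects_legal_status_or_life_opportunities: bool = False
    risk_of_physical_or_psychological_injury: bool = False
    implicates_human_rights: bool = False


def is_high_risk(profile: AISystemProfile) -> bool:
    """A system is high-risk if it meets any one of the listed criteria."""
    return any([
        profile.affects_essential_services,
        profile.affects_legal_status_or_life_opportunities,
        profile.risk_of_physical_or_psychological_injury,
        profile.implicates_human_rights,
    ])


# A sentiment-analysis tool for product reviews meets none of the criteria:
assert not is_high_risk(AISystemProfile())
# A CV-screening tool affects life opportunities, so it is classed high-risk:
assert is_high_risk(AISystemProfile(affects_legal_status_or_life_opportunities=True))
```

The value of a shared rule of this kind is that every sectoral regulator applying it would reach the same classification for the same system, which is precisely the consistency gap the framework needs to close.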


We would like to see a team responsible for overseeing the central functions of the future AI regime empowered to drive alignment across different regulators on requirements for fundamental processes, such as risk assessment and mitigation. A central coordinating infrastructure is critical to monitoring the overall effectiveness of the framework and potential systemic risks—whether delivered by a central government team or a more independent body, such as the Digital Regulation Cooperation Forum.


An important part of holding the developers and deployers of AI systems accountable will be building the capacity of lawmakers and regulators so that they have the expertise needed to understand and address the impacts of AI, including how best to apply existing laws and regulations in their sector. We welcome the approach set out in the Government’s White Paper to establish a pool of expertise to address both operational and technical capability gaps. However, regulators will also need additional guidance and resourcing to fulfil their duties under the framework. We believe the Government should resource and support existing regulators, building on their expertise to help ensure they remain responsive and adaptive.


5.              What are the non-regulatory and regulatory options to address risks and capitalise on opportunities?


As outlined above and in our response to the AI White Paper, we believe the following recommendations will help to ensure the UK has the appropriate regulatory architecture in place to realise the benefits of AI across the economy, many of which are also set out in our recent publication, ‘Governing AI: A Blueprint for the Future’:









In addition, assertions which suggest that performing text and data mining is a copyright infringement create a barrier to AI training and development, a reluctance to use AI and AI outputs confidently, and ultimately a lack of investment in AI development in the UK. Performing text and data mining on a work protected by copyright is not a copyright infringement. As required under Article 9(2) of TRIPS, copyright should not extend to ideas, procedures or mathematical concepts. Everyone should have the right to read, learn from and understand these works, and copyright law in the UK includes exceptions that allow for the use of technology as a tool to enable this. Copyright concerns relating to generative AI are rightly addressed by assessing whether the generated output infringes copyright. As recommended in the Vallance report, the UK should take steps to ensure that there is clarity on this point. Many jurisdictions have recognised the need to support AI innovation by clarifying, through legislative change or guidance, that their IP laws do not prevent text and data mining; these include Japan, Singapore, Israel and the EU. The government’s plans to introduce a code of practice on this topic would be a possible vehicle to provide this clarity. Alternatively, the government could give businesses further confidence on this topic by creating an explicit exception in copyright law for text and data mining.


Non-regulatory tools for trustworthy AI


We welcome the voluntary approach taken by the U.S. Government, which recently brought the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public. Microsoft endorsed all of the voluntary commitments presented by President Biden and independently committed to several others that support these critical goals, which focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust.


Establishing this type of code of conduct early in the development of this emerging technology can play an important part in helping ensure safety, security, and trustworthiness, and can also help better unlock AI’s positive impact for communities across the UK and around the world. We encourage the UK to work with the US and other countries, including those in the G7, to build out a broader framework for AI governance that is coherent across borders and informed by discussions being advanced in countries and international institutions across the world.


We also encourage the UK to look at how to build on existing safety frameworks, such as the US National Institute of Standards and Technology’s (NIST’s) recently published AI Risk Management Framework (RMF). This framework provides a strong template for AI governance that organisations can immediately begin using to address AI risk. The AI RMF has been identified as a global best practice for AI governance, and numerous governments, international organizations, and businesses have already validated its value. Microsoft has recently committed to implementing the AI RMF so that all our AI services benefit from it, and we encourage the UK Government to explore ways to accelerate public and private sector alignment to this resource.

In the context of accountability, the NIST AI RMF also highlights the value of two important practices for high-risk AI systems: impact assessments and red teaming. Impact assessments have demonstrated value in a range of domains, including data protection, human rights, and environmental sustainability, as a tool for accountability. These assessments are typically used to document and analyse the impact of a system on various stakeholders, giving responsible parties a guided process to identify and mitigate potential harms that may arise. Microsoft’s impact assessments are designed to be iterative and encourage robust responses, including when harms or deficiencies are discovered, so that the appropriate safeguards can be implemented. Impact assessments offer significant utility as an accountability tool, encouraging organisations developing or deploying an AI system to consider risks early on and integrate mitigations throughout the respective development and deployment process, addressing them in a timely fashion. Microsoft has published its impact assessment template[7] and the accompanying guide[8] designed to help teams complete this set of documentation, with a view to contributing to the public conversation about effective impact assessment practices.


a.              How would such options work in practice and what are the barriers to implementing them?


b.              At what stage of the AI life cycle will interventions be most effective?


To mitigate risks, interventions are needed across the development and deployment cycle and at the different layers of the AI stack. At Microsoft, we orient our responsible AI work around processes that seek to identify, measure and mitigate the risks a particular system may pose. This requires work at the model level and the application level, throughout development and deployment. We set out a detailed case study of how this process works in relation to our recently launched new Bing, which integrates OpenAI’s GPT-4 model.[9] As part of the development of the GPT-4 model, Microsoft worked with OpenAI to conduct extensive red teaming of the model to assess how it would work without any additional safeguards applied to it. Our specific intention at this stage was to produce harmful responses, surface potential avenues for misuse, and identify capabilities and limitations. Our combined learnings across OpenAI and Microsoft contributed to advances in model development, informed our understanding of risks, and contributed to early mitigation strategies for the new Bing. In addition to model-level red team testing, a multidisciplinary team of experts then conducted numerous rounds of application-level red team testing on the new Bing AI experiences before implementing mitigations. This process helped us better understand how the system could be exploited by adversarial actors and improve our mitigations. As part of deployment, we engaged in a phased release combined with monitoring of how the system was performing while in use, and we provided a feedback channel for users to report concerns. This type of pre-deployment risk assessment and mitigation, together with monitoring of a system while in use, will be important to ensure systems are performing appropriately, and will be particularly important for high-risk systems.


To ensure risks can be mitigated across the life-cycle, it will be important for the different actors to share information, given the complex and varied nature of the AI supply chain. Model developers should share information about the capabilities and limitations of the model, use cases for which it is not suited and factors that will affect performance. We set out this type of information in our Transparency Notes for our Azure Cognitive Services.[10] Providing this information helps ensure that application developers building on the model are able to do so in an appropriate and responsible manner. In turn, the application developers should share similar information about the performance of their application so that a user can use it responsibly for the purpose for which it was intended. 


More broadly, as part of our Governing AI Blueprint we suggest a regulatory regime for frontier models to address the risks they can pose; it was informed by internal work to identify and mitigate AI risk. As we set out in the blueprint, during the initial development phase for a frontier model, we suggest a licensing regime for developers of the frontier models themselves and for the AI infrastructure operators on which these models are developed and deployed. We propose that such a licensing regime should:


1)      ensure that safety and security objectives are achieved in the development and deployment of highly capable AI models;


2)      establish a framework for close coordination and information flows between licensees and their regulator, to ensure that developments material to the achievement of safety and security objectives are shared and acted on in a timely fashion; and


3)      provide a footing for international cooperation between countries with shared safety and security goals, as domestic initiatives alone will not be sufficient to secure the beneficial uses of highly capable AI models and guard against their misuse.


Likewise, the use-case approach to regulation will help to address the context-specific nature of many AI risks when models are integrated into sector-specific applications and deployed more widely. In developing this framework, we encourage the Government to advance a risk-based approach, focused on high-risk systems: those that may have adverse impacts on an individual’s life opportunities (including access to essential goods and services), pose a risk to physical or psychological safety, or implicate an individual’s human rights.


At the infrastructure layer, we suggest licensing providers of AI datacenters to ensure that they play their role in the safe and secure development and deployment of highly capable AI models. Much like the regulatory model for telecommunications and network operators and critical infrastructure providers, AI datacenters should be required to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture and, potentially, export control compliance.

c.              How can the risk of unintended consequences be addressed?


We consider it important to ensure that effective AI “safety brakes” are in place for AI systems that manage or control critical infrastructure, with the safety brake concept implemented at multiple levels. While the implementation of safety brakes will vary across different systems, a core design principle in all cases is that the system should be able to detect and avoid unintended consequences, and must be able to disengage or deactivate if it demonstrates unintended behaviour. It should also embody best practices in human-computer interaction design. To contribute to the discussion, we suggest that such requirements be focused on AI being used to control systems that:






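The safety brake design principle described above, detect unintended behaviour, then disengage the AI controller, can be sketched as a wrapper around a controller. The controller interface, bounds check and fallback here are hypothetical illustrations, not a prescribed design.

```python
class SafetyBrake:
    """Illustrative wrapper implementing the 'safety brake' principle:
    monitor a controller's actions and disengage on unintended
    behaviour. All interfaces here are hypothetical."""

    def __init__(self, controller, is_within_bounds):
        self.controller = controller
        self.is_within_bounds = is_within_bounds  # detects unintended behaviour
        self.engaged = True                       # AI is in control while True

    def step(self, observation, fallback_action):
        if not self.engaged:
            return fallback_action                # manual/human control path
        action = self.controller(observation)
        if not self.is_within_bounds(action):
            self.engaged = False                  # disengage the AI controller
            return fallback_action
        return action
```

The key property illustrated is that once the brake trips, the AI controller stays disengaged until deliberately re-enabled, rather than resuming automatically.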
International context


6.              How does the UK’s approach compare with that of other jurisdictions, notably the EU, US and China?


We commend the UK’s leadership in developing an approach to AI policy and regulation that maximises the opportunity for the UK and ensures the safe and responsible development and deployment of this technology.


We are pleased to see the commitment to global interoperability in the UK’s AI White Paper, as well as in the International Technology Strategy and the upcoming Global Summit on AI Safety to be held in the UK later this year. It is encouraging to see the UK Government commit to engaging in bilateral and multilateral frameworks to foster interoperability; such cooperation will be vital to leadership. UK efforts on AI at the G7 are a positive example of the kind of “regulatory diplomacy” ambition set out in the recent UK International Technology Strategy.


The UK has played a leading role in the development of international technical standards, and we strongly support alignment with international regulatory developments and regulatory cooperation. Regulatory cooperation is essential to avoid fragmentation. We recommend aligning the proposed measures with similar international initiatives; this would enable all stakeholders, including supervisory authorities, to ensure and supervise compliance and to leverage efficiencies in a harmonised manner.


As outlined below, it will be important for the UK to align with the consensus definitions emerging at the international level from the work of the OECD and others, including the definition of an AI system. Another concept increasingly established in the international conversation is a risk assessment and mitigation process for high-risk systems, through a mechanism such as an impact assessment, in which the developer and deployer of a high-risk system evaluate it for reasonably foreseeable risks that are within their respective control and implement appropriate mitigations. We suggest the UK place a similar approach at the core of its regulatory framework.
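The impact-assessment mechanism described above, pairing each reasonably foreseeable risk with an implemented mitigation, can be sketched as a simple check. The structure and field names are illustrative assumptions, not a prescribed regulatory format.

```python
def impact_assessment(foreseeable_risks, mitigations):
    """Illustrative sketch: pair each reasonably foreseeable risk with
    an implemented mitigation and flag any risk left unmitigated.
    The output structure is an assumption for discussion only."""
    unmitigated = [r for r in foreseeable_risks if r not in mitigations]
    return {
        "assessed": list(foreseeable_risks),
        "unmitigated": unmitigated,
        "ready_for_deployment": not unmitigated,  # all risks must be covered
    }
```

Under this sketch, a developer or deployer would only proceed once every identified risk maps to a concrete mitigation within their control.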


a.              To what extent does wider strategic international competition affect the way large language models should be regulated?


To help shape a coherent international regulatory regime, so that UK organisations can continue to collaborate across borders in developing and using world-leading technology in the context of wider strategic international competition, we recommend prioritising the following issues:




b.              What is the likelihood of regulatory divergence? What would be its consequences?


Regulatory divergence between countries in governing AI systems carries significant risks. Different approaches to regulating AI risk creating legal uncertainty for technology companies operating globally and impeding valuable cross-border research collaboration. In addition, divergence could disadvantage countries with less developed regulatory regimes: many nations still lack comprehensive policies, technical standards and regulatory capacity related to AI systems, and divergence may perpetuate gaps in safety, accountability and oversight across regions. This is why we believe in advancing deeper multi-stakeholder collaboration at a global level and developing regulatory frameworks aligned with international standards as the best way forward on these important issues.



September 2023



[1]              [2001.08361] Scaling Laws for Neural Language Models

[2]              Announcing a renaissance in computer vision AI with Microsoft | Azure Blog | Microsoft Azure

[3]              ChatGPT reaches 100 million users two months after launch | Chatbots | The Guardian

[4]              Microsoft, Anthropic, Google, and OpenAI launch Frontier Model Forum - Microsoft On the Issues

[5]              Accelerating Foundation Models Research - Microsoft Research




[9]              The new Bing - Our approach to Responsible AI

[10]              Transparency Note for Azure OpenAI - Azure AI services | Microsoft Learn