Written Evidence Submitted by ACT | The App Association

(GAI0018)

 

About ACT | The App Association

ACT | The App Association is a global trade association for small and medium-sized technology companies. Our members are entrepreneurs, innovators, and independent developers within the global app ecosystem who engage with verticals across every industry. We work with and for our members to promote a policy environment that rewards and inspires innovation while providing resources that help them raise capital, create jobs, and continue to build incredible technology. App Association members are located around the world, including in the UK, all 27 member states of the European Union, and all 435 congressional districts of the United States, showing that with coding skills and an internet connection, an app maker can succeed from anywhere.

 

  1. Introduction

Artificial intelligence (AI) is an evolving constellation of technologies that enable computers to simulate elements of human thinking, such as learning and reasoning. An encompassing term, AI entails a range of approaches and technologies, such as machine learning (ML), in which algorithms use data, learn from it, and apply their newly learned lessons to make informed decisions, and deep learning, in which algorithms modelled on the way neurons and synapses in the brain change as they are exposed to new inputs allow for independent or assisted decision-making. Already, AI-driven algorithmic decision tools and predictive analytics have substantial direct and indirect effects in consumer and enterprise contexts, and these effects show no signs of slowing.


Across use cases and sectors, AI has incredible potential to improve consumers’ lives through faster and better-informed decision-making, enabled by cutting-edge distributed cloud computing. Even now, consumers encounter AI incrementally through the improvements they have seen in the computer-based services they use, typically in the form of streamlined processes, image analysis, and voice recognition, all forms of what we consider ‘narrow’ AI. These narrow applications of AI already provide great societal benefits. As AI systems, powered by streams of data and advanced algorithms, continue to improve services and generate new business models, the fundamental transformation of economies across the globe will only accelerate. Nonetheless, AI also has the potential to raise a variety of unique considerations for policymakers, and the App Association appreciates the Committee’s efforts to develop a policy approach to AI that will bring its benefits to all, balanced with necessary safeguards to protect consumers.

 

Our members and many other tech startups and small and medium-sized enterprises (SMEs) are at the forefront of innovation, constantly advancing new products and services. The UK government has the opportunity to develop a regulatory framework that incents further innovation while balancing the potential risks of AI. We encourage policymakers to consider the many angles and interests that AI impacts before making statutory or regulatory changes.

 

  2. Inquiry response

 

  1. How effective is the current governance of AI in the UK?
    1. What are the current strengths and weaknesses of current arrangements, including for research?

In 2021, the UK Department for Digital, Culture, Media and Sport (DCMS) released a 10-year National AI Strategy, outlining its plans for regulating and promoting AI adoption in the UK. We welcome DCMS’ intention to establish the ‘most pro-innovation regulatory environment in the world’ and its commitment to making the UK ‘the best place to live and work with AI’ by 2031. The App Association is encouraged by the various steps that the UK government and regulators have taken to achieve this goal, including the interim white paper published by the Office for AI in July 2022, the AI Standards Hub launched in January 2022, and the Intellectual Property Office’s (IPO) response to the consultation regarding copyright and patents for AI in June 2022. Looking ahead, we commend the UK government for its plans to publish a national AI research and innovation programme, a data availability framework for AI, and a national strategy for AI in health and social care. We are also encouraged by the UK government’s work with the Alan Turing Institute and regulators to examine regulators’ existing AI capacities and their ability to deal with complexities stemming from cross-sectoral AI systems.

 

The current approach indicates that the UK government is aiming for a more flexible and iterative approach to AI regulation, collaborating with sector-specific regulators (e.g. the Information Commissioner’s Office (ICO) and the IPO), and updating existing legislation and guidance to meet AI-related challenges our societies will likely face. As the use of AI increases, it seems sensible to leave room to establish AI-specific rules where they are needed as the technology and its uses evolve. We agree with the House of Lords’ view in the 2020 publication ‘AI in the UK’, which stated that ‘blanket AI-specific regulation, at this stage, would be inappropriate’ and that ‘existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed’.

 

Although adapting and reviewing current regulations is a flexible approach, it also presents some challenges, such as unclear rules for businesses, inconsistent regulatory mechanisms, and regulatory overlap, which can lead to gaps and uncertainty. In this context, existing regimes and any new AI regulation must be clearly distinguishable for all stakeholders. Generally, the UK government should avoid technology-specific regulations on particular modalities, including AI, and instead develop outcome-focused and technology-neutral regulations; AI-specific regulation should be undertaken only when a demonstrated need for such an approach is established. Already, the regulatory and policy environment for AI is as complex as the technical AI systems landscape itself. UK businesses would benefit from more clarity on how various domestic and international initiatives will impact them and interact with each other, as well as where there may be sectoral or regulatory gaps. Providing such government-led research and analysis to AI stakeholders, including businesses, policymakers, and regulators, would offer valuable information for future law-making as well as AI systems development.

 

  2. What measures could make the use of AI more transparent and explainable to the public?

The App Association believes the government could take a variety of measures to make the use of AI more transparent and explainable to the public. First, the government should openly support and facilitate AI research and development in its AI policy frameworks. It can do so by prioritising such research and providing sufficient funding, while also ensuring adequate incentives (e.g. streamlined availability of data to developers, tax credits) are in place to encourage private and non-profit sector research. Transparency research should be a priority and should involve collaboration among all affected stakeholders, who must responsibly address the ethical, social, economic, and legal implications that may result from AI applications.

 

Second, to increase transparency and explainability, the government should ensure adequate quality assurance and oversight. Specifically, policy frameworks should utilise risk-based approaches to ensure that the use of AI aligns with the recognised standards of safety, efficacy, and equity. Providers, technology developers and vendors, and other stakeholders all benefit from understanding the distribution of risk and liability in building, testing, and using AI tools. Policy frameworks addressing liability should ensure the appropriate distribution and mitigation of risk and liability. Specifically, those in the value chain with the knowledge and ability to minimise risks should have appropriate incentives to do so.

 

Third, policy frameworks should require designs of AI systems that are informed by real-world workflows, human-centred design and usability principles, and end-user needs. AI solutions should facilitate a transition to changes in the delivery of goods and services that benefit consumers and businesses. The design, development, and success of AI should leverage collaboration and dialogue among users, AI technology developers, and other stakeholders so that all perspectives are reflected in AI solutions.

 

Fourth, policy frameworks should ensure AI systems are accessible and affordable. Significant resources may be required to scale systems. Policymakers should take steps to remedy the uneven distribution of resources and access, and put policies in place that incent investment in building infrastructure, preparing personnel and training, and developing, validating, and maintaining AI systems to ensure value.

 

Ultimately, the success of AI also depends on ethical use. A policy framework will need to promote many of the existing and emerging ethical norms for broader adherence by AI technologists, innovators, computer scientists, and those who use such systems.

 

Related to ethics, we must also address bias issues in AI. The bias inherent in all data, as well as errors, will remain one of the more pressing issues with AI systems that utilise machine learning techniques. Any regulatory action should address data provenance and bias issues present in the development and use of AI solutions. Policy frameworks should require the identification, disclosure, and mitigation of bias while encouraging access to databases and promoting inclusion and diversity, as well as ensuring that data bias does not cause harm to users or consumers.

 

Further, we recommend enabling collaboration and interoperability and easing data access and use by creating a culture of cooperation, trust, and openness among policymakers, AI technology developers and users, and the public. Policy frameworks should also support education for the advancement of AI, promote examples that demonstrate the success of AI, and encourage stakeholder engagement to keep frameworks responsive to emerging opportunities and challenges. Educating consumers about the use of AI in the services they use, together with academic education that advances the understanding of and ability to use AI solutions, is critical to making AI explainable to the public.

 

  3. How should decisions involving AI be reviewed and scrutinised in both the public and private sectors?
    1. Are current options for challenging the use of AI adequate and, if not, how can they be improved?

Any pursuit of AI regulation should be predicated on the establishment of the risks and harms that would be mitigated by the potential regulation, with a strong empirical evidence base. Without this crucial first step, AI regulations may easily be shaped around infeasible or unrealistic expectations and edge use cases.

 

Further, the App Association is a proponent of a risk-based approach to regulating emerging technologies such as AI, one that scales requirements to the harm presented by the particular use case(s), including when setting up options for challenging the use of AI. Such an approach avoids one-size-fits-all requirements (such as the European Commission’s proposal to arbitrarily ban the use of AI for entire use cases) that needlessly create friction for innovation. We further believe that regulation that encourages trustworthy technology can increase both customer loyalty and the responsible uptake of AI systems. That said, because businesses that develop emerging technologies benefit from clear rules, a clear definition of what ‘high-risk’ AI includes is necessary, as is a clear statement of the responsibilities of high-risk AI providers, operators, and users. Both goals are achievable under a risk-based approach, which can help define the higher-risk AI uses that appropriately merit greater scrutiny and risk management measures. We strongly encourage regulators, in exposing AI to new regulations, to responsibly promote data access, including open access to appropriate machine-readable public data, the development of a culture of securely sharing data with external partners, and explicit communication of allowable use with periodic review of informed consent.

 

We also support the use of transparency and ethics standards and a reasonable level of algorithmic transparency in decision-making. Requirements concerning the testing, training, and validation of algorithms, ensuring human oversight, and meeting accuracy, robustness, and cybersecurity standards could be useful in this context, and the development of such standards can only happen through public-private partnership. At the same time, we are concerned by the prospect of regulators having broad authority to demand access to businesses’ data, source code, or algorithms, as proposed in the EU’s AI Act. Any such rules require sufficient safeguards regarding the circumstances under which such information would have to be disclosed, in order to protect valuable intellectual property, trade secrets, and cybersecurity. Without such safeguards, investment in UK data and data-driven innovations will decrease.

 

  4. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

AI itself is not a separate and distinct sector, and many different regulations already apply to it today. When regulating the use of AI, the government should consult with industry experts and domain-specific regulators to understand in detail each context and risk landscape in which an AI system operates. The regulation of AI uses should also start with a clear definition of AI. We believe that AI is an evolving constellation of technologies that enable computers to simulate elements of human thinking, such as learning and reasoning. An encompassing term, AI entails a range of approaches and technologies, such as machine learning (ML) and deep learning, in which algorithms modelled on the way neurons and synapses in the brain change as they are exposed to new inputs allow for independent or assisted decision-making.

 

AI-driven algorithmic decision tools and predictive analytics are having, and will continue to have, substantial direct and indirect effects on UK citizens. Today, UK citizens encounter AI in their lives incrementally through the improvements they have seen in the computer-based services they use, typically in the form of streamlined processes, image analysis, and voice recognition. We consider these forms of AI ‘narrow’ AI, which already provides great societal benefit. A clear definition of AI is crucial before taking further regulatory steps, as an overly broad definition creates uncertainty for AI developers, providers, operators, and users.

 

We reiterate that AI uses should be regulated following a risk-based approach (see question 3). Regulation of AI uses must clearly distinguish between minimal, limited, high, and unacceptable risks and set out different rules for each of them. A minimal-risk category in particular, which will likely encompass most available AI systems, needs a detailed explanation of which systems fall under it. While these low-risk technologies may not need to be subject to explicit new legal requirements, even the adoption of codes of conduct for their regulation could shape their development despite the minimal-to-no risk they carry. We agree that soft-law frameworks could foster transparency, human oversight, and robustness, but we encourage lawmakers to promote the voluntary application of these principles.

 

Concerning regulatory oversight, the government should consider the ease of compliance for SMEs as well as their ability to navigate potential legislative overlap. Establishing several new institutions on top of current laws governing AI, rather than leveraging current rules and regulators, will likely slow the development and use of new AI products and services by adding compliance burdens and delays. We therefore believe maximising regulatory coherence and coordination is especially important, as it remains unclear how potential AI regulation would interact or integrate with existing laws. Similarly, when it comes to timely reporting of adverse events to the relevant oversight bodies for appropriate investigation and action, we believe that those authorities should coordinate in the interpretation and enforcement of any new rules.

 

  5. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
    1. Is more legislation or better guidance required?

As we described in question 1, the current legal framework seems to rely on adapting and reviewing current regulations, which can lead to unclear rules for businesses, inconsistent regulatory mechanisms, and potential regulatory overlap. Many stakeholders likely struggle to understand how AI rules are enforced and how to comply with them. We thus believe a clear explanation of both stakeholders’ and regulators’ responsibilities would be extremely useful. The App Association also looks forward to further government guidance regarding data availability for AI and for AI in health and social care.

 

AI offers immense potential for widespread societal benefits, which is why the government should foster investment and innovation in any way practicable. Our members both use and develop solutions that include AI, and those solutions are in turn used by countless UK citizens. As society adopts these technologies on a greater scale, it is important that small business developers can contribute to this important trend. Any new legal framework should be understandable and feasible for SMEs and startups, be future-proof, and incent innovation, research, and development in the AI space for both low- and high-risk systems across the UK.

 

An exemplar on which the UK government may wish to model its next steps is the U.S. National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework). The framework, which is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input, is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

 

  6. What lessons, if any, can the UK learn from other countries on AI governance?

Again, we urge the UK government to strongly consider taking the approach of the U.S. NIST in developing a voluntary AI Risk Management Framework (see question 5).

 

Separately, the App Association has participated in various stages of the European Union’s law-making on the AI Act and believes the UK could benefit from monitoring the progress of that legislation. The process has demonstrated the importance of clear definitions and the feasibility of following a risk-based approach. Considering this, we believe the UK is taking a sensible and reasonable approach to developing its own path for AI regulation. 

 

(November 2022)