Written Evidence Submitted by Salesforce

(GAI0105)

 

Introduction

 

Salesforce UK Limited (“we,” “us,” or “Salesforce”) welcomes the opportunity to respond to the Science and Technology Committee’s inquiry on the governance of artificial intelligence (AI). We look forward to continuing our engagement with the committee and contributing to the government’s white paper on AI governance in due course.

 

Salesforce supports the UK’s ambition to maintain a world-leading regulatory regime for technology and welcomes the committee’s timely inquiry.

 

About Salesforce

 

Salesforce is a global leader in cloud enterprise software for customer relationship management (CRM), providing software-as-a-service (“SaaS”) and platform-as-a-service (“PaaS”) offerings. Founded in 1999, Salesforce provides enterprise-focused software to businesses, governments, and other organisations around the world. We operate in the business-to-business (B2B) environment, and our customers represent companies of all sizes across all sectors.

 

Salesforce has been operating in the UK since 2004. The UK is a strategically important market for the company and one of the largest outside the US. According to a 2021 IDC study, our ecosystem of customers, partners and developers is estimated to help create 443,400 jobs in the UK and contribute over $71 billion to the UK’s GDP between 2020 and 2026.

 

Salesforce and AI

 

The transformational power of AI should be available to every organisation and innovator. However, as with most technologies, regulation has a role in ensuring innovation serves ethical purposes. Ahead of regulation, Salesforce has dedicated significant resources to research on the ethical and humane use of AI, amid global interest in the topic. Salesforce has been an active participant in this discussion internationally and was among the first tech companies to establish a dedicated business unit focused on technology ethics.

Salesforce’s AI solution, Einstein, is built into several of Salesforce’s products, enabling organisations of all sizes to deliver smarter, more personalised experiences to customers and their end users across sales, service, marketing and digital commerce. Salesforce is delivering more than 10 billion AI-driven predictions to its customers every day.

 

Those predictions have the power to improve and transform the way businesses operate and how people live and work. Some examples of this include:

 

1.      Enabling sales teams to identify which leads are most likely to generate revenue

2.      Enabling customer relations teams to intelligently route customers to appropriate support personnel, or in some cases to identify where a customer may be dissatisfied but overlooked

3.      Enabling marketing teams to send communications to customers at the ideal time for each customer and to recommend content to send to them

4.      Providing insights to e-commerce operators to optimise the organisation of their online presence and provide recommendations to shoppers

 

In a service-led economy such as the UK’s, tools like these support competitiveness, increase revenue, and augment and simplify employees’ day-to-day work.

 

Salesforce takes industry-leading measures to ensure its products are used ethically and to guide its customers in doing the same. Our Office of Ethical and Humane Use of Technology works across product, law, policy, and ethics to develop and implement a strategic framework for the ethical and humane use of technology across our company. Through this work, we have established a set of guiding principles on ethical use that safeguard areas such as individual rights, data privacy, and human safety.

 

Comments on the UK’s approach to AI governance

 

1.  How effective is current governance of AI in the UK? What are the strengths and weaknesses?

 

Internationally, the UK compares very favourably in terms of emerging AI regulation. The UK has been one of the world’s more receptive markets for AI adoption. Its framework broadly supports innovation and has successfully nurtured a dynamic ecosystem of talent. UK regulators and initiatives are highly regarded and influential internationally. Existing regulation also provides important safeguards regarding the use and deployment of AI that underpin consumer confidence in its uptake.

 

In particular, data protection law plays a significant role in the regulation of AI, notably by providing a right to human recourse for automated decisions. In addition, the Information Commissioner’s Office (ICO) has published significant guidance on how it expects companies to comply with the requirements of the Data Protection Act 2018 and the UK GDPR. This guidance covers governance arrangements, explainability and the requirement to avoid discrimination.

 

However, as AI begins to reshape other industries, most of the UK’s regulators will face a significant challenge in gaining relevant expertise in AI and data issues. This challenge will need to be met with adequate resources and a consistent approach to rulemaking, even if the rules themselves will need to be tailored, to avoid regulatory fragmentation and barriers to entering the UK market.

 

 

2.  What measures could make the use of AI more transparent and explainable to the public?

 

Transparent AI

 

A key factor underlying concerns about AI is the potential lack of transparency surrounding AI algorithms, including how they collect and process data and how this translates into service provision from digital platforms.

 

The complexity of so-called deep learning techniques means that unstructured data meets an algorithm that makes predictions or decisions without human intervention. Without an explanation of which factors led to a decision and how it was reached, challenges arise where decisions have significant societal implications (e.g., social justice systems, hiring decisions, financial lending).

 

Consumers should be able to understand the “why” behind each AI-driven recommendation and prediction so they can make informed decisions, identify unintended or undesirable outcomes, and mitigate harm. This should include being told clearly when an AI system is being used and how it applies in their case.
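As a purely illustrative sketch, and not a description of any Salesforce product, a system could surface the top contributing factors alongside each prediction so that the “why” is visible to the user. The function, scores and factor names below are hypothetical:

    # Hypothetical sketch only: pair a prediction with the factors that
    # contributed most to it, so a user can see the "why" behind it.
    def explain_prediction(score, contributions, top_n=3):
        """Return the prediction score with its most influential factors."""
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return {"score": score, "top_factors": ranked[:top_n]}

    # Example: an illustrative lead-scoring prediction and factor weights.
    print(explain_prediction(
        score=0.78,
        contributions={"recent_engagement": 0.31, "company_size": 0.12, "region": -0.05},
    ))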

 

However, regulation should acknowledge that in many cases the algorithm and the data set it operates on are controlled by two different parties. Salesforce uses tools such as model cards, standardised documentation that communicates how a particular AI application works, to help our customers and end users.
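By way of illustration only, a minimal model card could record fields such as those in the sketch below; the structure and field names are generic assumptions drawn from published model-card templates, not Salesforce’s actual documentation schema:

    from dataclasses import dataclass, field

    # Generic, hypothetical model card structure; fields are illustrative
    # and do not reflect any particular vendor's schema.
    @dataclass
    class ModelCard:
        model_name: str
        intended_use: str
        training_data_summary: str
        evaluation_metrics: dict
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        model_name="lead-scoring-example",
        intended_use="Rank sales leads by likelihood of conversion",
        training_data_summary="Anonymised CRM records (illustrative)",
        evaluation_metrics={"auc": 0.84},
        known_limitations=["Not validated outside B2B sales contexts"],
    )
    print(card.model_name, card.evaluation_metrics)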

 

Existing initiatives such as the ICO’s guidance on explainability in AI are helpful foundations for companies, but other sector regulators may need to complement this guidance for specific audiences. We also urge a continued focus on educating the public about AI: while there has been a significant effort in recent years to address the skills challenge at the advanced-study level, there will also need to be a focus on broader AI ‘literacy’.

 


 

 

3.  How should decisions involving AI be reviewed and scrutinised in both public and private sectors? Are current options for challenging AI use adequate and, if not, how can they be improved?

 

Perhaps more important than transparency is accountability. This is particularly important when decisions taken by AI, or highly informed by AI, have significant consequences for wellbeing, or where their use can perpetuate existing societal biases (for example, in finance or criminal justice decisions).

 

While Salesforce’s products are not designed to automatically make decisions with legal or similarly significant effects, we support efforts to ensure that humans reviewing AI recommendations are truly empowered to be a meaningful check on the system when the potential consequences are profound. However, for meaningful accountability, individual sector regulators will need to ensure they are appropriately equipped to understand AI as it affects their sector and are therefore able to design appropriate guidelines for liability.

 

4.  How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

How AI should be regulated

 

The key principles on which AI regulation should be focused are:

 

    1. Human-rights focus in the development of AI systems: Salesforce supports the creation of guidance which provides a framework for the ethical development and deployment of AI systems. Salesforce supports governments taking a risk-based approach to AI regulation, but also encourages them to consider how regulation can safeguard human rights. Regardless, Salesforce believes that any forward-thinking rules must address the potential harms posed by unfair or biased AI.

    2. Nuanced approach to regulating bias in high-risk AI: Salesforce works to ensure that those impacted by our systems are not discriminated against or otherwise negatively affected as a result of their race, gender, age, or other protected characteristic, as outlined in our acceptable use policy (AUP). However, many applications of Salesforce Einstein do not pose a high risk to the rights and freedoms of individuals (e.g., the AI decides the best time of day to send an email to maximise the open rate). Salesforce would support a regulatory approach that seeks to mitigate the harmful outputs of AI applications that pose a high risk to the rights and freedoms of individuals, rather than regulating low-risk applications (such as those Salesforce provides) which carry a lesser risk of bias.

    3. Outcome focused: As the government has indicated in its regulatory strategy, we support the principle that regulation of AI should prioritise outcomes rather than regulating technology or data sets as such.

 


 

    4. Global harmonisation: As governments develop their stance on AI, Salesforce encourages them to look to, and draw from, existing definitions, frameworks, and other legislative efforts in drafting their approach. Harmonisation is important because AI is a globally used technology that will evolve quickly, which necessitates a nimble approach. To create a more agile regulatory regime, governments should build region-specific legislation on globally accepted definitions and approaches. Too many divergent approaches will complicate the ethical development and deployment of cutting-edge AI technology and could lead to companies scaling back, or not offering, services in certain markets. Aside from questions of good governance, playing a significant role in these conversations also ensures that British-developed AI applications can scale globally.

 

The government’s recent regulatory strategy on AI, in our view, substantially meets these principles. We support a sector- and context-led approach focused on outcomes, rather than horizontal regulation that imposes too many ex-ante conditions on AI products.

 

Regulatory competences

 

Salesforce supports the approach of upskilling existing sector regulators to deal with emerging risks and opportunities for the deployment of AI in their sectors. However, as many AI providers will support multiple applications across a number of sectors, a degree of consistency and harmonisation will be required. For example, defining a common approach to risk and enforcement procedures will be desirable to avoid a situation where AI providers operating in contexts of comparable risk face starkly differing regulatory requirements.

 

We would also call on policymakers to keep in mind the cumulative effect of regulation. For example, the UK should avoid a situation where duplicative reporting requirements are introduced.

 

 

5.  To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

 

We refer the committee to our answer to question (1).

 

6.  What lessons, if any, can the UK learn from other countries on AI governance?

 

The UK’s approach is distinctive but welcome, focused on addressing outcomes nimbly rather than attempting to regulate all current and potential AI applications through all-encompassing regulation.

 


 

However, there are instructive models for UK regulators to follow, both domestically and internationally, in terms of guidance and regulatory approaches. Some notable examples are Singapore’s Model AI Governance Framework, the US Office of Management and Budget’s Guidance for Regulation of AI Applications, and Australia’s AI Ethics Framework.

 

 

(November 2022)