
Written evidence submitted by 5Rights Foundation (OSB0206)

 

Shedding light on AI

A duty to regulate automated decision-making systems that impact children

 

Table of Contents

Introduction

Definitions

Four-step model for regulating AI

Regulatory duty to investigate

Proposed amendments for the Draft Online Safety Bill

Conclusion

 


Introduction

The Draft Online Safety Bill exemplifies the UK government’s commitment to putting children front and centre of its ambition to make the UK the safest place in the world to be online. It builds on the growing global consensus that digital services that impact on children must be designed for their use. Childhood is a time of experimentation and personal growth, and while no environment is entirely risk-free, digital services “likely to be accessed by children” must be designed to be private, safe and rights-respecting - by default.

Central to the digital world is artificial intelligence, commonly referred to as AI. AI is not a standalone or fixed technology but plays a part in automated decision-making (ADM) systems and many other data-driven features common across digital services. Automated systems shape the experiences of children and young people in the digital world, both as a result of their direct engagement, for example receiving friend/follower or content recommendations, and from systems that they may not interact with directly, for example automated decision-making used to allocate welfare funding.

Much emphasis is put on the challenges of regulating new and emerging technologies, but AI is not new. The term ‘AI’ was coined in the 1950s to describe the science and engineering of machines making automated choices against specific criteria based on available information. Since then, huge advances in the application of AI and the greater availability of data have led to more sophisticated, data-driven decision-making. In many ways, the word 'intelligence' is used to give humans confidence in the efficacy and authority of machine-made choices. Systems that use AI are still human-made, with specific objectives, design goals, chosen inputs, a set of rules by which information is given importance or weight, and a combination of outcomes and outputs. At each of these stages, automated decisions are made that are often imperceptible to those they impact, particularly if they are a child.

Automated decision-making sits behind features that are ubiquitous across services likely to be accessed by children. It can support children to navigate the online world and the mass of content available, and help them to identify activities and outcomes that are useful or beneficial to them. But there are also many situations in which automated decision-making systems undermine their rights or put them at risk. For example:

The online safety duties set out in the draft Online Safety Bill offer a vision of what a responsible digital world looks like. However, to meet the safety objectives of the Bill, Ofcom must have not only the tools but also a duty to investigate algorithms on behalf of children, and an agreed standard by which to assess them. A duty of this kind, combined with the four-step process described in this paper, would ensure that the risks to children created by algorithms are identified, eliminated, mitigated or effectively managed.

This four-step process is platform neutral and can be applied across different sectors, including but not limited to social media, entertainment, health and education. It can also be applied to different parts or features of a service, including advertising, content recommendation, moderation and reporting.

Such a regime would give clarity to businesses in fulfilling their safety duty to children and power to the regulator to inquire, analyse and assess whether a system conforms to requisite standards. When new risks and harms are revealed, these assessments can act as an early warning, especially when the harm is an unintentional by-product of an automated decision-making process optimised for another purpose.

This short paper builds on the work of many in the international community, notably the Centre for Data Ethics and Innovation[6], UNICEF[7], IEEE[8], the Ada Lovelace Institute[9] and the Council of Europe[10]. We are grateful for their expertise and recognise that this practical application of their work could not have been done without their thoughtful and detailed insights.

Thanks are due also to Dr Rebekah Tromble, Associate Professor in the School of Media and Public Affairs and Director of the Institute for Data, Democracy, and Politics at George Washington University. Dr Tromble developed the four-step model articulated in this report.

5Rights is committed to building the digital world young people deserve. That world is one in which they share the benefits of digital engagement as participants, citizens and consumers, and in which businesses respect and uphold their existing rights and respond to their needs and evolving capacities - automatically.

 

 

Definitions


Four-step model for regulating AI

The four-step model set out below offers a mixed-methods approach to algorithmic oversight. It describes how regulators can evaluate each element of an automated decision-making process, from its goals and inputs to its implementation and outcomes, to ensure that applications of AI meet the established rights and needs of children, as set out in:

Any assessment of automated decision-making systems must have the flexibility to uncover harms that are currently unknown or not anticipated. It must also allow for potential improvements or benefits to be identified so that they might be shared and used to guide best practice.

 

  1. Understand the design goals

Aim:

Algorithms are formulated with a purpose and intended outcomes. In assessing the fairness and appropriateness of algorithms, it is important for the regulator to understand the original intent and goals of its creators and how those goals evolved over time, by asking the following questions:

  1. What was/were the problem(s) or challenge(s) those designing the algorithm set out to address?
  2. What was/were the intended outcome(s)?
  3. Why was this product, feature or process considered necessary?
  4. Who was involved in defining the problem(s) and desired outcome(s)—including internal and external stakeholders? What was their role in shaping the understanding of the problem(s) and desired outcome(s)?
  5. How and why did any of these things change over time?
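
To make these questions concrete, the sketch below (written in Python, with illustrative field names of our own devising rather than any prescribed standard) shows one possible machine-readable format for the written record of design goals that a provider could keep and disclose to the regulator, mirroring questions 1 to 5 above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DesignGoalRecord:
    """One entry in a service's written record of an algorithm's design goals.

    Field names are illustrative only; they mirror the questions above
    (problem addressed, intended outcomes, necessity, stakeholders, changes).
    """
    system_name: str
    problem_addressed: str            # Q1: the problem or challenge the designers set out to address
    intended_outcomes: List[str]      # Q2: the intended outcome(s)
    justification: str                # Q3: why the feature or process was considered necessary
    stakeholders: List[str]           # Q4: who defined the problem and desired outcomes
    recorded_on: date = field(default_factory=date.today)
    changes: List[str] = field(default_factory=list)  # Q5: how and why the goals changed over time

# Example entry a provider might keep and disclose to the regulator on request
record = DesignGoalRecord(
    system_name="content recommender (hypothetical)",
    problem_addressed="Users struggle to find relevant content among a large catalogue.",
    intended_outcomes=["Increase relevance of the home feed", "Raise session length"],
    justification="Manual curation does not scale to the volume of uploads.",
    stakeholders=["Product team", "Data science team", "Trust and safety (consulted late)"],
)
record.changes.append("2020: optimisation target changed from clicks to watch time.")
```

Keeping such records in a structured form would let the regulator compare stated goals against the inputs, models and outcomes examined in the later steps.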

Method:

  2. Consider the data inputs

Aim:

Every algorithm contains a series of inputs — data points and variables that can be thought of as the “ingredients” of the algorithm. Unfair, discriminatory or biased outcomes are often the result of problematic data (“garbage in, garbage out”). It is therefore essential that any framework intended to examine algorithmic fairness assess the quality and appropriateness of the data used to build and train the algorithm, by asking the following:

  1. What features (variables) did the algorithm’s designers want to include as inputs and why?
  2. Were they able to include those features? Did they have to settle for proxies and/or exclude some features altogether and why?
  3. What dataset(s) was/were used as input(s) for building, training, and testing the algorithm?
  4. Were other datasets considered? For training/testing? For final implementation? If not, why not?
  5. If so, what were the perceived advantages and disadvantages, strengths and weaknesses of this/these datasets compared to other options?
  6. Were multiple datasets and/or features tested? If so, how were they evaluated? And why were the final datasets/features selected?
  7. Who had input into these decisions, and what was their role in the process?
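
As an illustration only, the short Python sketch below shows the kind of basic check a regulator's technical staff might run against a disclosed slice of training data: whether children are represented in it at all, and whether a nominally neutral feature is in practice a proxy for age. The dataset, column names and figures are invented for this example.

```python
# A minimal sketch of a data-input check. The dataset and column names are hypothetical;
# a real audit would work from data disclosed under a record-keeping duty.
import pandas as pd

training_data = pd.DataFrame({
    "user_age":        [34, 29, 41, 16, 52, 38, 27, 45],
    "school_keywords": [0,  0,  0,  1,  0,  0,  0,  0],   # candidate proxy for being a child
    "watch_time_mins": [120, 95, 60, 180, 30, 75, 110, 50],
})

# 1. Representation: what share of the training data describes under-18s?
is_child = (training_data["user_age"] < 18).astype(int)
child_share = is_child.mean()
print(f"Share of training rows describing children: {child_share:.0%}")

# 2. Proxy check: how strongly does a nominally neutral feature track age?
proxy_corr = training_data["school_keywords"].corr(is_child)
print(f"Correlation between 'school_keywords' and being a child: {proxy_corr:.2f}")

# A very low child share, combined with a feature that closely tracks age, would
# prompt the questions above about dataset selection and the use of proxies.
```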

Methods:

 

  3. Assess the model selection and execution

Aim:

If data inputs are the “ingredients” of an algorithm, the mathematical model and its parameters offer the instructions for how to put the algorithmic recipe together. They lay out how the inputs should be combined, at what point and in what amount, as well as the ways in which those inputs might be altered or transformed. Careful scrutiny of the model and the assumptions it is built upon is needed to assess its appropriateness. Note that such scrutiny is possible even with machine learning algorithms. The questions to consider as part of this scrutiny include:

  1. What is the mathematical formula/model applied?
  2. Why was this model selected?
  3. What assumptions are built into this model?
  4. Did those designing or implementing the algorithm deviate from any of the assumptions built into the model? If so, how and why?
  5. Within the model, what is being optimised? How is this optimisation carried out (e.g., how are the various features weighted)?
  6. How and when was the model tested and changed/updated?
  7. When changes were made, what were the reasons for making those changes?
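
The following minimal sketch, with invented weights and feature names, illustrates the point these questions probe: an algorithm's outputs follow directly from what its designers chose to optimise and how the inputs are weighted, so a change to the objective changes what children are shown.

```python
# A toy ranking model with invented weights. None of these numbers come from a real service.

def ranking_score(item, weights):
    """Combine the inputs ('ingredients') into a single ranking score."""
    return sum(weights[feature] * value for feature, value in item.items())

items = [
    {"predicted_watch_time": 0.9, "from_followed_account": 0.0, "flagged_sensitive": 1.0},
    {"predicted_watch_time": 0.4, "from_followed_account": 1.0, "flagged_sensitive": 0.0},
]

# Optimised purely for engagement: the sensitive item wins because it holds attention.
engagement_weights = {"predicted_watch_time": 1.0, "from_followed_account": 0.3, "flagged_sensitive": 0.0}

# Re-weighted with a penalty on sensitive content: the ranking flips.
safety_weights = {"predicted_watch_time": 1.0, "from_followed_account": 0.3, "flagged_sensitive": -2.0}

for name, weights in [("engagement-only", engagement_weights), ("with safety penalty", safety_weights)]:
    ranked = sorted(items, key=lambda item: ranking_score(item, weights), reverse=True)
    print(name, "-> top item:", ranked[0])
```

In this toy example a single weight change flips which item is ranked first, which is why questions 5 to 7 above ask how the optimisation target was chosen, tested and updated.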

Method:

 

  4. Identify outputs and outcomes

Aim:

After an algorithm is launched, it will generate certain outputs. It is important to examine these outputs to reveal whether the model performs as intended. However, at this stage, it is also important to look at the actual outcomes — the real world impacts generated by the algorithm(s) and its uses.

The previous three steps help to determine why and how something went wrong: which elements of the design and implementation result in discrimination, disadvantage, exploitation, manipulation or rights violations. However, the outputs examined in step four are likely to be the first place harm is identified, and the stage that demonstrates why the full four-step process is necessary.

Many observers note that algorithms are not autonomous, neutral entities. They are designed by people, with all the biases, blind spots and other foibles associated with being human. It is therefore crucial to examine the interplay between technical features on the one hand and business decisions and human interactions on the other. The regulator will need researchers and investigators trained in the social sciences as well as computer science to conduct such assessments.

Below we lay out the three lenses through which to examine algorithmic outputs and outcomes. First, we describe assessments of the ways in which relevant companies interpret outputs and outcomes, as well as their techniques for mitigating perceived harms. Second, we outline a broad approach for considering how users interact with and are impacted by algorithms. Finally, we discuss broad approaches to uncovering impacts on society as a whole.

 

 

  1. Companies

Aim:

To examine how the company that designed the algorithm, or the companies that make use of it, evaluate its outputs and outcomes.

Questions:

  1. What model outputs (variables) does a company use internally? (I.e., What outputs matter to them and why?) In what ways do they use these outputs?
  2. What is the internal process for evaluating the performance of an algorithm? What standards are applied? What metrics are applied? By whom?
  3. What is the internal process for determining whether an algorithm should be changed? Who is involved in this process? Who makes final decisions and how?
  4. What, if anything, is the company doing to assess larger impacts on users and society?
  5. If such assessments occur, are they ad hoc or systematic?
  6. What techniques and methodologies are used for such an assessment? What standards and metrics are applied? Who is involved in this process and how?
  7. Are changes ever made to algorithms on the basis of such assessments? What is the process for doing so? Who is involved in this process? Who makes final decisions and how?

Method:

 

  2. Users

Aim:

To assess whether users’ reasonable expectations of how an algorithm works and what it delivers align with the actual outcomes, and whether any harms (whether perceived by the user or not) accrue.

Questions:

  1. What, if anything, do users understand the algorithm to be doing? Are they even aware that an algorithm is involved? If they are, do they perceive specific advantages and disadvantages to the algorithm?
  2. What do users expect from the algorithm? Are outcomes aligned with those expectations?
  3. Is the algorithm creating disparities between users and non-users and/or between different types of users?
  4. Is the algorithm limiting user choice(s)? If so, in what ways? And what are the consequences (positive or negative) of those limitations?
  5. Does the algorithm directly or indirectly exploit user vulnerabilities?
  6. Does it directly or indirectly manipulate users?
  7. Does it violate users’ rights or contribute in any way to the violation of those rights?
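
A simple way to begin answering question 3 is to compare outcomes across groups of accounts. The Python sketch below, which uses an invented audit log rather than data from any real service, shows such a disparity check between child and adult accounts.

```python
# A minimal sketch of a disparity check across user groups. The figures are invented;
# a real audit would use logged recommendation data disclosed to the regulator.
from collections import Counter

# Hypothetical audit log: (account_type, was_recommendation_age_inappropriate)
audit_log = [
    ("child", True), ("child", False), ("child", True), ("child", False),
    ("adult", False), ("adult", False), ("adult", True), ("adult", False),
]

exposure = Counter()
totals = Counter()
for account_type, inappropriate in audit_log:
    totals[account_type] += 1
    exposure[account_type] += int(inappropriate)

rates = {group: exposure[group] / totals[group] for group in totals}
print("Exposure rate by account type:", rates)

# A large gap between groups would trigger the questions above about exploitation of
# vulnerabilities and limitation of user choice, and step 3's scrutiny of why the
# model produces it.
print("Disparity (child minus adult):", rates["child"] - rates["adult"])
```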

 

Method:

 

  3. Societal impacts

Aim:

To understand the social, financial, environmental and human impacts of automated decision-making systems.

Questions:

  1. Is the algorithm contributing directly or indirectly to social harms? If so, in what ways? And to whom? Is the harm caused by certain features of the algorithm? Can these harms be mitigated by changes to the algorithm? Can these harms be mitigated without causing harm to others?
  2. Is the algorithm benefitting certain members of society? If so, are those benefits accrued fairly and equitably?
  3. Is the algorithm benefitting society as a whole? If so, in what ways? Can those benefits be amplified or expanded?
  4. Are there “best practice” lessons to be learned from the design and implementation of this algorithm?

Method:


Regulatory duty to investigate

Children cannot be expected to understand or take action against automated decision-making or algorithmic unfairness. It is unlikely that they have the developmental capacity, the knowledge or the resources to understand the subtle, cumulative or even acute nudges and impacts those automated systems have on their online experience. In fact, many children do not understand that an algorithm could be responsible for introducing them to a 'suggested friend', nor do they have the tools to prevent an onslaught of automated harmful material. The Online Safety Bill must not only give Ofcom the powers to interrogate automated systems but also create the expectation that it will actively analyse the automated decision-making systems and algorithms of services that impact on children: a duty to investigate.

 

The proposed duty is similar to that of the Financial Conduct Authority, which has a duty to investigate when it appears that a regulated person or investment scheme has failed to protect consumers or might have a significant adverse effect on the financial system or on competition. Similar duties are proposed for a new pro-competition regime that would give the Digital Markets Unit (part of the Competition and Markets Authority) a duty to ‘monitor’ markets and the activities of firms to identify breaches of the statutory code of conduct.

 

In order to fulfil this duty, Ofcom must have the expertise, resources and processes in place to scrutinise the design goals, data inputs, model selection, and outputs and outcomes of algorithms. Where there is evidence that such systems are discriminating against or systematically disadvantaging children, or violating their rights, Ofcom should set out a mandatory course of action for compliance.

While transparency is a key component of the four-step process set out above, decades of research show that transparency alone can produce layers of obfuscation and does not always lead to better systems or more positive outcomes. The value of transparency lies not in the availability of information itself, but in the way it allows for scrutiny and accountability. A duty on Ofcom to apply the four steps to automated decision-making systems that impact on children would deliver that accountability.

Companies often invoke commercial sensitivity as a defence to evade transparency reporting requirements. On the whole, this should be resisted, and where there are legitimate commercial sensitivities, the regulator must have the power to maintain oversight in private.


Proposed amendments for the Draft Online Safety Bill

Part 2, Chapter 2 – Providers of user-to-user services

In addition to the existing record keeping and review duties of clause 16, insert

(Before 4) A duty to keep a written record relating to the design and operation of algorithms and automated decision-making systems, and any decisions relating to the operation of such systems as may be needed by the regulator.

Part 2, Chapter 3 – Providers of search services

In addition to the existing record keeping and review duties of clause 25, insert:

(Before 4) A duty to keep a written record relating to the design and operation of algorithms and automated decision-making systems, and any decisions relating to the operation of such systems as may be needed by the regulator.

Part 4, Chapter 1 – OFCOM’s General Duties

NEW Clause 59: Duty to investigate and audit the design and operation of algorithms and automated decision-making systems that impact children, in relation to the findings of OFCOM’s risk assessment as set out in clause 61(2)(c).

(1) A duty:

(a) To provide statutory guidance on children’s rights and applicable law as they relate to automated systems, in order to meet the safety objectives (clause 30)

(b) Where there is evidence from OFCOM’s risk assessments that an automated decision-making system is discriminating against or systematically disadvantaging groups of children or violating their rights, exploiting vulnerabilities, manipulating, or withholding information, OFCOM will investigate and audit such systems.

Part 4, Chapter 3 – OFCOM’s Risk Assessments

Clause 61 (1) OFCOM must carry out a risk assessment to identify, assess and understand the risks of harm to individuals presented by regulated services.

(2) The risk assessment must, amongst other things—

(a) assess the levels of risk of harm presented by regulated services of different kinds, including by giving separate consideration to

(i) the risk of harm to individuals in the United Kingdom presented by illegal content,

(ii) the risk of harm to children in the United Kingdom, in different age groups, presented by content that is harmful to children, and

(iii) the risk of harm to adults in the United Kingdom presented by content that is harmful to adults;

(b) identify characteristics of different kinds of regulated services that are relevant to such risks of harm, and assess the impact of those kinds of characteristics on such risks.

NEW Before (3) To identify the risk of harm to children from the design and operation of algorithms and automated decision-making systems.

(3) OFCOM must develop risk profiles for different kinds of regulated services, categorising the services as OFCOM consider appropriate, taking into account

(a) the characteristics of the services, and

(b) the risk levels and other matters identified in the risk assessment.

(4) As soon as reasonably practicable after completing a risk assessment, OFCOM must publish a report on the findings.

NEW Before (5) Where the findings of the risk assessment indicate that algorithms or automated decision-making systems are contributing to, or are likely to cause, harm to children, OFCOM must commence an investigation and audit of the regulated service provider(s) to discharge its duty under clause 59.

(5) OFCOM must ensure that the risk assessment is kept up to date.

(6) In this section the “characteristics” of a service include the functionalities of the service, its user base, business model, governance and other systems and processes.

(7) In this section— “content that is harmful to adults” has the meaning given by section 46; “content that is harmful to children” has the meaning given by section 45; “illegal content” has the meaning given by section 41.

Part 4, Chapter 5 – Ofcom’s Information powers

Clause 70: Power to require information

(4) The information that may be required by OFCOM under subsection (1) includes, in particular, information that they require for any one or more of the following purposes—

(a) the purpose of assessing compliance by a provider of a regulated service

(m) the purpose of investigating algorithms and automated decision-making systems, in relation to OFCOM’s duty under section 59             


Conclusion

A child must not be asked to police the automated decisions of the tech sector. The industry is worth over $5 trillion to the world economy[17] and is central to children’s lives and life outcomes. Algorithmic oversight is critical if the next generation of digital technologies, products and services are to offer children safety and respect for their rights, by design.

The four-step model will allow meaningful oversight of the goals, inputs, implementation and outcomes of algorithms and automated decision-making systems. This transparency will drive a change in corporate behaviour that meets the expectations of parents and children and fulfil the promises the government has made to them. By giving the regulator a duty to interrogate automated decision-making systems on behalf of children, and service providers a clear process by which this will be done, the risks to children from automated decision-making systems can be reduced - by default and by design.

There is no silver bullet to fix all the ills of the digital world or to guarantee that children will be safe from harm, whether through regulation or technological development. But the argument that regulation and accountability stifle innovation or impose limits on a child’s freedom in the digital world is simply untrue. Each wave of regulation has been met with creative and practical solutions. As lawmakers across the globe look to the UK to maintain its singular place as a leader in online safety for children, algorithmic oversight would continue that leadership.

It is in the interests of all parties to have a more equitable and trustworthy system of oversight that allows growth and innovation but which reduces negative outcomes for children.

To do nothing is no longer an option.

 

8 October 2021

 

 


[1] https://5rightsfoundation.com/uploads/But_How_Do_They_Know_It_is_a_Child.pdf

[2] https://www.thetimes.co.uk/article/instagram-sends-predators-to-private-accounts-of-children-as-young-as-11-wqvmjc2df

[3] The Anti-Vaxx Industry: How Big Tech powers and profits from vaccine misinformation, Center for Countering Digital Hate.

[4] Young People Losing Millions to Addictive Gaming – REPORT, Safer Online Gambling Group, August 2019.

[5] https://5rightsfoundation.com/uploads/Pathways-how-digital-design-puts-children-at-risk.pdf

[6] In November 2020, the Centre for Data Ethics and Innovation conducted a review into bias in algorithmic decision making and made recommendations to the government and regulators designed to produce a step change in the behaviour of organisations making life changing decisions on the basis of data.

[7] UNICEF’s Draft Policy Guidance on AI for Children is designed to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights. The policy guidance explores AI and AI systems and considers the ways in which they impact children. It draws upon the Convention on the Rights of the Child to present foundations for AI that upholds the rights of children.

[8] The IEEE (Institute of Electrical and Electronics Engineers) has a global Initiative on the ethics of autonomous and intelligent systems. Its aim is to move from principles to practice with standards projects, certification programs, and global consensus building to inspire the ethically aligned design of autonomous and intelligent technologies.

[9] The Ada Lovelace Institute is developing tools to enable accountability of public administration algorithmic decision-making, such as a typology and a public register.

[10] https://www.coe.int/en/web/artificial-intelligence.

[11] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/#:~:text=Automated%20decision%2Dmaking%20is%20the,created%20profiles%20or%20inferred%20data.&text=an%20online%20decision%20to%20award%20a%20loan%3B%20and

[12] Article 1 of the United Nations Convention on the Rights of the Child states “a child means every human being.”

[13] General comment No. 25 (2021) on children’s rights in relation to the digital environment.

[14] General Data Protection Regulation

[15] Age Appropriate Design Code

[16] For example, advice from the Chief Medical Officer or guidance from the Department for Education.

[17] https://www.statista.com/statistics/507365/worldwide-information-technology-industry-by-region/