Written Evidence Submitted by Mr Shane Mason
(GAI0006)
Abstract
This paper is an evidence submission in support of the 2022 UK Parliamentary inquiry entitled ‘Governance of artificial intelligence (AI)’. The call for evidence solicited answers to several questions. This paper provides input to the question: “How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?”
This paper focuses on the first part of the question and argues that regulating the use of AI is an inadequate measure, and that regulation for AI should instead focus on addressing the issues of trustworthiness and transparency.
To build this argument, extracts have been derived from an unpublished essay entitled: “Can corporations be trusted to manage intelligent machines?” written by the author in May 2022 and submitted to the Department of Politics & International Studies at the University of Cambridge. The term ‘corporation’ is used throughout and is a catch-all term for legally recognised incorporated organisations of both the private and public sectors.
About the author
Shane Mason has previously held senior management roles including Head of Strategy for defence systems & technology with Babcock International Group PLC, alongside executive engineering roles for BAE Systems PLC. He has worked closely with senior technologists and engineers up to and including Chief Technology Officers and Engineering Directors over the last several years.
His expertise is in grand strategy and world affairs, with an emphasis on the material capacity of nation states. Whilst a visiting practitioner with the University of Cambridge, he is simultaneously completing a postgraduate master’s degree in international relations with the University’s Department of Politics & International Studies. He also holds an MOD sponsored MBA.
Shane is currently an independent consultant and advisor who has appeared in podcasts; provided pro-bono briefings to several MPs, SPADs and private individuals on Russian political history and grand strategy; appeared on talkRADIO, and sat on panels discussing global trends with peers from Google, USA Today and the UN.
Question
“How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?”
Answer
Regulating the use of AI alone is an inadequate measure, as it does not address society's need for greater trustworthiness in the use of data. Nor does such an approach address the need for greater transparency from corporations which develop or use AI products, systems, or services.
A decline in societal trust in organisations which gather individuals' data, as those individuals become more aware that their data is being used for unintended purposes, stems from the private sector's pursuit of profit through a model of market creation[1].
However, it is important not to demonise the private sector or organisations which collect data and apply that data to AI algorithms. It must also be observed that products, systems, or services which seek to act intelligently by replicating, outperforming, or influencing humans can also serve human endeavours and create social good. This includes products, systems or services developed by the private sector.
An example can be drawn from SpaceX, whose Starlink satellite communications system, which uses data to automate its configuration, was provided to Ukraine to aid persistent communications in its defence against invading Russian forces in early 2022. The automation of the system limits the human intelligence required for its operation. Ultimately, the performance of the Starlink system aided Ukrainian military operations. In this brief example, Starlink uses data and algorithms which can act intelligently; however, the data is not derived from members of the public.
Notwithstanding this, the intelligent algorithms in the Starlink system demonstrate how innovations in systems which replace or replicate human processes can impact global society, and possibly even world order.
Recognition of the power which artificially created intelligence can hold over society can be traced back as far as Thomas Hobbes's work, Leviathan, of 1651[2]. This work illustrates that the creation of alternatives to either replicate or replace human decision-making has been a long-standing philosophical consideration, and not solely a pursuit of technological superiority or innovation.
It is important to hypothesise that as technology evolves towards wider and more intelligent applications, so too will the capacity to artificially replicate human thought. This has already commenced, to the extent of pursuing control over people's behaviours.
Alphabet Inc's internet search platform, Google, rationalises vast amounts of information and delivers the user a summary of information sources relevant to them, based upon variables such as search history and location. This algorithm was the platform's basic form of intelligence. As Google grew its user base, it gained access to more users and more information. Eventually, Alphabet Inc developed the platform to include its own email service, Gmail, linked to a Google account containing information such as age, sex, and general interests.
Tristan Harris, former Design Ethicist at Google and co-founder of the Center for Humane Technology[3], worked on the Gmail platform during its early stages of product development. His task was to make the platform addictive for its users. He believes the tech industry 'lost its way' when it started engineering user addiction into the ergonomics of human interaction.
Harris has since openly shared his concerns around Google’s business model and use of data applied against algorithms to replicate and pre-empt human thought, stating, “The classic saying (at Google) at the time was: if you are not paying for the product, you are the product.” (Orlowski, 2020)[4].
The example of Google and the development of its Gmail platform demonstrates a symbiotic relationship between technology development and the incremental growth of capabilities to the point of influencing humans.
The issue here lies in the AI design and development process of the products and systems being innovated, which use data to 'power' the AI algorithms. The Starlink system uses data, as does Google, but a key difference is that one system uses the data of private individuals whilst the other does not.
This leads to the observation that corporations which gather data on human characteristics or human behaviour — especially when that data is harvested from interfaces with customers or service users — require greater transparency.
However, it is fair to recognise that Google has a significant positive impact on global society by helping make information accessible to many people across the world, including to those in developing societies.
Findings & conclusion
Corporations which develop and design technological solutions to enhance human endeavours play a significant role for humanity. This is due to the agency which corporations provide in the creation of social facts which become norms in society. Innovations such as the motorcar, the mobile phone, and the internet impacted the world to such an extent that it is difficult to imagine a world without them; they are all now accepted practices in the daily lives of billions of people; they were all engineered by free market enterprise and corporations; and they all require appropriate regulation.
This places a moral burden on corporations and governments to ensure the development of AI carries an equal degree of social responsibility. It also requires government and corporations to work together as regulator, enabler, and innovator.
This paper finds that trust in intelligent products, systems, or services needs to correspond directly with the trust society has in the corporations which develop the products, systems, and services using personal data for AI. Further, as the example of Starlink demonstrates, regulation should not be limited to corporations which use personal data, but should extend to corporations whose products, systems, or services act intelligently, regardless of where that data comes from. This is due to the impact such corporations can have on global society.
Therefore, any corporation developing products which aim to shape or lead a customer or user market is likely to impact global society; and it is these corporations which have a higher form of agency, and a higher level of social responsibility.
We finally turn to the moral and philosophical argument that any organism or organisation, artificial or not, which has a direct influence over society must be legitimately recognised by its adopters and users, particularly in established democratic nations. Without some form of legitimisation from society, corporations risk losing their endorsement to operate, for whatever we acknowledge gains the legitimacy to exist within our world[5]. Failing to secure this presents a risk to the validity and attractiveness of corporations operating within the UK, or legally incorporated within the UK and operating elsewhere.
To conclude, this paper finds that the inseparable issues of trustworthiness and transparency are central to the regulation of AI. A regulatory system which focuses solely on the use of AI is therefore inadequate to provide the level of utility needed for corporations which seek to innovate to the extent where high levels of agency and social responsibility exist.
Recommendations
In light of these findings, the following recommendations are made to the Inquiry for further consideration:
----- End of submission -----
(October 2022)
[1] Peter Drucker argues the 'only valid definition of business purpose (sic) is to create a customer.' He goes further towards a model of market creation by reinforcing that 'Markets are not created by God, nature or economic forces, but by businessmen.' (Drucker, 2006, pp.34–35)
[2] The introduction of 'Leviathan' recognises the state, or commonwealth of state, as an artificial creation by man of a collective society, and goes as far as likening the mechanics of state to human bodily functions, which are then explored further in the rest of the book. This is generally accepted as the first reference to an artificial form of life, particularly in reference to political governing.
[3] www.humanetech.com. (n.d.). Center for Humane Technology. [online] Available at: https://www.humanetech.com.
[4] Harris goes into detail describing, “Never before in history, had fifty designers (sic) made decisions that would have an impact on two billion people. Two billion people will have thoughts they didn’t intend to have because a designer at Google said, “This is how notifications work on that screen you wake up to in the morning.” Jeff Orlowski (2020). The Social Dilemma | Netflix. Exposure Labs, The Space Program, Agent Pictures. Available at: www.netflix.com
[5] Redhead, S. (2016). Life Is Simply A Game. Steven Redhead.