WRITTEN EVIDENCE SUBMITTED BY PUPILS 2 PARLIAMENT

(GA10096)

 

 

 

  1. This submission reports the views of 93 school pupils aged 9 to 17.  Pupils felt they had very limited knowledge about AI.

 

  2. Pupils were slightly below neutral about the current use of AI, and closer to negative about its significantly increased use in the future.

 

  3. They rated their positivity towards 30 specific areas of application for AI.  They were most positive about its use in monitoring climate change, helping disabled people and space exploration.  They were least positive about its use in creative work, in self-driving vehicles, and in control of public transport.

 

  4. Pupils saw the main positive features of AI as its future potential, its speed, efficiency and accuracy, its ability to function continuously (unlike humans), potential to avoid human bias in decisionmaking, conduct of activities that would be risky for humans, role in extending human capability, job creation, economic benefit, and ability to provide help to humans.

 

  5. They saw the main negative features of AI as the risk of malfunction, concern that AI might develop out of human control to harm or even take over from humans, threat to jobs, the risk of over-reliance on AI, loss of human interaction, demand for huge amounts of data, risk to privacy, its over-rapid development, and cost.  Pupils also expressed a general lack of trust in AI.

 

  6. Pupil proposals for making AI more transparent and explicable included teaching about AI in schools and to adults, more information on social and other media, and provision of unbiased information on the .gov website.

 

  7. They perceived current controls on AI as midway between ‘not much controlled’ and ‘in the middle’, and thought AI should be more controlled.  However, stronger controls would only slightly increase their own positivity towards AI.  Controls should be set out in law rather than guidance.

 

  8. Pupils proposed 15 new future controls on AI, including specific restrictions, risk reduction, requirements upon system creators and users, inbuilt limitations, and creation of new offences.

 

  9. On review and scrutiny of decisions involving AI, we asked pupils to consider its use in medical diagnosis.  They strongly favoured a future position which combines human and AI roles in diagnosis, rather than reliance on either alone.  This would bring together the advantages of both, and increase the likelihood of correct diagnosis, as well as placing the ability of humans to explain reasoning empathically alongside the less explicable operation of AI.

 

  10. Pupils linked transparency of decisionmaking with trust in the decisions made.

 

  11. They put forward 10 proposals to improve the future transparency of AI in decisionmaking, including human oversight, research (including research by AI itself) into the operation and explicability of AI systems, definition of contributions of both human ‘gut feeling’ and unknown AI processes in decisions, and introduction of self-validation by AI systems.

 

INTRODUCTION

 

  12. Pupils 2 Parliament is an independent project, working with schools to gather the views of pupils for submission to Parliamentary inquiries and government consultations.  We have been granted permission to use the term ‘Parliament’ by the Clerks of both Houses of Parliament.

 

  13. School pupils have a unique perspective on issues, and deserve to be heard as current citizens with a strong stake in the future, a strong sense of fairness and the capacity for fresh and often challenging thinking on even complex policy matters.

 

  14. This submission reports the views of 93 pupils and students in two schools:  The Bishop of Hereford’s Bluecoat School, Hereford (a secondary school), and Presteigne Church in Wales Primary School in Powys.  The pupils and students ranged in age from 9 to 17.

 

  15. If it is helpful, this submission (including direct quotations from pupils) may be freely referenced and quoted in relation to the Committee’s Inquiry and other work on the coming White Paper.

 

 

MAKING THE USE OF AI MORE TRANSPARENT AND EXPLAINABLE TO THE PUBLIC – current knowledge and acceptability of future AI developments

 

  16. As a baseline for the Committee on children and young people’s knowledge and views of AI, we asked our pupils to rate their current knowledge about AI on a five-point scale from 1 ‘hardly anything’ to 5 ‘a lot’.  They reported knowing ‘not much’ about AI, with a mean score of only 1.9.

 

  17. In considering transparency and explanation, it is probably also helpful for the Committee to know how positive or negative children and young people feel about the use of AI.  On a five-point scale of positivity about AI, from very negative (scoring 1) to very positive (scoring 5), these pupils’ mean positivity rating was 2.43 out of 5:  roughly midway between ‘negative’ and ‘in the middle’.

 

  18. They were slightly less positive about the use of AI significantly increasing in the future.  On the same five-point scale, their mean rating for this was 2.26 out of 5 – closer to ‘negative’ than to ‘in the middle’.

 

  19. Overall, and contrary to many beliefs, children and young people (or at least those we asked, independently and without leading them either way) were thus slightly below neutral about current use of AI, and closer to negative about its significantly increased use in the future.  There is clearly an acceptability issue, which is important to know in considering transparency and explanation.

 

  20. The pupils’ positivity or negativity varied widely when considering different applications of AI.  We asked them to rate how positive they felt about 30 different AI applications (taken from resources listed by Parliament and Government), again using a five-point scale from ‘very negative’ (scoring 1) to ‘very positive’ (scoring 5).

 

  21. It is significant that the pupils did not rate any of the applications in the fully positive range.  The applications are listed below in descending order of the pupils’ positivity ratings, with their scores out of 5 given in brackets:

 

  1. Monitoring climate change  (3.29)

 

  2. Helping disabled people – selecting aids and using ‘intelligent equipment’ (3.20)

 

  3. Space exploration (3.19)

 

  4. Prediction of natural disasters such as floods, earthquakes and volcanic eruptions (3.10)

 

  5. Scientific research (3.09)

 

  6. Police work such as detecting crime, predicting likely locations for crime, and predicting those likely to commit crimes (3.08)

 

  7. Education – assisting students to learn (3.07)

 

  8. Facial recognition – eg finding criminals in crowds and identifying faces at airports or in schools (3.05)

 

  9. Predicting the weather (3.05)

 

  10. Countering cyber-crime (3.04)

 

  11. Translating speech and text (2.99)

 

  12. Monitoring the natural world (2.97)

 

  13. Smart speakers in the home (2.96)

 

  14. Countering Covid (2.92)

 

  15. Manufacturing and factories (2.87)

 

  16. Smartphones and home computers (2.84)

 

  17. Agriculture – eg identifying need for attention to crops, selecting best crops for different conditions (2.84)

 

  18. Gaming (2.83)

 

  19. Developing new medicines (2.79)

 

  20. Marketing (2.68)

 

  21. Controlling energy production and use (2.66)

 

  22. Health care and treatment – eg diagnosis and individualisation of treatments (2.58)

 

  23. Defence (2.56)

 

  24. Use of drones to monitor what is happening (2.52)

 

  25. Banking and finance (2.43)

 

  26. Fair sentencing of convicted criminals (2.34)

 

  27. Selection of candidates for employment (2.32)

 

  28. Creative work such as making music, creating art works and writing poetry (2.29)

 

  29. Self-driving vehicles (2.14)

 

  30. Control of public transport (1.93)

 

  22. We hope that this list will give the Committee a baseline rating of the relative acceptability of AI applications to children and young people, for use both in considering future explanation and in informing prioritisation for future regulation and control.

 

 

MAKING THE USE OF AI MORE TRANSPARENT AND EXPLAINABLE TO THE PUBLIC – children and young people’s views

 

  23. As well as the ratings presented above, we also asked pupils precisely what they saw as positive or negative about the use of AI, and about the development of particular applications of AI.

 

  24. These were the main themes in their reasons to be positive about AI (in no particular order):

 

a)    Future potential – eg “one day (it) may help create a cure for almost every disease”, “AI is good for helping us advance for the future”,  “I feel if we can develop more we can achieve more”

 

b)   Speed of operation – eg “because it can give you an answer faster than a human”,  “AI is faster than Humans”

 

c)    Efficiency and accuracy – eg “increased efficiency, reliability”, “more accurate with almost everything”, and in surgery, “can be more precise than a human with less risk on going wrong”

 

d)   Continuous operation – eg “AI does not stop and never gets tired”

 

e)    Avoidance of human bias in decisionmaking – eg “can make decisions quicker than the human brain, and there is no emotion involved in making that decision”

 

f)      Enabling activities that would be risky for humans – eg “it can do dangerous jobs so people don’t risk their lives”

 

g)   Extension of human capability – eg “can do things that humans can’t even dream of doing”,  “it can do stuff that humans are not capable of doing”

 

h)   Job creation – eg “it can create jobs in software industries”

 

i)      Economic benefit – eg “it could possibly reduce work loads in some professions.  It may also mean budgets for places could stretch further and (funds) could be used in other areas”

 

j)      Benefits for humans – eg “it can reduce menial tasks”, “makes life easier for humans”, “can save a lot of time for humans”, “it can help old or disabled people”.

 

  25. These were the main themes in the pupils’ reasons for being negative about AI (again, in no particular order):

 

a)    Risk of malfunction – eg “a slight fault could cost people’s lives”,  “can malfunction because of human error”, “if trained with incorrect information, will likely be more prone to make mistakes”,  “viruses”,  “if it goes wrong it could end badly”.

 

Some pupils expected AI to malfunction because of their own experiences of computing and information technology – eg  “because when the internet is not working it is sometimes quite annoying”, and even experience in completing our online survey – “my friend can’t log in”.  One pupil voiced the belief that “they can make mistakes like a human”.

 

b)   A widespread worry that AI may harm or take over from humans – eg “might rule the world if they get smart enough”,  “AI could become evil and humans won’t be needed to do anything”,  “it will become alpha one day and overtake us”,  “I think we should have (AI), but it is definitely going to take over this world one day”, “they might turn against us”,  “AI can learn to be bad, for example, devise their own language, learn to do things like launch bombs via the internet”.

 

One view was however that AI can be expected to be capable of inbuilt self-control to counter such risks, and of developing its own ethical monitoring – eg “if it’s so smart, it will know right from wrong”.

 

c)    An inherent feeling of untrustworthiness of AI compared with humans – eg “because it isn’t a human brain and it is more likely to make a mistake”,  “it is nowhere near as trustworthy as a human brain”,  “I don’t think it is trustworthy enough to be used a lot in the future”

 

d)   Concern that AI may develop beyond human control – eg “use to control our lives - possibility of losing control of AI”, “they could stop doing as they are told and may start to do wrong things”

 

e)    AI may become more effective than humanity, with unknown consequences – eg “if it sees something wrong with humans in general, it may try to change things … and humans have no control over what it is doing”

 

f)      Risk of over-reliance on AI – eg “if we are too dependent on AI, the loss of it would be devastating”,  “we might only rely on AI with our jobs”

 

g)   Threat to employment – eg “taking over a lot of jobs”, “people could lose their jobs because of it”,  “they will someday steal all the jobs from people”

 

h)   Loss of personal interactions – eg “AI might replace human interaction towards students in schools”

 

i)      Engendering human laziness – eg “it can cause people to be lazy and let the computer do everything for them”

 

j)      Potential for misuse – AI cannot protect itself from human misuse – eg “can’t think for itself so easily controlled”, “people can do bad things with it”, “could be used for the wrong things”

 

k)    Risk of hacking

 

l)      Demand for increasing quantities of data – eg “it can need a huge amount of data”

 

m) Risk to privacy – eg “they know all your information and you don’t know where it is”;  one pupil wrote that their school computer system had been hacked “and now my address and full name plus a photo of me is now on the dark web – so when will AI be strong enough to keep our info safe, huh?”

 

n)   Cost – eg “may lead to smaller budgets for companies to use as they would feel as though they needed some artificial intelligence in their workplace”

 

o)    Concern at rapidity of development and deployment of AI – eg “it is growing too fast”.

 

  26. Many pupils wrote about specific applications of AI.  There were significant negative feelings about the use of AI to control vehicles – particularly self-driving vehicles, including cars and public transport.  There were also privacy concerns about the increasing use of drones.

 

  27. Some were however very positive about the increasing use of AI – eg “AI should be everywhere”.

 

  28. These three analyses of the positives and negatives across different applications of AI, quoted in full, were submitted to us by pupils:

 

a)    I don't like self driving vehicles. I would prefer a teacher over AI but it would help those with learning disabilities. Smart speakers can be used to collect information to sell. If AI did creative work, it would put off humans who want to try because there would be an expectation that they would be as good as the AI and in gaming the AI would just drain all the fun out of it as there would be this perfect fake player and no one else would stand a chance, ruining the fun.

 

b)   It starts to become apparent that many jobs are being taken over by AI. I do not agree with this because there may otherwise in the future not be enough jobs for our growing population. In teaching, I believe human interaction is important rather than students always being supported by AI. However, in areas such as agriculture it may be helpful to small farming businesses as it means they can invest in AI (rather than employing many staff) which in the long term may mean they can use their budget in more (to them) beneficial areas.

 

c)    Using AI for facial recognition can be a violation of privacy for many people. Using AI for translating written material can often be unreliable and inaccurate. Using AI for creative work can improve quality/accuracy, but it takes away creative freedom of artists/writers. Using AI for smartphones and home computers would be a massive concern for violation of privacy and would create mistrust and concerns of safety for most people.

 

  29. Finally in this section, we presented pupils with the pros and cons of increased use of AI listed in the introduction to the Committee’s Call for Evidence on the .gov website.  Perhaps reassuringly, nearly two thirds of the pupils rated the pros and cons presented there as evenly balanced.

 

  30. Pupils made a number of proposals for making AI more transparent and explainable:

 

a)    More teaching about AI in schools

b)   Dissemination of AI explanations on social media

c)    Give more balanced information about both the pros and cons of AI – and avoid presenting it as an obvious positive

d)   Give more information about AI safety and controls

e)    Focus information on the pros, cons and safe use of AI in everyday and domestic settings

f)      “Make AI seem interesting”

g)   Offer adults local classes and online courses about AI

h)   Greater discussion of AI on TV and radio

i)      Provide unbiased AI information on the .gov website

j)      Explain AI impacts on employment

k)    Give more information on the use and safety measures of AI in workplaces.

 

 

TO WHAT EXTENT IS THE LEGAL FRAMEWORK FOR THE USE OF AI FIT FOR PURPOSE?  IS MORE LEGISLATION OR BETTER GUIDANCE REQUIRED?

 

  31. We asked the pupils to rate how much they thought the use of AI is controlled in the UK at the moment.  Most will not have information on the control of AI – but public belief on this issue is relevant in considering the future of the legal framework, the need for transparency and explanation, and the need to disseminate knowledge about the nature and extent of current controls.

 

  32. Using a five-point scale from ‘not controlled at all’ (scoring 1) to ‘controlled a lot’ (scoring 5), the pupils’ mean rating of assumed current control of AI in the UK was 2.4 (almost midway between ‘not much controlled’ and ‘in the middle’).

 

  33. We then asked them to rate how much they thought the use of AI should be controlled.  On the same five-point scale, their mean rating was 2.99:  almost exactly the mid-point of the scale.  They thus thought AI should be subject to more control than they believed it currently receives.

 

  34. On the same theme, we asked the pupils how positive or negative they would feel about the use of AI significantly increasing in the future if it were subject to strong and effective legal controls.  On the five-point scale, their mean positivity rating about increased future use of AI was then 2.48:  almost midway between ‘negative’ and ‘in the middle’.  The addition of strong legal controls thus raised their positivity rating about increasing use of AI only slightly, from 2.26 (without controls) to 2.48.

 

  35. Significantly, even with strong legal controls, the pupils remained more negative than positive about increased future use of AI in this country.

 

  36. On the question of whether future increased controls should be mandated in law, or set out in guidance, just over 50% of the pupils thought controls require the strength of law rather than guidance, as compared with 18% who favoured guidance.  A further 18% thought it wouldn’t matter whether controls were contained in law or in guidance.

 

  37. We asked pupils to propose future controls on AI for the consideration of the Committee.  Their main proposals were:

 

a)    Restrictions on how many jobs can be replaced through AI

 

b)   Requirements for AI applications to be benign (eg painless, kind and controlled)

 

c)    Restrictions on use of AI which would imperil privacy or data security

 

d)   Creation of an offence for use of AI for purposes other than life improvement

 

e)    Make it an offence in itself to use AI for criminal purposes

 

f)      Greater countering of risk of malfunction

 

g)   Greater countering of risk of hacking

 

h)   Requirement for AI-equipped technology intended for use in the home to be capable of being fully switched off, and for the capabilities of such equipment (eg to hear or record sounds or images) to be very clearly declared to potential users

 

i)      Requirements for a human veto over decisions made by AI (eg in sentencing in the justice system)

 

j)      Requirement for positive permissions to be obtained before AI software or connections can be incorporated into equipment

 

k)    Age limits on use of some categories of AI-equipped technology

 

l)      Increased public education and awareness of controls and of necessary risks and precautions of AI use

 

m) Specify some uses of AI, such as facial recognition, as requiring special controls

 

n)   Require creators of AI systems to build limitations into them

 

o)    Regular checking on the operation of AI systems.

 

  38. One pupil wrote their justification for additional controls over AI:  “because artificial intelligence DOES have its downsides, it should be controlled more than it is at this current time”.

 

REVIEW AND SCRUTINY OF DECISIONS INVOLVING AI

  39. Testing their negativity about increasing future use of AI, we asked the pupils to consider decisionmaking involving AI in medical diagnosis.  We asked them whether, if they needed a diagnosis of a medical problem of their own, on which their treatment would be based, they would most trust a well-trained and tested AI system, a well-trained and experienced doctor, or a combination of the two to make the diagnosis.

 

  40. The findings were very clear.  46% of the pupils stated that they would trust a combination of a human doctor and an AI system working together in reaching a diagnostic decision.  This compares with 33% who would trust a doctor working without AI support, and with only 10% who would trust an AI system alone.  It is also significant that the great majority of pupils gave a clear answer to the question – few (10%) took the offered option to say ‘I really don’t know at the moment’.

 

  41. There was a recognition that an inherent mistrust of AI in the medical field is perhaps irrational, even if it is strong:  “I wouldn’t feel right being treated by an AI, I would be scared that my life is in the hands of a robot, but it would probably be better and more efficient than a doctor”.

 

  42. Overall, 57% of the pupils saw a clear role for the use of AI in diagnosis and the choice of treatment, but primarily in combination with a human decisionmaker rather than relying on AI as decisionmaker.

 

  43. Importantly, the pupils were in favour of the use of AI as an aid or adjunct to human decisionmaking, but would not trust a decision made by AI alone.  That is a significant finding, especially given the issues of lack of transparency of AI decisionmaking processes and difficulty in challenging AI decisions.

 

  44. To explore this further, we asked pupils if they could share their reasoning in responding to this question.  Almost seven out of ten (69%) of those who had expressed a view on the question argued the case for their view.

 

  45. One key reason given for supporting the use of AI as an adjunct to, rather than replacement for, human decisionmaking was that decisions with both human and AI input played to the advantages of both, and would probably be more likely to be correct and free of the mistakes or biases of either.  As one pupil summed this up, “having a combination of both trained AI and trained doctor would produce the best chance of accurate diagnosis”.  Another simply wrote “2 is better than 1”.

 

  46. A second key reason was one that concerns the Committee:  that AI decisionmaking is not explicable and transparent, whereas humans can (usually) declare their reasoning.  As one pupil put this in relation to use of AI in diagnosis, “I feel I could trust a doctor more as they would be able to answer the questions I have as they have the understanding on what they have seen, where AI gives the diagnosis but doesn’t tell you how it found it”.

 

  47. Another differentiated by severity of health concern, writing “for certain diagnoses, especially life changing ones, I would prefer a doctor who can explain how they came to this diagnosis, but for more minor things I would trust AI”.

 

  48. In their submissions, pupils clearly linked transparency of decisionmaking with trust in the decisions made.  A decisionmaker able to justify decisions is “therefore more trustable”.

 

  49. Another pupil described diagnostic decisionmaking by AI alone as “scary, distant and weird”.

 

  50. A third key reason given for not wanting decisions to be made by AI alone was an inherent lack of trust in technology – eg “I don’t trust the machine”.

 

  51. There was also a concern that the skill of AI technology is limited to a specific issue, and AI is likely to come across situations that a human doctor can deal with but which are outside the capability of a particular AI system.  As one pupil put it, “it might be something the AI has never come across before and it might not know how to deal with it”.

 

  52. A fourth key reason was the wish for human empathy and understanding for fellow humans to be coupled fully with any benefits of speed and accuracy of AI decisionmaking.  “The doctor can tell me why and be empathic.”

 

  53. A fifth key reason was again the concern that AI alone may be subject to hacking and malfunction, which coupling with a human decisionmaker can counter.

 

  54. The sixth key reason given was the general lack of knowledge about AI, what it can and cannot do, and how it is controlled.  There is thus a concern about the unknown.

 

  55. It is worth quoting one pupil who put forward the view that it is always right for humans to rely on humans – “I think we’ve got too much AI, and we should rely on human kind”.

 

  56. We asked pupils to put forward proposals for how use of AI could be more closely monitored and rendered more challengeable in the future.  Here is their list of key proposals for the Committee’s consideration:

 

a)    Make all uses of AI subject to human expert oversight and control

 

b)   Train AI to define the category of analysis it has used to reach a decision – for example, although it may not be able to determine what pattern it has identified in examining cells for potential cancers, it should be able to declare that it has based its decision on pattern recognition

 

c)    Differentiate between human and AI contributions to decisionmaking where AI has been used to make a decision alongside a human – requiring the human to state both their reasoning, and where they have relied on AI input

 

d)   Acknowledge both human ‘gut feeling’ and AI ‘unexplained contribution’ as areas of decisionmaking that cannot be fully transparent.  One comment on this theme was “AI has to be created by a person, so it can be an assistant – but sometimes a gut feeling is the best thing that can happen to a patient”

 

e)    Require statements by creators of AI systems about how they are intended and designed to work and reach decisions

 

f)      Conduct more research on understanding AI – “instead of trying to improve the AI, I think you should get more understanding of the way it processes and thinks”,  “if we continue to study AI we should be able to try and understand it more”

 

g)   Further than this, use AI itself to carry out research on how AI systems make decisions and how this can be explained in human terms (which could also create “a new line of jobs to analyse algorithms” and enhance the ability to “detect and prevent any bias, or waver in any data, possibly saving lives”, as well as “advancing the country’s understanding and development of AIs and computer software”)

 

h)   Require AI systems to self-validate by “making them repeat their own processes”

 

i)      “Make sure that AI isn’t being manipulated by a third party with unpleasant ambitions”

 

j)      And finally, one pupil’s simple caution on increasing use of AI:  “be careful”.

 

 

(November 2022)