Written evidence submitted by defenddigitalme (OSB0188)

 

About defenddigitalme             
defenddigitalme is a campaign group for children’s data privacy and digital rights formed in 2017 in response to concerns from parents and privacy advocates about increasingly invasive uses of children’s personal data in education. We support safe, fair and transparent data in education, in England and beyond. More information: http://defenddigitalme.org/


 

 

We have been involved in three years of online harms discussions, round tables, meetings and debate, including with the Department for Digital, Culture, Media and Sport. We respond only on the principles within our specialist field, school safety technology and child rights, and address the following questions from the Call for Evidence:[1]

 

(a)   What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?

(b)   the questions on algorithms and user agency

 

We note that the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school safety tech companies. We have researched school safety tech since 2016. The term, used by the DCMS to promote the market,[2] covers a wide range of different technologies under the same umbrella, but all of them, in essence, monitor every user activity, bypassing system protections regardless of encryption, with access to all screen content, video, camera and chat functions, many offline as well as online, 24 hours a day, 365 days a year, inside and outside school settings.

 

While there are plentiful questions, such as the scope of harmful content or activity to which the framework would apply, and how and why it would apply to a webpage but not to a book in print or online, we leave these further areas for others to address.

1. Recommendations

 

  1. ‘Safety by design’ in online harms is a vague, generally defined term: safety from what, for whom, and at what point in time? The term is widely used for rather different technologies.

 

  2. The inherent risks of algorithmic content moderation and behavioural monitoring are ignored in the Bill. Current standards of practice in content moderation, surveillance, censoring/blocking and behavioural monitoring are opaque, yet the Bill intends to require more of this unknown. Some of what the Bill proposes risks making lawful what is unlawful today, including general monitoring activity.

 

  3. There is a chilling effect on digital participation when users know their activity is watched. Even the NSPCC acknowledges on its Childline website (screenshots at the end) the risks and harms of digital monitoring for children, including on access to trusted services. The chilling effect of surveillance on participation online is well documented; younger people and women are more likely to be negatively affected.[3]

 

  4. There is a systemic need for change and improvement across the areas of safety by design, algorithmic recommendations, minimum standards, and default settings.

 

  5. A code of practice is required for the automated monitoring of children’s digital activity. We recommend an amendment to guarantee that safety tech is not used invasively by public authorities to intrude on or surveil children’s digital activity.

 

  6. The code should include obligations on companies, such as the reporting of error rates, blocking volumes, and the types of sites and keywords that trigger children to be profiled by such systems. The number of children flagged, and for what activity, as well as how many times a photograph or video was recorded of a child, must be transparently available figures. The current practice of safety tech taking covert photos or video footage of children, as well as monitoring outside school hours and premises, should be banned.

 

  7. We make seven key recommendations for policy and eight for practice, for consideration should such a Code of Practice be accepted as an amendment.

2. Children’s right to be heard in the consultation (agency / autonomy)

 

  1. We also ask that the Committee considers how it will address Article 12 of the UN Convention on the Rights of the Child, which provides for the right of children to express their views in every decision that affects them. Any decision that does not take into account the child’s views, or does not give their views due weight according to their age and maturity, does not respect the possibility for the child or children to influence the determination of their own best interests.

 

  2. The fact that a child is very young or in a vulnerable situation (e.g. has a disability, belongs to a minority group, is a migrant) does not deprive him or her of the right to express his or her views, nor reduce the weight given to the child’s views in determining his or her best interests. Children should also be able to express their views anonymously.

 

  3. The Committee should also be alert to the bias inherent in responding groups or individuals that represent children or parents from particular positions of authority, whether in child protection or otherwise: they do not represent all parents or all children, or those without such representation, whereas the reach of the Bill will affect everyone, of all ages. We ask the Committee to consider how it will address this weighting and bias in its own consultation approach.


3. Foreseeable problems of algorithms used to fulfil safety duties

 

  1. There are many significant issues inherent in today’s safety tech practices, including the security of passwords, companies’ access to banking[4] or other sensitive data, and deeply invasive practices.

 

  2. Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. What is considered significant has drifted from terrorism into the vaguer and broader terms of extremism and radicalisation, and away from an assessment of intent and capability of action into interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of disposition towards such ideas. We believe that policing the thoughts of a developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children’s lives.

3.1 Algorithmic quality and practice are measured only by providers themselves

 

  1. The quality of systems is opaque and controlled by companies. Poor data quality also plays a role in algorithmic decision-making and is well recognised in the area of child protection, yet the extent of risk in this area is not known since the systems are closed. As Emily Keddell, Senior Lecturer at the University of Otago in New Zealand, says of child protection risk prediction systems:

 

    1. "the data used to inform such models are incorrigibly suspect. Attempts to improve it lead to increasingly intrusive data use and challenges to legal equity. When such tools over-identify those least able to refute their ‘high risk’ label, we should all be concerned."[5]

 

  2. The Illinois Department of Children and Family Services shut down its Chicago algorithmic child abuse prevention program in December 2017, after the data-mining software failed to flag at-risk children who died, while swamping caseworkers with alerts that thousands of others were at imminent risk of death. The agency's director called the technology unreliable. “Predictive analytics [wasn’t] predicting any of the bad cases,” Illinois Department of Children and Family Services director Beverly Walker told the Chicago Tribune. “We are not doing the predictive analytics because it didn’t seem to be predicting much.”

 

  3. Procurement of algorithms for predictive risk assessment has no consistent, transparent sector standards in practice. These systems scan for mental health, terrorism and other highly sensitive subjects with no consistency, no openness to scrutiny, and no independently audited minimum health and safety standards.

3.2 The effects on children’s agency and autonomy

  1. Children are denied agency in school safety tech. We know of cases in which nine- and ten-year-olds have had their photos taken via webcam by school computers without their knowledge or parental permission. See also Robbins v. Lower Merion School District (2010), U.S.
     
  2. Some monitoring activity can be extremely invasive. Systems claim to be anonymous but are not, and reveal why in their own marketing materials.
     
  3. A public case study is published in full in eSafe’s online marketing materials.[6] We are concerned it may risk exposing information that would cause distress to the individual, those known to her, or readers, so we have redacted the college name, the incident, and the full name of the school staff member in this text extracted from the image:

"X Sixth Form College has relied on eSafe to protect users for the last 6 years. Mental Health & Deputy Safeguarding Officer X can still recall one of the first serious incidents the service detected. A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed XXXXXXX. Despite the device being used offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene."

3.3 What is the role of parents and family?

 

  1. Article 3(2) of the UNCRC provides: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (our emphasis)
     
  2. Filtering, monitoring and blocking technology is currently used without taking account of the rights and duties of a child’s parents and guardians, and may in fact infringe upon their rights where monitoring extends into the home and the child’s personal life, such as communications exchanged between parents and children.

 

  3. In the case of The Christian Institute and others (Appellants) v The Lord Advocate (Respondent) (Scotland), the Supreme Court judgment noted Article 8 of the UNCRC (“States Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognized by law without unlawful interference”), the importance of determining whether a less intrusive measure could have been used without unacceptably compromising the achievement of the objective, and that “the privacy of a child is also an important test.”

 

  4. The UNCRC rights and duties of parents in Articles 5 and 18 are also ignored in the use of safety tech and are relevant to the wider context of the Online Harms Bill.

 

  5. Article 5

States Parties shall respect the responsibilities, rights and duties of parents or, where applicable, the members of the extended family or community as provided for by local custom, legal guardians or other persons legally responsible for the child, to provide, in a manner consistent with the evolving capacities of the child, appropriate direction and guidance in the exercise by the child of the rights recognized in the present Convention. (our emphasis)

 

  6. Article 18(1)

States Parties shall use their best efforts to ensure recognition of the principle that both parents have common responsibilities for the upbringing and development of the child. Parents or, as the case may be, legal guardians, have the primary responsibility for the upbringing and development of the child. The best interests of the child will be their basic concern. (our emphasis)

3.4 Children’s rights depend upon evolving capacity not age

  1. The term “children” refers to all persons under the age of 18 within the jurisdiction of a State party, without discrimination of any kind, in line with articles 1 and 2 of the Convention.

 

  2. Article 5 of the UNCRC recognises both the role of legal guardians, and their position in relation to the State, as providers of direction and guidance, and the importance of children’s evolving capacity in questions of agency, not age.

 

  3. Article 5 introduces the idea that children should be able to exercise their rights with agency and autonomy as they acquire the competence to do so.[7]

 

  4. Adopted by the UN Committee on the Rights of the Child in February 2013, General Comment No. 16 addresses states’ obligations regarding the impact of business on children’s rights. It is one of the most recent pieces of international law available on business and children’s rights. It includes guidance on the measures of implementation required to prevent and remedy violations of children’s rights by business actors, to ensure business enterprises carry out their responsibilities in the realisation of the rights of the child, and to encourage business to contribute positively to the realisation of those rights. The General Comment is guided by the principles of the CRC throughout: the best interests of the child (Article 3(1)); the right to non-discrimination (Article 2); the right of the child to be heard (Article 12); and the right to life, survival and development (Article 6).

 

  5. General Comment No. 14 (2013), on the right of the child to have his or her best interests taken as a primary consideration, reiterates[8] that the evolving capacities of the child (Article 5) must be taken into consideration when the child’s best interests and right to be heard are at stake.

 

  6. While children are seen as in need of protection from online harms in the online safety tech agenda, insufficient attention is paid to preventing the potential for greater exploitation and risk in the trade-offs that result from using technology such as age assurance and age verification.

 

  7. Such tools often demand additional data collection from both children and their relations to verify a child’s identity. Some methods require the creation of a permanent, high-value, high-risk biometric record with yet another commercial third party. These are too often inappropriately suggested by policy makers as a solution for what are trivial transactions. Debate ignores the associated lifetime risks, or that children’s data are then mined as a for-profit training database for the product owners.[9] (defenddigitalme, 2021)

 

3.5 What about families’ views about safety tech?

 

  1. Not only are these tools deeply invasive, but the arrogance of assumptions about their technology capabilities, glossed over in panel events such as this, or made in statements such as “parents want...”, can be common. If those assumptions are reflected in the technology’s design then there could be very significant problems, and yet companies are not willing to open to scrutiny those questions of bias in language, in conversation analysis, and in keyword library matching, for example.

 

  2. CEOs talk as if parents have no role here except to accept what the company designs and enforces upon their children’s digital activity. Neither policy makers nor companies should assume that parents want safety tech companies to remove their autonomy, or that they accept the secrecy in which it happens today.

 

  3. Nearly 90% of parents asked in our commissioned 2018 poll (n=1,004) said that they believe children and guardians should be informed how such monitoring works, as well as what keywords trigger flags like 'gang activity' or 'suicide risk'.

 

  4. Neither should it be assumed that children want it. Recognising that children are rights holders needs realisation and action by duty bearers. Sarah Drummond, founder of Snook and a Google Democracy Fellow (2011), who has done extensive work on digital safety tech with the Scottish Government, has written: “monitoring children’s internet use is a pattern that works to protect rights, but doesn’t meet their personal user needs around privacy and trust.” (Making it safer online by design, January 2020.)

 

  5. The Australian eSafety Commissioner carried out research in 2019 and found that, “when it comes to monitoring, young people were split in their views. While the majority (71%) believed that monitoring systems scanning messages and content in the background were helpful in preventing negative experiences, over half (57%) were uncomfortable with these features running in the background. A sizeable minority found monitoring features intrusive (42%) and were unsure about their effectiveness in ensuring online safety (43%).”

 

We would welcome the opportunity to address any questions the Committee may have on these topics.

 

 

28 September 2021



[1] Call for evidence https://committees.parliament.uk/call-for-evidence/567/

[2] ‘The UK Safety Tech Sector: 2021 Analysis’ considers growth of this sector in the UK during 2020 https://www.gov.uk/government/publications/safer-technology-safer-users-the-uk-as-a-world-leader-in-safety-tech

[3] Penney, J. W. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, 6(2). https://policyreview.info/articles/analysis/internet-surveillance-regulation-and-chilling-effects-online-comparative-case

[4] Smoothwall FAQs https://kb.smoothwall.com/hc/en-us/articles/360002135724-Frequently-Asked-Questions-FAQs-

[5] Risk prediction tools in child welfare contexts: the devil in the detail, HUSITA, http://www.husita.org/risk-prediction-tools-in-child-welfare-contexts-the-devil-in-the-detail/

[6] Source https://www.esafeglobal.com/media/1113/esafe-monitoring-for-safeguarding.pdf permanent copy at https://defenddigitalme.com/wp-content/uploads/2019/10/esafe-monitoring-for-safeguarding1.pdf

[7] CRIN (Child Rights International Network), Article 5: Parental guidance and the child's evolving capacities, https://archive.crin.org/en/home/rights/convention/articles/article-5-parental-guidance-and-childs-evolving-capacities.html

[8] UNCRC General comment No. 14 (2013) https://resourcecentre.savethechildren.net/node/7517/pdf/1625_g.14_original.pdf

[9] The words we use in data policy: putting people back in the picture, defenddigitalme (2021) https://defenddigitalme.org/research/words-data-policy/