The Bar Council — Written evidence (NTL0048)
About us
The Bar Council represents approximately 17,000 barristers in England and Wales. It is also the Approved Regulator for the Bar of England and Wales. A strong and independent Bar exists to serve the public and is crucial to the administration of justice and upholding the rule of law.
Scope of response
This submission has been drafted by the Bar Council’s IT Panel and addresses all questions posed in the call for evidence.
Question 1: Do you know of technologies being used in the application of the law? Where? By whom? For what purpose?
- Technology is used in the application of the law in many forms, from automated artificial intelligence algorithms to everyday email and word processing. For example:
- Automated facial recognition technology (AFT) was, until recently, in use by police forces;
- Predictive analytics and algorithms are being used by solicitors, and by the judiciary in the US;
- Technology for remote hearings is used by court users including the judiciary, Litigants in Person (LiPs), barristers, solicitors, witnesses and, to a limited extent, observers; and
- Case management software is used internally by solicitors and is being rolled out across various jurisdictions (e.g., the Crown Court Digital Case System and MyHMCTS).
- Enforcement agencies have had to increase their use of technology to keep pace with its increased use by lawbreakers, and email and word processing are now part of all legal working lives.
- Additionally, there has been a rise in the use of online dispute resolution technologies (e.g. Smartsettle to assist in negotiations and blind bidding; MicroPact’s case management applications used by the US Government) to support the different stages of dispute resolution.
- The purposes vary, but the aims include improving efficiency, increasing profit or reducing costs, upholding the rule of law and promoting access to justice.
- Given this legal technology landscape, and in order to secure the most important aims of upholding the rule of law and promoting access to justice, it is essential that the impact of new developments is assessed, to ensure that fairness and equality of arms are maintained in the judicial process and that administrative uses follow a joined-up and transparent approach.
Question 2: What should new technologies used for the application of the law aim to achieve? In what instances is it acceptable for them to be used? Do these technologies work for their intended purposes, and are these purposes sufficiently understood?
- Any new technology used in the application of the law should be deployed with, at least, the following aims:
- To uphold the rule of law;
- To maintain and increase access to justice, leaving no-one behind;
- To make current legal processes more efficient and less costly for users; and
- To ensure that fundamental rights, such as those surrounding Legal Professional Privilege and privacy, are not diminished or infringed.
- Whether it is acceptable for a technology to be used is fact-specific to the technology and the purpose, and is therefore impossible to answer in the abstract. Technology may be developed in a purpose-neutral way, such as word processing or mobile phone technology; these can be used well or badly. Other forms of technology are purpose-specific, such as some artificial intelligence (AI) developments. While these can work well if designed for specific uses, with proper account taken of the intended use and of ethics in the design, they can also be put to other purposes. Such use may fail to take account of the limitations inherent in the design; indeed, users may not even know how the process of determination works. The result can be biased outputs, and at the very least a lack of transparency in how the output is reached. An example of the problems generated by the use of AI is the COMPAS algorithm, used by courts in the US to assess the risk of reoffending, which produced outputs that discriminated against people of colour and led to serious injustice.
- Another example of a lack of transparency and understanding of the impacts of new technology is the replacement of face-to-face hearings with remote hearings. While the technology was not developed for this purpose, it has been very effective in maintaining access to justice for some during the COVID-19 pandemic. However, there is a clear disparity in access to justice arising from unequal access to the resources needed to use the technology effectively, especially in ensuring that it is available to all parties. While in many cases the technology has improved accessibility and connectivity, and therefore access to justice, for court users, those who lack the necessary equipment or an effective internet connection, those with low or no IT literacy, some older groups, and those with disabilities or other vulnerabilities that make it difficult to engage with remote hearings are potentially ‘locked out’ of the system.
- Accordingly, there is a clear need for evidence of the effects of new technologies, both to assess whether they will in fact work as predicted and to ensure the availability of such technology to court users, as required. For each proposed use, an assessment will be needed as to whether changes to the technology are necessary to meet the aims set out in paragraph 5 above, and government should carry out that assessment before implementation. For example, regulation of the use of AI should, as an obvious starting point, ensure transparency. The Bar Council believes that, in some instances, new technologies are being used, or considered for use, that are insufficiently understood by users and by those who will be affected by them.
- The SHERPA Project, an EU Horizon 2020 project coordinated by Prof. Bernd Carsten Stahl with a diverse team of stakeholders (including Shobana Iyer of the Bar Council’s IT Panel and Co-Vice Chair of the Legal Services Committee), has worked to create a set of recommendations to ensure that AI is ethical and supports human rights. The recommendations are based on the concept of three overlapping ecosystems which need to be addressed: concepts; knowledge and action; and governance.
- The full set of recommendations may be accessed here: https://www.project-sherpa.eu/recommendations/. The scenario concerning predictive policing may be of interest for discussion: https://www.project-sherpa.eu/predictive-policing/
Question 3: Do new technologies used in the application of the law produce reliable outputs, and consistently so? How far do those who interact with these technologies (such as police officers, members of the judiciary, lawyers, and members of the public) understand how they work and how they should be used?
- Again, it is not possible to answer the question about consistent, reliable outputs in the abstract.
- There is obviously a spectrum of understanding across these user groups, but it is fairly safe to say that most of the population has a limited understanding of how computer-based technology works. In general, this is because users do not need to understand a technology in order to use it: there are thousands of patents covering smartphone technology, but understanding that technology is not essential to make a call, send a text, take a picture or use an app. In general, technology should be intuitive. To the extent that it is not, it is impossible to say whether the outputs are reliable. Even if outputs are consistent, and users come to expect them, consistency does not confirm reliability.
- One problem in assessing reliability is the absence of standards against which to assess it. The government is best placed to gather evidence on the use of technology in the justice system and to set standards measured against the aims in paragraph 5 above, but it has not taken the opportunity to do so.
Question 4: How do technologies impact upon the rule of law and trust in the rule of law and its application? Your answer could refer, for example, to issues of equality. How could any negative impacts be mitigated?
- Please refer to paragraphs 5-10.
- In addition, Iain G Mitchell QC, of the Bar Council’s IT Panel, recently published an article detailing how technologies such as biometrics and similarly sophisticated AI could pose a threat to fundamental rights: https://www.lawsocieties.eu/news/biometric-technology-and-fundamental-rights-by-iain-g-mitchell-qc/6001864.article
- This is also explored in Fair Trials’ recent ‘Automating Injustice’ report, which details the serious problems and lack of trust engendered by the Amsterdam Municipality’s use of risk modelling and profiling systems (sections 1.1.2, 1.1.3): https://www.fairtrials.org/sites/default/files/publication_pdf/Automating_Injustice.pdf
- Issues of equality are central to whether these technologies work effectively. Currently, they do not. AFT is a particular concern, with the South Wales Police case being one of the most high-profile examples of inherent racial bias in technology: https://www.theguardian.com/technology/2020/jun/23/uks-facial-recognition-technology-breaches-privacy-rights
- In R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058, the Court of Appeal found that, notwithstanding the legal framework in existence:
- The police’s use of AFT, which engaged Article 8(1) of the European Convention on Human Rights, was not in accordance with the law for the purposes of Article 8(2);
- Accordingly, the police’s Data Protection Impact Assessment did not comply with section 64(3)(b) or (c) of the Data Protection Act 2018; and
- The police failed to comply with the Public Sector Equality Duty in section 149 of the Equality Act 2010 prior to, and in the course of, their use of live AFT.
- Further, serious concerns remain about the accuracy of AFT, and gender, racial and demographic bias are still issues to be addressed: https://www.nature.com/articles/d41586-020-03186-4
- The use of some technology can improve access to justice for some. However, an inherent requirement to use technology is highly likely to favour those who are better resourced and therefore able to afford it, and to discriminate against the groups identified in paragraph 7 above.
Question 5: With regards to the use of these technologies, what costs could arise? Do the benefits outweigh these costs? Are safeguards needed to ensure that technologies cannot be used to serve purposes incompatible with a democratic society?
- The Bar Council is not in a position to assess the potential costs or savings associated with the purchase and use of such technologies, beyond pointing out obvious costs such as software licences, hardware purchases, employment of staff, training of staff and users, and infrastructure requirements, such as increased broadband bandwidth.
- The most recent figures available from HM Courts & Tribunals Service (HMCTS) are from July 2021. These state that HMCTS has allocated £102 million to upgrading technology and modernising the estate since Autumn 2019: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1006623/Courts_System_Funding_Update_-_July_2021.pdf. Given that the initial Court Reform programme was costed at over £1 billion, with additional funding provided subsequently (of which the £102 million was a part), this seems a relatively small price to pay to ensure that courts are equipped with the modern technology required to deliver justice in the 21st century.
- Safeguards should be incorporated as part of the design and creation of new technologies, as well as during their use. This is consistent with the ethos of “Privacy by Design”, supported by the ICO and data professionals.
- It is also essential that stakeholders, including the relevant professional membership bodies (the Bar Council, the Law Society and CILEX), are involved in the design or adaptation of existing technologies, so that appropriate safeguards can be incorporated. Without stakeholder involvement, designers or adaptors who lack knowledge of the justice system risk missing essential user information. Once implemented, these technologies should be subject to continual assessment so that any issues are identified and resolved before damage is caused.
Question 6: What mechanisms should be introduced to monitor the deployment of new technologies? How can their performance be evaluated prior to deployment and while in use? Who should be accountable for the use of new technologies, and what accountability arrangements should be in place? What governance and oversight mechanisms should be in place?
- As stated in paragraphs 23-24 above, there should be continual assessment; this should be measured against the achievement of the aims in paragraph 5, above. No new technology should be introduced unless there is evidence that it is safe, secure and beneficial to the justice system and its users. User testing is always helpful at the design stage and when assessing functionality. Adequate and meaningful consultation should take place with stakeholders, so as to ensure that the technology is fit for purpose.
- An example of the failure to measure the impacts of introducing new technologies can be seen in the recent Post Office cases concerning the introduction of the Horizon system: https://ials.blogs.sas.ac.uk/2019/06/25/the-use-of-the-word-robust-to-describe-software-code/. The problem lay less in the Post Office system itself than in the way the courts dealt with prosecutions based on a faulty IT system. The lack of transparency and equality of arms in the criminal cases caused many miscarriages of justice, the impact of which was felt in the tragic loss of human life and the distress caused to individuals and families.
- Accountability should rest with those who commission, specify, design, install and monitor the technology. Moreover, there should be a properly qualified team dedicated to overseeing the adoption of any new technology.
- Final accountability currently lies with Ministers and the executives providing oversight. This is neither very effective nor independent, and an independent regulator may be necessary. However, where the use of the technology is covered by existing regulation, oversight will fall to the relevant regulator: for example, the Information Commissioner in respect of personal data.
Question 7: How far does the existing legal framework around new technologies used in the application of the law support their ethical and effective use, now and in the future? What (if any) new legislation is required? How appropriate are current legal frameworks?
- For some technologies, such as AI, it is not clear that an effective legal framework exists. Where there is legislation, it will have been created with policy objectives in mind and, it is to be hoped, with high ethical standards. Where there is professional regulation, it is likewise anticipated that the maintenance of high ethical standards is incorporated.
- The ability to enforce such laws is not equal. For example, the costs of litigation are often higher than can be met by all but the wealthiest individuals. Alternative forms of dispute resolution are also expensive and not well advertised. If a legal framework exists but is ineffective, passing new laws will have little impact.
- The Law Commission has recently identified a number of areas involving technologies which require consideration for change. The Bar Council has contributed to its consultation on the areas which would benefit from such consideration: https://www.barcouncil.org.uk/uploads/assets/cf8c9623-7920-4c7c-9126b297a697dbfd/Bar-Council-response-to-Law-Commission-consultation-on-14th-Programme-of-Law-Reform.pdf.
Question 8: How can transparency be ensured when it comes to the use of these technologies, including regarding how they are purchased, how their results are interpreted, and in what ways they are used?
- The Bar Council is not in a position to advise on procurement and interpretation of data, as such matters fall outside its available expertise.
Question 9: Are there relevant examples of good practices and lessons learnt from other fields or jurisdictions which should be considered?
- There are lessons that could be learned from different areas and jurisdictions. For example, in the Fintech space, some of the Kalifa Review’s recommendations could be adopted for legal technology: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/978396/KalifaReviewofUKFintech01.pdf
- Other jurisdictions have been considering these issues for some time and have been active in implementing new technologies, as was apparent from the International Forum on Online Courts, a two-day conference organised by HMCTS in December 2018. Examples include Canada (https://www.justice.gc.ca/eng/rp-pr/jr/jt2-tmj2/jt2-tmj2.pdf), Singapore (https://www.supremecourt.gov.sg/services/visitor-services/court-facilities/technology), India and China.
- The SHERPA Project recommendations (see paragraph 9 above).
- The Alan Turing Institute’s guidance on ‘Understanding artificial intelligence ethics and safety’: https://www.turing.ac.uk/research/publications/understanding-artificial-intelligence-ethics-and-safety
Question 10: This Committee aims to establish some guiding principles for the use of technologies in the application of the law. What principles would you recommend?
- The technology should be used to give effect to the aims in paragraph 5, above. The technology should also be:
- transparent, trustworthy and ethically designed, so as not to deny individual autonomy, recourse and legitimate rights;
- flexible and adaptable, avoiding unlawful discrimination and bias; and
- secure, incorporating privacy by design.
- The processes of designing and introducing new technology should fully engage with multidisciplinary stakeholders using effective consultation.
- Any technology, once implemented, should be closely monitored, continually assessed and adapted as needed, after consultation with multidisciplinary stakeholders.
1 October 2021