Written evidence submitted by Professor Suresh Renukappa; Ryan Bhuttay; Chandrashekar Subbarao; Professor Subashini Suresh; Professor Prashant Pillai; Professor Manjunath Aradhya; Professor Tonny Veenith
(UAI0022)
Executive Summary:
- Artificial Intelligence (AI) is crucial to realising the full promise of innovative governance, from managing natural resources and driving economic growth to empowering citizens.
- Governments have found it increasingly difficult to provide services to communities, particularly in the face of austerity, evolving government policies that affect local obligations, shifting demographics, climate emergencies, and diverse citizen needs.
- Local governments in the UK are investigating the use of AI to help their frontline employees provide services more effectively and efficiently, whether by fully or partially automating operations. Although there have been several successful initiatives using chatbots for citizen interactions, predictive analytics for decision support, and back-office automation, little is known about the real-world difficulties local governments encounter in implementing these programmes in the UK.
Written evidence:
- By providing creative ways to improve efficiency, transparency, and citizen participation, the use of AI in governance is revolutionising the field of public sector administration. For instance, urban administrators are investigating the use of cutting-edge information technology, such as AI, to provide governance services, especially in large cities across the globe. The main goal is to use AI’s capabilities to give citizens quick access to pertinent information without requiring high technological know-how. AI’s speech, text, and image processing abilities are mimicking those of humans.
- Deep neural networks loosely mimic the human brain, passing input data through layers of interconnected nodes to produce an output. Frameworks such as the Unified Theory of Acceptance and Use of Technology (UTAUT) are intended to assist public sector administrators in negotiating the challenges associated with incorporating AI-driven solutions into practice.
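The layered processing described above can be shown with a purely hypothetical sketch (not drawn from any system cited in this submission): inputs pass through weighted layers of nodes, each applying a non-linear activation, to yield a single output. All weights and input values below are invented for illustration only.

```python
import math

def relu(x):
    # Non-linear activation: negative values are clipped to zero.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One fully connected layer: each node takes a weighted sum of the
    # inputs, adds its bias, and applies the activation function.
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def sigmoid(x):
    # Squashes the final score into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative, hand-picked weights for a 2-input, 2-hidden-node,
# 1-output network (no training is performed here).
hidden = layer([0.5, 0.8], [[0.9, -0.4], [0.3, 0.7]], [0.1, -0.2])
output = sigmoid(sum(w * h for w, h in zip([1.2, -0.6], hidden)) + 0.05)
print(round(output, 3))
```

Real deep networks differ only in scale: many more layers and nodes, with weights learned from data rather than chosen by hand.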
- Making AI-based essential services central to innovative governance is paramount, but such services should be socially conscious and available for the benefit of citizens. For instance, AI algorithms affect how people view cities concerning urban governance and management. A study conducted in Poland investigated how algorithmic governance affected citizens’ acceptance of algorithms and their perceptions (Olesky et al., 2023). The experimental study’s findings demonstrated better acceptance when algorithms were presented as collaborating with people rather than acting alone to make judgments.
- Unlike in a city where algorithms work alongside human municipal officials, a town using unsupervised algorithms for urban governance may be seen as hostile and unmanageable. However, some disadvantages are associated with using innovative technologies for governance. The primary drawback is the loss of interpersonal communication, which many individuals regard as an essential component of public service. Furthermore, a sizable portion of the population lacks basic computer skills and is digitally illiterate, and finds it extremely challenging to access and comprehend innovative governance. In addition, the possibility of citizens’ private information being stolen from government services is always present. Cybercrime is a severe problem, since a data breach can cause the public to lose faith in the government’s capacity to govern.
- A focused analysis of the body of research on AI and national security was included in the Government Communications Headquarters (GCHQ) report (NCSC, 2024). According to this report, AI presents the UK national security community with many opportunities to enhance the effectiveness of current procedures. For example, AI techniques can quickly yield insights from heterogeneous information, spotting links that human operators might otherwise miss. However, deploying AI may raise new privacy and human rights issues that need to be evaluated within the current legal and regulatory framework, in light of national security considerations and the authority granted to UK intelligence agencies. Stronger policies and guidelines are required to guarantee that the privacy and human rights consequences of national security applications of AI are continuously examined as new data analysis techniques are implemented.
- Intelligence agencies must function within a well-regarded legal and regulatory framework that balances safeguarding citizens' way of life from risks against preserving fundamental individual rights and liberties. Responsibilities regarding human rights and privacy must be taken extremely seriously: the necessity and proportionality of any privacy incursion should be evaluated, both when contemplating the use of operational data to train and test AI software and throughout deployment, so that the impact on privacy is appropriately considered in every situation. AI will be essential to handling the ever-growing volume and complexity of data and to building defences against malevolent actors’ AI-enabled threats. As AI develops and new applications are discovered, the governance framework must expand to guarantee that its usage stays ethical. Continued engagement and learning are therefore essential so that recommendations reflect the most recent research.
- Integrating AI into the nationwide public sector management system is complex and can present several challenges, particularly concerning output accuracy and statistical reliability. One primary concern is the generation of inconclusive or misguided findings, which could lead to improper decisions. AI models may occasionally produce ambiguous results that leave room for misinterpretation by public sector decision-makers. Such instances are often the product of AI models being fed biased data, or of models that are too simple or too complex for the data they process.
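The biased-data point above can be sketched with a purely illustrative example (the scenario, groups, and figures are all invented, not drawn from the submission): a trivial model fitted to skewed historical decisions simply reproduces the skew.

```python
# Hypothetical historical decisions: (group, approved) pairs in which
# group "B" applicants were approved far less often for similar cases.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def approval_rate(group):
    # Fraction of historical cases in this group that were approved.
    decisions = [ok for g, ok in history if g == group]
    return sum(decisions) / len(decisions)

def predict(group):
    # A trivial "model": approve if the historical rate exceeds 50%.
    return approval_rate(group) > 0.5

# The learned rule mirrors the bias in its training data rather than
# judging each case on its merits.
print(predict("A"), predict("B"))
```

Real models are far more sophisticated, but the underlying risk is the same: patterns in biased training data are carried forward into future decisions.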
- Another critical concern with the automation of public sector services is that AI strips away the human touch from service delivery, which is crucial in building citizens’ trust. As a result, citizens may feel that they are interacting with a robot rather than a public sector professional. Such dynamics can play a pivotal role in shaping citizen experience, as AI lacks the empathy and emotional compassion that are significant in citizen care. The implementation of AI is also associated with several trust and financial challenges. Developing and tailoring AI systems for the public sector is expensive and resource-intensive, and the integration of such technology demands clear goals and objectives in line with the vision of the public sector system. Poor implementation of AI systems can reduce public trust in the delivery of public services.
- ‘Black-box Artificial Intelligence’ is a term used to refer to AI models built on complex algorithms such as deep learning. Such models process troves of data through several layers of analysis, which makes them extremely difficult to debug when their outputs require explanation. They are often characterised as ‘opaque’, in that their decision-making is not transparent to the system operators. The general appeal of integrating AI systems into the public sector lies in their ability to support swift and accurate decision-making in complex environments. However, given their opaque nature, public sector decision-makers may hesitate to trust the information such systems provide without understanding the underlying reasoning that produced it. Furthermore, this opacity can make accountability challenging in scenarios where AI systems make mistakes.
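The contrast between transparent and opaque decision-making can be sketched as follows (a hypothetical illustration; the rules, thresholds, and numbers are invented): a rule-based system can state the reason for its decision, whereas a stand-in for a black-box model returns only an answer with no rationale.

```python
def transparent_decision(income, debt):
    # Each rule is explicit, so the reasoning can be audited and explained.
    if debt > income * 0.5:
        return False, "debt exceeds 50% of income"
    return True, "debt within acceptable ratio"

def opaque_decision(income, debt):
    # Stands in for a deep model: an arbitrary internal score whose
    # derivation offers no human-readable rationale.
    score = (income * 0.73 - debt * 1.41 + 12.0) % 7
    return score > 3.5, None  # no explanation available

print(transparent_decision(40_000, 25_000))
print(opaque_decision(40_000, 25_000))
```

The first function can justify its output to a citizen or an auditor; the second cannot, which is precisely the accountability gap the paragraph above describes.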
- Several concerns have been raised regarding the ethical considerations of implementing AI systems within the public sector. One of the foremost is the practice of data sharing. Data sharing and privacy practices are increasingly a point of dispute in implementing AI systems in government. Long-term system upgrades demand collaboration with private service providers and the sharing of sensitive citizen data, which is often used to train newly developed systems so that they produce accurate analysis. However, this is becoming a concern amongst citizens, who worry about the commercialisation of their data. In this regard, consent is a critical element in planning how AI systems are to be implemented. Keeping citizens informed about how their data will be processed and used for analysis will help develop AI models that reflect societal preferences.
January 2025