Written evidence submitted by Dr Charmele Ayadurai, Assistant Professor of Economics and Finance, Durham University, United Kingdom and Dr Sina Joneidy, Lecturer in Digital Enterprise, Teesside University, United Kingdom (TFP0003)



Banking on Artificial Intelligence





The UK, as the 21st century financial hub of the world, is challenged to adopt AI (Artificial Intelligence) but at the same time is challenged by it. If banks do not adopt AI, they will become dinosaurs and soon be extinct. Yet if banks embrace AI, they need to ensure that no harm comes of it. This piece is written to create awareness and offer food for thought for all stakeholders involved, from the developers of AI and end users to the general public. Responsible AI requires commitment from all parties, including society, to question, debate, and synthesise views on the use of AI, cohesively helping to build a system that is not only responsible and trustworthy but that benefits all.


Do banks need AI?


The 21st century bank needs AI (Artificial Intelligence) to survive. To foster competitiveness in the financial industry (competition increases the quality of service offered while lowering its cost), the UK government lowered entry barriers to the financial sector. This allowed technology-based companies offering financial products (fintechs) to enter the market. Banks recorded sluggish profits from then on, a clear indication of their inability to compete with these new peers.


The 21st century customer demands 21st century characteristics and qualities in a bank. Generation Z, a generation tuned in to technology twenty-four hours a day, seven days a week, calls for flexible, fast, and varied services at the tips of their fingers in seconds. A tall order for non-digital banks.


Banks also work with large volumes of data. Quick service turnaround calls for speed and accuracy in processing data as well as prompt and timely decisions. It is humanly impossible to meet this turnaround, as staff make mistakes and are absent from time to time due to leave and sickness.


On average, banks handle five million transactions per day and work with five million individual accounts and over three million customers across hundreds of product types (Ayadurai & Joneidy, 2021a). Banks are quickly realising that not only are customer numbers growing, but customers' needs are increasing too. As financial products are complex in nature and are becoming more difficult to understand, more time and effort is needed to ensure that each and every customer's queries and concerns are addressed and answered well without additional charges.


How can AI help banks?


AI can attend to customers' needs twenty-four hours a day, seven days a week, building the ideal 21st century bank desired by customers (Lui & Lamb, 2018; Fernández, 2019; Hakala, 2019; Rahim et al., 2018; Ayadurai & Joneidy, 2021a). AI offers quick, flexible, personalised, cost-effective and varied services for a wide range of customers, while ensuring better-quality services are offered in a structured, systematic, and timely manner from start to finish, raising customer satisfaction (Lui & Lamb, 2018; Fernández, 2019; Jagtiani, Vermilyea & Wall, 2018; Krovvidy, 2008; Gates, Perry & Zorn, 2002; Ayadurai & Joneidy, 2021a).

AI helps banks process large datasets, from customers' personal details and loan histories to collateral, at great speed and with accuracy (Lui & Lamb, 2018; Fernández, 2019; Sachan et al., 2020; Wall, 2018; Eletter, Yaseen & Elrefae, 2010), without compromising on important procedures such as 5Cs assessments (character, capital, collateral, capacity and economic conditions) in making the right loan decisions (Ayadurai & Joneidy, 2021a). AI also ensures that a customer's lifetime data is updated in real time, collected, processed, categorised, and stored in an orderly manner.
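To make the idea concrete, the minimal sketch below shows how a 5Cs assessment could be reduced to a weighted score. The weights, threshold and field names are entirely hypothetical, chosen for illustration only; real underwriting models are far richer.

```python
# Hypothetical sketch of a rule-based 5Cs screen. All weights,
# thresholds and field names are invented for illustration.

def five_cs_score(applicant: dict) -> float:
    """Combine the 5Cs (character, capital, collateral, capacity,
    conditions) into a single score between 0 and 1."""
    weights = {
        "character": 0.25,   # e.g. repayment history
        "capital": 0.20,     # applicant's own stake
        "collateral": 0.20,  # security offered
        "capacity": 0.25,    # income versus debt obligations
        "conditions": 0.10,  # wider economic conditions
    }
    return sum(weights[c] * applicant[c] for c in weights)

def decide(applicant: dict, threshold: float = 0.6) -> str:
    """Approve above the threshold; otherwise refer to a human."""
    return "approve" if five_cs_score(applicant) >= threshold else "refer"

applicant = {"character": 0.9, "capital": 0.7, "collateral": 0.8,
             "capacity": 0.6, "conditions": 0.5}
print(decide(applicant))  # score 0.725, above the 0.6 threshold
```

Even this toy version makes the later point about transparency: because the weights are explicit, a refused applicant can be told exactly which of the 5Cs pulled the score down.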


AI possesses many qualities of an ideal workforce. It is able to work non-stop without complaint, with less supervision and without getting distracted. It can patiently work through repetitive, boring and emotionally exhausting tasks, servicing larger numbers of customers around the clock while making no mistakes, maintaining precise and impeccable execution every time (Ayadurai & Joneidy, 2021a).


AI is extremely intelligent as it is constantly learning through the cloud. AI can work with structured and unstructured data, and with linear as well as nonlinear relationships, making sense of and solving real-world scenarios using advanced data modelling techniques (Ghodselahi & Amirmadhi, 2011; Šušteršič, Mramor & Zupan, 2009). As such, AI offers meaningful insights and accurate predictions to support sound overall decisions.


AI applies real-time crime prevention measures. The validity and reliability of data are assessed as soon as it is received. AI also scrutinises customers' behaviour closely to detect fraud, money laundering and other forms of cybercrime, and is used to fend off cyber-attacks (Yampolskiy & Spellchecker, 2016), making it an ideal market surveillance system for a bank.


Banks that have actively adopted AI show better profit margins while offering more affordable services to customers (Kaya, Schildbach & Schneider, 2019).



What are the challenges posed by AI?


AI's ability to provide precise predictions and insightful decisions depends largely on the quality, quantity and diversity of the dataset it is working with (Wall, 2018). As banks have a duty of care to provide ethical services and are bound by law (e.g., fair lending legislation), they need larger and more diverse datasets to work with in making right and just decisions.


If not, algorithms will form biased opinions and make unfair decisions, e.g., turning down loans, causing more harm than good to society. AI has linked financially vulnerable customers to mental health issues (Burkhardt et al., 2019), has been known to accept white applicants and reject black applicants in loan applications (Bostrom & Yudkowsky, 2014), and has been criticised for discriminating on grounds of gender, race, profession, marital status and creed (Fernández, 2019; Cheatham, Javanmardian & Samandari, 2019; Nassar et al., 2020; Byrne, Shahidi & Rex, 2017; Petrasic et al., 2017). The European Union's General Data Protection Regulation (GDPR) allows customers to demand an explanation and challenge the bank if their loans are refused by AI.
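One simple safeguard against such outcomes is to audit approval rates across demographic groups. The sketch below computes a disparate impact ratio on invented data; the "four-fifths" flagging threshold used here is a common rule of thumb in fairness auditing, not a legal standard, and group labels and figures are hypothetical.

```python
# Illustrative fairness audit: compare loan approval rates by group.
# All data below is invented for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Group A: 80% approved; group B: 50% approved.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
ratio = disparate_impact(decisions)
print("flag for review" if ratio < 0.8 else "ok")  # 0.625 -> flagged
```

Running such a check routinely, before decisions reach customers, is one practical way a bank can evidence the "right and just decisions" the law demands.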


Banks could also suffer major reputational risk if AI makes severe mistakes, such as the Flash Crash in 2010 caused by a trading algorithm (Kirilenko & Lo, 2013; Yampolskiy & Spellchecker, 2016) or the chatbot Tay tweeting prejudiced remarks (Lui & Lamb, 2018; Crosman, 2018). If AI makes mistakes, who should be held accountable for its actions? A whole team of developers, data scientists, OS providers, programmers, AI researchers and others comes together to build an AI and help it function. As such, it is difficult to pinpoint accurately who will be liable for its actions.


AI is not able to explain the steps it follows to make a decision. Transparency is important to detect whether mistakes have been made in data inputs, to root out data poisoning, to address bias, and to work out side-channel attacks and internal network manipulations, thus correcting and perfecting the system. A lack of transparency makes it difficult to prove the authenticity of a decision and thus to fully trust the system.


Developing, operating and maintaining an AI system can be costly (Hakala, 2019; Wong, 2015). As such, only a few large banks may remain, reducing healthy competition in the market (Fernández, 2019). To sustain AI systems, banks need staff with dual specialisms in finance and in computer science, cybersecurity, cryptography, machine learning, computer forensics, psychology, etc. (Ayadurai & Joneidy, 2021b). It would be a challenging task to find suitable candidates to fulfil this role.


Customers prefer humans to machines for large fund transfers, as it is difficult to trust machines in these circumstances (Lui & Lamb, 2018; Cocca, 2016; Ludden, 2015; Hakala, 2019). AI excels in IQ (Intelligence Quotient) but not so much in matters of EQ (Emotional Quotient). As such, it is unable to fully connect with and comprehend human emotions, and thus to impart empathy, kindness, and compassion to customers.


How do we move from here?


Global AI investments amount to more than €6.5 billion in Asia, more than €12 billion in the US and less than €3.5 billion in Europe (Fernández, 2019), showing that the UK has an opportunity to make London the twin financial and AI hub of Europe.


To realise this vision, a responsible AI needs to be built. A responsible AI is sound from all aspects, as it places humans at the centre of its design and operation. To bake a scrumptious and moist lemon cake, one needs good-quality ingredients, a well-tested recipe, and some expertise in baking. In the same way, an AI needs quality and diverse data inputs (ingredients) and the right algorithm (recipe), executed by a team of well-paid specialists in the field (bakers).


AI systems that explain their steps and processes explicitly will garner more hope and trust in the system, as each step and process can be traced back, cross-checked and verified against data manipulation, bias and errors. This gives further assurance that the final decision was made on fair grounds. An open-source system with humans present at every layer to carry out checks and balances is also a vital feature of a just system, as it ensures ethical and moral values are followed through.
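To illustrate what such traceability can look like in the simplest case, the sketch below uses a model transparent enough to list each feature's contribution to a decision, so that a reviewer (or a refused customer) can check it line by line. The feature names, weights and threshold are invented for illustration.

```python
# Illustrative sketch of a self-explaining decision: a weighted model
# whose per-feature contributions can be listed and audited.
# Feature names, weights and threshold are hypothetical.

def explain(weights, applicant, threshold):
    """Return the decision plus each feature's contribution to it."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    return decision, contributions

weights = {"income": 0.5, "repayment_history": 0.4, "existing_debt": -0.3}
applicant = {"income": 0.8, "repayment_history": 0.9, "existing_debt": 0.5}
decision, contributions = explain(weights, applicant, threshold=0.5)

print(decision)
# List contributions, largest influence first, for the audit trail.
for feature, value in sorted(contributions.items(),
                             key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.2f}")
```

Modern AI models are vastly more complex than this, which is precisely the challenge: the aim of explainable AI is to recover this kind of itemised, checkable account from systems that do not produce it naturally.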


Responsible AI is only possible through sufficient commitment from all stakeholders and society. Everyone needs to be educated on AI and its applications. This will help individuals think through the daily opportunities and challenges the system poses in their lives before discussing and debating a way forward with researchers and developers, cohesively building a system that is responsible to all. A system that brings benefits with "no harm".




Ayadurai, C., & Joneidy, S. (2021a). Artificial Intelligence and Bank Soundness: A Done Deal? - Part 1. IntechOpen.


Ayadurai, C., & Joneidy, S. (2021b). Between the Devil and the Deep Blue Sea - Part 2. IntechOpen.


Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge handbook of artificial intelligence, 1, 316-334.


Byrne, M. F., Shahidi, N., & Rex, D. K. (2017). Will computer-aided detection and diagnosis revolutionize colonoscopy?. Gastroenterology, 153(6), 1460-1464.

Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 1-9.


Cocca, T. (2016). Potential and Limitations of Virtual Advice in Wealth Management. Journal of Financial Transformation, 44(December), 45-57. Retrieved from https://ideas.repec.org/a/ris/jofitr/1581.html


Crosman, P. (2018). How Artificial Intelligence is reshaping jobs in banking. American Banker, 183(88), 1.


Eletter, S. F., Yaseen, S. G., & Elrefae, G. A. (2010). Neuro-based artificial intelligence model for loan decisions. American Journal of Economics and Business Administration, 2(1), 27.


Fernández, A. (2019). Artificial intelligence in financial services. Banco de Espana Article, 3, 19.


Ghodselahi, A., & Amirmadhi, A. (2011). Application of artificial intelligence techniques for credit risk evaluation. International Journal of Modeling and Optimization, 1(3), 243.


Hakala, K. (2019). Robo-advisors as a form of artificial intelligence in private customers’ investment advisory services.


Jagtiani, J., Vermilyea, T., & Wall, L. D. (2018). The roles of big data and machine learning in bank supervision. Forthcoming, Banking Perspectives.


Kaya, O., Schildbach, J., & Schneider, S. (2019). Artificial intelligence in banking. Deutsche Bank Research.


Kirilenko, A. A., & Lo, A. W. (2013). Moore's Law versus Murphy's Law: Algorithmic trading and its discontents. Journal of Economic Perspectives, 27(2), 51-72.

Ludden, C. (2015). The Rise of Robo-Advice: Changing the Concept of Wealth Management. Accenture, 12. https://doi.org/10.1007/978-3-319-54472-4_67

Lui, A., & Lamb, G. W. (2018). Artificial intelligence and augmented intelligence collaboration: regaining trust and confidence in the financial sector. Information & Communications Technology Law, 27(3), 267-283.


Nassar, M., Salah, K., ur Rehman, M. H., & Svetinovic, D. (2020). Blockchain for explainable and trustworthy artificial intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(1), e1340.

Petrasic, K., Saul, B., Greig, J., Bornfreund, M., & Lamberth, K. (2017). Algorithms and bias: What lenders need to know. White & Case.


Sachan, S., Yang, J. B., Xu, D. L., Benavides, D. E., & Li, Y. (2020). An explainable AI decision-support-system to automate loan underwriting. Expert Systems with Applications, 144, 113100.


Šušteršič, M., Mramor, D., & Zupan, J. (2009). Consumer credit scoring models with limited data. Expert Systems with Applications, 36(3), 4736-4744.


Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business, 100, 55-63.


Wong, M. M. (2015). Hungry Robo-Advisors Are Eyeing Wealth Management Assets: We Believe Wealth Management Moats Can Repel the Fiber-Clad Legion. 16. Retrieved from https://www.morningstar.com/content/dam/marketing/shared/pdfs/Research/equityreserach/20150409_Hungry_RoboAdvisors_Are_Eyeing_Wealth_Management_.pdf


Yampolskiy, R. V., & Spellchecker, M. S. (2016). Artificial intelligence safety and cybersecurity: A timeline of AI failures.





May 2021