AIFS0005
Written evidence submitted by Finexos
Finexos – AI-based credit decision software for firms offering credit.
Our reason for submitting evidence: Finexos started with the ambition to make access to credit fairer for consumers disadvantaged by traditional credit score models, which allow bias to inform decisions. Over time, our approach has been refined to show that firms offering credit can do so safely, with a reduced probability of default, without using Personally Identifiable Information. We have recently developed our Sustainable Affordability Assessment service, which aims to bridge the gap between the credit score and the point at which a consumer or SME falls into the vulnerable circumstances category. We are testing this in market.
AI in Financial Services
How is AI currently used in different sectors of financial services and how is this likely to change over the next ten years?
Adoption of AI tends to occur where there is a need for either speed or volume of data processing. Often, it is the basic need for machine learning models that drives adoption and then the onward development of AI. A significant amount of ‘noise’ in financial services regarding AI is really referring to ML, except in those use cases where AI chatbots have been trained on all the products, policies and services of a bank, for example, to act as the first point of contact for internal queries or external customer service. Agentic AI is likely to be one area of significant growth in the next ten years, as it is relatively straightforward to train and can learn rapidly from the nature of queries received from customers.
Fintech firms often adopt new technology earlier, as there are fewer internal barriers to adoption and because speed to market with their services is crucial. A move towards deep learning solutions that proactively identify anomalous or suspicious events is already under way in financial services, supporting fraud detection in particular. The sophistication of what can be achieved in payments, and in fraud more broadly, is likely to grow rapidly.
To what extent can AI improve productivity in financial services?
In the area of financial services my company serves, we have built ML models to extract greater accuracy from credit bureau data. We are using AI to take that to the next level, understanding how consumers and SMEs may behave based on their past behaviour. This means that we can create a faster path to ‘yes’ in loan decisioning than would otherwise have been possible.
Generative AI has the potential to transform consumer engagement by financial services brands, developing more appealing content tailored to specific audiences at a speed a human content creator could not match. Taking that to the next stage, if a bank or lender observed that it had a lot of negative PR, it could use those inputs to train GenAI to create better products for that sector. This seems unlikely at present, but it is a space to watch.
Explainability and transparency are the hurdles for adoption. Simon Taylor, in his newsletter Fintech Brainfood, refers to this as the Compliance and Explainability Paradox: “the most dangerous AI isn’t the black box that works mysteriously well. It’s the glass box that works mysteriously poorly.”
The fear that AI cannot be effectively challenged, unlike, for example, a human’s decision on a loan where you can ask the person why, means that an AI model with clear explainability is often held to a far higher level of scrutiny, or even distrust. This creates a reason to say no to the use of AI rather than a safety-first yes.
The nature of work will change as AI and GenAI are adopted in the coming years. There will be a change in the roles companies require, but creativity is an inherently human characteristic, so whilst there may be a contraction in roles in the near term, and a change in those roles, new possibilities will emerge.
What are the benefits and risks to consumers arising from AI, particularly for vulnerable consumers?
The automation of document processing is a good example of both a benefit and a risk, specifically in an industry such as insurance. The complexity of the documentation, the time-consuming nature of checking for accuracy and the sheer volume of data required all seem a perfect fit for an AI model. However, whilst it will improve workflow and time to serve, there is a significant risk that claims are rejected by an AI model, or that premiums are increased to an uneconomical level based on the risks the model predicts.
In the consumer credit market, personalised credit offerings, for example hyper-personalisation based on an individual’s transaction history and behavioural data, can benefit the consumer but can also lead to over-extension of credit and less favourable pricing. A lender could dynamically adjust the price it is prepared to lend at using real-time analysis of that individual’s data, a version of the surge pricing we see from Uber or Ticketmaster. Neither of those companies has won the hearts of consumers with that model, and it would be very concerning to see it come into the financial services industry. It would potentially make credit even more expensive for those at the lower end of the credit score profile or those with thin files.
February 2025