Old rules for the new age of artificial intelligence (AI) in financial services 

  • The Financial Conduct Authority (FCA), Bank of England (the Bank) and Prudential Regulation Authority (PRA) are expected to publish their long-awaited joint feedback statement on AI and machine learning (ML) before the UK Government’s AI Safety Summit on 1-2 November.  
  • Companies should prepare to engage with the regulators on how their use of AI, and the outcomes the technology delivers, align with the regulators’ principles and overall objectives.  
  • Recent FCA speeches indicate a preference for an outcomes-based approach to AI regulation, using existing regulatory frameworks, including the Consumer Duty and the Senior Managers & Certification Regime (SM&CR). 
  • We expect the upcoming critical third-party (CTP) regime to focus more heavily on AI than previously envisioned and to be broad in scope. It will bring systemically important providers (e.g. some big tech firms) into the regulatory perimeter, creating new standards and regulatory stakeholders for non-financial services firms to engage with. 
  • AI and ML regulatory issues are cross-sector and cross-border, so UK financial regulators will collaborate with other domestic regulators and international counterparts. 

Roadmap for AI financial regulation 

The coming weeks will see UK policymakers set the direction of travel for regulating artificial intelligence (AI) and machine learning (ML) in financial services. 

In October 2022, the FCA, the Bank and the PRA launched exploratory work with a joint discussion paper, encouraging debate on the use and regulation of AI and ML in financial services. The paper outlined the merits of defining AI and discussed the drivers of risk across data, models and governance. It also explored the potential benefits and risks, and how policy can support further safe and responsible use.  

This joint paper was published before ChatGPT hit the mainstream in November 2022, and political interest in AI has since heightened with the growing use of large language models (LLMs). The UK has taken an openly pro-innovation stance and aims to be a leader in AI regulation, with the Prime Minister advocating the creation of an international AI Safety Institute. 

The UK Government is encouraging regulators to publish their stance on AI regulation to coincide with the UK’s AI Safety Summit on 1-2 November. As a result, we anticipate financial services regulators will establish a clearer position this month.  

The growth of AI will also influence the forthcoming CTP regime, introduced under the Financial Services and Markets Act 2023, which allows HM Treasury (HMT) to designate currently unregulated third-party providers as critical, bringing them under Bank, PRA and/or FCA oversight. We expect the CTP regime to take a ‘technology neutral’ approach to help future-proof it, but certain technologies, such as cloud and AI, will be explicitly called out.  

The CTP regime will bring many new services into the regulatory perimeter, creating new standards to follow and regulatory stakeholders for non-financial services firms to engage with. The largest technology firms, which are also awaiting the FCA’s call for input on their role as ‘gatekeepers’, are likely to be affected in different ways, reflecting the mix of their interests in AI, cloud and financial services markets. 

The regulators’ dilemma: rules or principles? 

Recent speeches from the FCA’s CEO and Chief Data, Information and Intelligence Officer suggest the FCA will take a principles-based approach to AI regulation, using existing regulatory frameworks rather than introducing new rules. We expect the upcoming feedback statement to align with the government’s March 2023 AI white paper and the principles in the Competition and Markets Authority’s (CMA’s) foundation models review, and to signal further exploratory work following suggestions made by the FCA’s Consumer Panel in September 2023. 

A principles-based, outcomes-focused approach would be consistent with the FCA’s recent regulatory philosophy. Its 2022-25 strategy marked a notable shift towards outcomes over outputs, and principles over rules, with a clear sense that the regulator plays a part in many cross-cutting issues rather than providing the solution alone. The FCA’s ‘shaping digital markets to achieve good outcomes’ priority (one of 13 in its strategy), of which AI is a central pillar, explicitly refers to the need for a joined-up regulatory approach that responds strategically and holistically to technological developments.   

AI and ML regulatory issues are complex, cross-sectoral and cross-border, requiring coordination among regulators domestically (e.g. with the CMA and other members of the Digital Regulatory Cooperation Forum) and internationally. With a new secondary objective focused on international competitiveness and growth, the FCA also now has a vital role in considering the impact on the future of UK financial services markets. Most jurisdictions’ approaches to AI regulation are too nascent to draw firm conclusions about divergence; however, the EU’s AI Act is more static and prescriptive than we expect the UK’s approach to be.  

We expect the FCA’s regulatory approach will initially be high-level: pro-innovation, proportionate, pragmatic, future-proof and adaptable. This aligns broadly with the government’s current position, though it may evolve over time. The FCA will want to demonstrate the Consumer Duty’s effectiveness in tackling potential harms rather than create new rules. However, further clarity from the regulator would help translate the Consumer Duty’s outcomes – such as ‘consumers receiving fair prices and quality’ – to the use of AI. 

How should companies respond?

We understand the FCA is diverting resources to focus on AI as it moves up the organisation’s list of priorities. We expect companies to feel greater scrutiny as a result. The financial regulators have already shown an interest in AI transparency and explainability, so this may become a particular area of focus. 

When structuring engagement with the financial regulators, regulated businesses and their suppliers should be prepared to answer the following questions: 

1. How does their use of AI align with the FCA's principles and overarching outcomes? 

2. What benefits are they experiencing from using AI – and how do these benefits extend beyond the purely commercial (i.e. to customers)?  

3. What are the oversight mechanisms for monitoring AI systems? What new processes have they developed? 

4. Can they easily explain how their use of AI works – and how confident are they that it’s not introducing bias? 

In addition to the attention of the financial regulators, companies must also consider the political approach to AI. Rishi Sunak is personally championing AI, making it one of his top priorities. This presents an opportunity for businesses to engage at the highest level of government ahead of next year’s election. Labour is also focused on technology and AI, but from the perspective of how it can be harnessed to improve people’s lives, especially through the delivery of public services. 


This blog was written by Adam Hendry, a Manager at Flint, and Jaz Sansoye, a Director. Adam advises clients on regulatory, policy and economic issues, with a focus on financial services and technology. Jaz was the FCA’s Head of Strategy, where he led the regulator’s three-year strategy, business planning and annual report.
