FCA, Federal Reserve Share Concerns About AI in Financial Services

Cooley alert
August 3, 2023

Senior regulators at the Financial Conduct Authority (FCA) in the UK and at the Board of Governors of the Federal Reserve System (Federal Reserve Board) in the US have publicly shared remarks regarding their concerns about artificial intelligence (AI). Following a 17 July 2023 joint statement by US Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra and European Commission (EC) Commissioner for Justice and Consumer Protection Didier Reynders, this continued messaging demonstrates that regulators and enforcement authorities remain committed to a collaborative approach to addressing consumer protection risks that are shared globally.

Key issues highlighted by the FCA

In a speech in London, Nikhil Rathi, chief executive of the FCA, expressed concerns about the use of AI and the role Big Tech companies play in gatekeeping financial data. In particular, Rathi’s comments focused on the operational risks that Big Tech may pose for payments, retail services and financial infrastructure, as well as risks related to consumer behavioral biases. Rathi also acknowledged the potential benefits that partnerships with Big Tech could offer, noting specifically the possibility of increased competition and innovation. However, Rathi expressed caution about the role of ‘critical third parties’ and their access to comprehensive data sets, such as browsing data, biometrics and social media. For example, Rathi noted that two-thirds of UK firms use the same few cloud service providers and stated that the FCA, alongside the Bank of England and the Prudential Regulation Authority, plans to regulate such critical third parties to ensure their security and resilience.

Risks created by Big Tech offering financial services

Rathi also discussed the benefits and trade-offs that he believes the use of AI introduces into the markets – including risks affecting the integrity, pricing, transparency and fairness of the financial markets. Rathi addressed the rapid increase in intraday volatility in trading across markets since the 2008 financial crisis, as well as the risks posed by fraud, cyberattacks and identity fraud, which are increasing in sophistication and effectiveness across the globe. While certain AI solutions may help mitigate these risks, some fear that AI could instead lead to more significant problems. In addition, Rathi noted that the explainability of AI models is a high-priority item for UK regulators, with a particular emphasis on potential problems involving data bias. Nevertheless, Rathi also highlighted other benefits of using AI in financial services – including improving financial models, providing more accurate financial information and advice, offering personalized products and services to customers, and tackling fraud and money laundering more effectively.

New regulations and resources for financial services

In his closing remarks, Rathi called for a globally coordinated approach to regulating AI that fosters innovation while maintaining trust and confidence in financial services. Rathi highlighted that the FCA established a “Digital Sandbox” earlier this summer that allows fintech companies and other innovators to test new technologies on the platform using synthetic data sets, including transaction and social media data. Due in part to this outcomes-based approach, frameworks have emerged to address issues that accompany AI technology. The Consumer Duty, effective 31 July 2023, stipulates that firms must design products and services that aim to secure good consumer outcomes throughout all stages of the supply chain. The Senior Managers and Certification Regime (SM&CR) likewise makes clear that senior managers are ultimately accountable for the activities of a firm, including activities related to AI. There also have been recent suggestions from the UK Parliament that there should be a bespoke SM&CR-type program for the most senior individuals managing AI systems – particularly individuals who may not typically have performed roles subject to regulatory scrutiny but who will now be increasingly central to firms’ decision-making and the safety of markets.

US regulatory issues related to AI and mortgage underwriting

Across the pond, speaking at the National Fair Housing Alliance’s national conference in Washington, DC, Federal Reserve Board Vice Chair for Supervision Michael Barr expressed similar concerns about the use of AI in mortgage origination and underwriting. Barr noted that advancements in mortgage origination and underwriting technology could lead to discriminatory practices and violations of laws such as the Fair Housing Act and the Equal Credit Opportunity Act. Barr indicated concern that AI programs could ‘perpetuate or even amplify’ certain biases by drawing on flawed or incomplete data to reach inaccurate conclusions about prospective borrowers based on their protected characteristics. Although Barr mentioned the potential for new AI technologies to expand credit opportunities to underrepresented groups, he stated that inadequate technology could create reverse redlining risk, in which underrepresented borrowers are steered toward more expensive or lower-quality financial products. Barr’s comments align with similar concerns expressed by other regulators in DC, including Chopra, who has been critical of the risks in AI-based underwriting and customer engagement.

What’s next?

These comments by top regulators in the UK and US demonstrate a continued commitment to international collaboration among agencies to address perceived consumer protection risks, including those in the rapidly evolving regulatory landscape for financial services companies that use AI. That focus raises the bar – and the risk – for entities using such technology. As regulators continue to focus on these sorts of legal and compliance risks, it will be important for companies using AI to do the same, including by understanding the potential for bias and other fair lending risks within their models and by considering resources like digital sandboxes to test and train their models.