FCA CEO Warns Of AI Disruption To Financial Services Sector

Head of financial regulator warns of AI-driven online fraud and cyber risk, as UK seeks leadership on AI rules

The head of the Financial Conduct Authority (FCA) this week highlighted the serious threat posed by artificial intelligence (AI) to the financial services sector.

Speaking to industry leaders on Wednesday, the head of the UK financial regulator, Nikhil Rathi, warned of the rising threat posed by AI, centred on cyber risk and online fraud.

Rathi said that artificial intelligence could disrupt the financial services sector in “ways and at a scale not seen before”, and warned that the regulator would be forced to take action against AI-based fraud.

Financial Conduct Authority chief executive Nikhil Rathi. Image credit: FCA

Financial sector

According to Nikhil Rathi, there are risks of “cyber fraud, cyber-attacks and identity fraud increasing in scale and sophistication and effectiveness” as AI becomes more widespread.

His comments come after UK Prime Minister Rishi Sunak met with the CEOs of Anthropic, OpenAI and Google DeepMind in Downing Street in May this year.

The PM then said the UK is seeking to be the “geographical home” of coordinated international efforts to regulate artificial intelligence, and that the UK will host an international summit on the risks and regulation of AI later this year.

But in his speech, the FCA’s Rathi warned that AI technology will increase risks for financial firms in particular.

Senior managers at those firms will be “ultimately accountable for the activities of the firm”, including decisions taken by AI, he said.

“As AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” he said. “We will take a robust line on this – full support for beneficial innovation alongside proportionate protections.”

Rathi used the example of a recent “deepfake” video of the personal finance campaigner Martin Lewis supposedly selling speculative investments.

Lewis said the video was “terrifying” and called for regulators to force big technology companies to take action to stop similar scams.

A number of tech platforms have begun clearly labelling content generated by AI tools, as governments around the world prepare AI regulatory frameworks.

Since September 2020, Microsoft has offered a detection tool that can identify deepfake photos and videos, in an effort to combat disinformation.


Regulatory minefield

“AI is set to become a regulatory minefield for the FCA, so maintaining a clear line of communication with businesses about the challenges and opportunities ahead is critical to maintain high standards within the market,” said Suid Adeyanju, CEO of cybersecurity specialist RiverSafe.

“The tidal wave of AI-enabled cyber attacks and online scams adds a greater level of complexity, so it’s vital that financial services firms beef up their cyber credentials and capabilities to identify and neutralise these threats before they can get a foothold,” added Adeyanju.

Fraud detection

Meanwhile Chris Downie, CEO of fraud detection platform Pasabi, noted the need for the financial services sector to bolster its ability to detect AI-driven fraud and scams.

“It’s encouraging that the FCA is recognising the need for firms to rapidly ramp up fraud prevention measures to meet the challenge of AI-driven scams and cyber fraud,” said Downie. “Cyber criminals and fraudsters are wasting no time in hijacking the technology to create realistic online scams at scale, and right now they are winning.”

“To reverse this trend, a collaborative approach between the FCA, businesses and fraud software providers will be key to harnessing the latest fraud detection technologies, to start restoring confidence in the financial services market,” Downie concluded.