Wall Street is worried it can't keep up with AI-powered cybercriminals

Banks spend millions on cybersecurity every year. But execs think that's not enough to fend off thieves armed with generative AI.

  • Consulting firm Accenture surveyed bank executives on the impact of generative AI on cybersecurity.
  • Eighty percent of respondents said generative AI is empowering hackers faster than their banks can respond.
  • Accenture's security expert outlines why banks are hampered and what's at stake.

Generative AI could be one of the most promising tech advancements on Wall Street — but it may also turn out to be one of the most threatening.

Bank leaders feel like they can't protect against what cybercriminals can do with generative AI, according to fresh data from Accenture based on a survey of 600 bank cybersecurity executives. Eighty percent of respondents believe generative AI is empowering hackers faster than their banks can respond.

Cybersecurity is an important component of customer trust, Valerie Abend, Accenture's financial services cybersecurity lead, told Business Insider.

"The banks that really understand how important customer trust is, as their most valuable asset, and put cybersecurity right there as the core of the enabling that, those are going to be your winners," she added.

Banks have touted generative AI's ability to make their workers more productive and efficient. The tech is being used to do everything from helping software developers write code to enabling analysts to summarize thousands of documents into research reports. But it's not just Wall Street workers who are using the tech to their advantage.

Armed with generative AI, bad actors can ingest more data than before, and use the technology's ability to mimic humans to perform more sophisticated and realistic scams.

These attacks, which are targeting customers, bank employees, and their technology providers, can have far-reaching consequences. Once criminals gain access, they can make fraudulent purchases, wire money, and drain customer accounts of funds. They can also gain deeper access into company tech stacks, steal data, and download malicious software.

Bank leaders are not ignorant of what's at stake; JPMorgan has said it spends more than $600 million each year on cybersecurity, while Bank of America's cyber spend has surpassed $1 billion annually. Some key tech execs, like Goldman Sachs's Stephanie Cohen and Bridgewater's CTO Igor Tsyganskiy, have even left the finance industry altogether to tackle the cyber threat of AI more directly at tech companies.

But despite the hundreds of millions of dollars banks spend to shore up their defenses, many IT execs believe generative AI is advancing too quickly to keep up with. Only about a third of survey respondents (36%) said they have a solid grasp of the rapidly evolving cybersecurity landscape.

To be sure, banks are using AI to detect vulnerabilities, produce more robust threat-intelligence reports, and try to get ahead of attacks by analyzing more real-time data, Abend said. They've also been using AI to identify so-called toxic combinations, such as a single employee having access both to approve and to execute transactions, including wire requests. But those efforts, and the speed at which they can be deployed, are greatly hampered by the strict regulations banks must follow, Abend said.
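To make the "toxic combination" idea concrete, here is a minimal sketch of what such a separation-of-duties check might look like. The entitlement names, employee data, and the find_toxic_combinations helper are all illustrative assumptions, not Accenture's or any bank's actual tooling.

```python
# Hypothetical sketch of a "toxic combination" check: flag employees whose
# entitlements let them both approve and execute the same transaction type.
# The data model and all names here are illustrative, not any bank's real system.

TOXIC_PAIRS = {
    ("approve_wire", "execute_wire"),
    ("approve_payment", "execute_payment"),
}

# employee -> set of entitlements, as might be exported from an access system
ENTITLEMENTS = {
    "alice": {"approve_wire", "view_statements"},
    "bob": {"approve_wire", "execute_wire"},          # toxic: can self-approve wires
    "carol": {"execute_payment", "approve_payment"},  # toxic: payments
}

def find_toxic_combinations(entitlements):
    """Return (employee, pair) for every separation-of-duties violation."""
    violations = []
    for employee, grants in entitlements.items():
        for pair in TOXIC_PAIRS:
            if grants.issuperset(pair):
                violations.append((employee, pair))
    return violations

for employee, pair in find_toxic_combinations(ENTITLEMENTS):
    print(f"Separation-of-duties violation: {employee} holds {pair}")
```

In practice this kind of rule would run against entitlement exports from identity and access systems, but the core logic, checking grants against forbidden pairs, is this simple.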

Abend, who spent years working at regulators like the Office of the Comptroller of the Currency and the US Department of the Treasury, said that in order to use AI, banks need to demonstrate that they can maintain the controls and governance necessary to stay within their risk appetite. They have to be thoughtful about how they adopt AI, the large language models they use, how third parties provide these models, how they're protecting the data that feeds the models, and who has access to the output of those models.

Cybercriminals are taking advantage of newer models, such as DeepSeek, to write malicious code and identify weaknesses, like weak spots in the cloud security of a given IP address, Abend said. Established generative AI providers, such as OpenAI and Google, have blocked such activity, but newer models are still susceptible.

Third-party provider risk

There are fintechs and startups that are developing AI-powered tools to help banks thwart cyber attacks. Alloy, which works with M&T Bank and Navy Federal Credit Union, this week released a new product that detects attacks, flags suspicious spikes in application volume, and reduces manual reviews while an attack is underway.
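As a rough illustration of the kind of volume-spike detection such tools perform, consider the sketch below, which flags hourly application counts that jump well above a trailing baseline. The window, threshold, and data are hypothetical assumptions; this is not Alloy's actual product logic.

```python
# Generic sketch of volume-spike detection: flag when an hourly application
# count far exceeds a trailing baseline. Illustrative heuristic only.

from collections import deque
from statistics import mean, stdev

def spike_detector(window=24, threshold=3.0):
    """Return a checker that flags counts above baseline + threshold * stdev."""
    history = deque(maxlen=window)  # trailing window of recent hourly counts
    def check(count):
        is_spike = False
        if len(history) >= 3:  # need a few points before judging
            baseline, spread = mean(history), stdev(history)
            is_spike = count > baseline + threshold * max(spread, 1.0)
        history.append(count)
        return is_spike
    return check

check = spike_detector()
hourly_counts = [40, 38, 45, 41, 39, 44, 42, 310, 43]  # 310 = suspected attack
for hour, count in enumerate(hourly_counts):
    if check(count):
        print(f"Hour {hour}: {count} applications, flag for review")
```

Real systems layer on signals like device fingerprints and IP reputation, but a baseline-versus-anomaly comparison of this sort is the common starting point.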

But bank vendors and technology providers could give bad actors another opening. More than 70% of breaches at banks originate in their supply chain of vendors, Abend said. Cybercriminals use generative AI models to sift through data, figure out which companies partner with banks, and exploit those relationships. Tech providers aren't held to the same regulatory standards as banks, and banks' third-party oversight is often manual and based on limited, outdated data, Abend said.
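The "manual oversight with stale data" gap lends itself to a simple illustration: a sketch that flags vendors whose last security assessment is older than a policy window. The vendor names, fields, and one-year window below are assumptions for illustration only.

```python
# Illustrative sketch of the third-party oversight gap described above:
# flag vendors whose last security assessment exceeds a policy age limit.
# Vendor names, fields, and the one-year window are hypothetical.

from datetime import date, timedelta

MAX_ASSESSMENT_AGE = timedelta(days=365)

vendors = [
    {"name": "CorePaymentsCo", "last_assessed": date(2023, 1, 15), "critical": True},
    {"name": "StatementPrintCo", "last_assessed": date(2024, 11, 2), "critical": False},
]

def stale_vendors(vendors, today):
    """Yield (name, age_in_days, critical) for overdue assessments."""
    for v in vendors:
        age = today - v["last_assessed"]
        if age > MAX_ASSESSMENT_AGE:
            yield v["name"], age.days, v["critical"]

for name, days, critical in stale_vendors(vendors, today=date(2025, 6, 1)):
    severity = "CRITICAL" if critical else "routine"
    print(f"{name}: assessment {days} days old ({severity} vendor)")
```

Continuous monitoring tools aim to replace exactly this kind of periodic, spreadsheet-style review with live vendor risk data.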

"The reality is, you can outsource the capability as a bank, you don't outsource the risk," Abend said. "Customer trust is basically dependent on the bank to protect that customer's data and their financial information across the end-to-end supply chain."

Accenture research has found that maintaining customer trust helps banks achieve 1.5 times higher customer retention rates and 2.3 times faster revenue growth.

"This is not a back-office issue, banking executives really need to stop treating this like a compliance problem," she said.
