AWS, Google and Microsoft are in an AI arms race. Banks are watching.

The three big cloud computing vendors — Amazon Web Services, Google and Microsoft — have marshaled much of their forces around generative AI. Microsoft has invested $13 billion in OpenAI, creator of the massively popular ChatGPT generative AI chatbot. Last month, AWS announced a $100 million investment in a generative AI innovation center. Google has invested an estimated $300 million in AI startups. All three offer a slew of proprietary technologies for developers, data scientists and lay people to create and use generative AI and large language models.

At the AWS Summit this week in New York, for instance, speakers talked of nothing else.

“Generative AI has captured our imaginations for its ability to create images and videos, write stories and even generate code,” said Swami Sivasubramanian, AWS’s vice president of database, analytics and machine learning. “I believe it’ll transform every application, industry and business.” Though it’s been around for years, it’s reached a tipping point, he said.

Most banks work with these three vendors, yet unsurprisingly lag behind on the generative AI curve, due to the risks of errors and hallucinations in this advanced form of AI. They’re in test-and-learn mode, trying out different use cases, like improving chatbots and summarizing documents. Meanwhile, the generative AI craze seems to be spurring more interest in more traditional forms of AI, such as the use of machine learning in anti-money-laundering work.

Bankers are certainly asking their cloud vendors about generative AI.

“In almost every conversation that I’ve had over the last six months with a leader in any financial services organization, generative AI has come up as a topic,” said John Kain, head of financial services market development at AWS, in an interview. “Because in the financial services industry, our customers see how transformative this could be and none of them want to be left behind.”

Synchrony Financial and SouthState Bank are letting employees experiment with enterprise versions of Microsoft/OpenAI’s ChatGPT. 

“It’s game changing,” said Chris Nichols, director of capital markets at SouthState Bank in Winter Haven, Florida. “It’s worth all the hype.” His staff is using it to summarize email threads and find information. Synchrony has held internal hackathons to come up with the best uses for the technology. 

But most banks are proceeding cautiously. 

“Like most new technologies, you have to limit it for folks who may not fully understand the power and could do something unintentional,” said Carol Juel, chief technology officer and chief operating officer at Synchrony, in a recent interview. “So as a good steward and as a company, you have to protect against that.” 

Banks are right to take a slow, cautious approach to generative AI while technology vendors are betting their future on it, according to Sumeet Chabria, CEO of ThoughtLinks. 

“The current pace of AI investment in cloud and other technologies surpasses the ability of banks to adopt it responsibly,” Chabria said.  

On the other hand, banks may face increasing pressure from consumers, who will become more familiar with the technology as more products come bundled with generative AI, he said. Banks and technology vendors need to come together to close that gap before it is too late.

“This could mean technology vendors slow down a bit to fully comprehend the responsible banking concerns, including on cybersecurity,” Chabria said. “Banks on the other hand need to be willing to partner on low-risk, non-customer-facing use cases to help progress the technology and ensure the broader teams are trained on its potential and risks. There are use cases even today where generative AI may help mitigate risk in banking as an additional line of defense, like predicting the next big technology incident. Even a 1% probability of getting this right is a big deal.”

Where generative AI makes sense in financial services

In banking, traditional forms of AI, like machine learning and natural language processing, are used in many places: detecting fraud, monitoring cyber threats, chatting with customers, onboarding new customers, assessing potential borrowers and personalizing offers, to name a few.  

A large language model like GPT-4 or Titan brings great scale: it can analyze vast quantities of data and documents, and generative AI can produce text and code based on them.

“What I think everyone’s realized is the power of a large language model to do many of those tasks,” Kain said. All AWS customers right now are finding out which use cases are best suited to generative AI and which work better with traditional AI, he said.

PennyMac and Black Knight, for instance, use traditional AI to extract data from mortgage documents. Customers are looking for situations where a large language model would provide added benefit, he said.

JPMorgan Chase has been testing the use of generative AI for customer recommendations. Washington Federal and JPMorgan Chase have been exploring the use of generative AI for analyzing call center transcripts to figure out how to provide better prompts for customer service reps.

Document classification is another strong use case for generative AI, Kain said. Though companies can do this with traditional AI today, “you tend to have to give it a little bit more training material, a little bit more prompting to actually do that classification,” Kain said. 
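To make that distinction concrete, here is a minimal sketch in Python (not any vendor's actual stack). It contrasts a traditional supervised classifier, which needs labeled training documents, with a zero-shot prompt to a large language model; the document texts, category labels and the call_llm helper are illustrative placeholders.

```python
# Minimal sketch contrasting the two approaches Kain describes.
# The traditional route needs labeled training examples; the LLM route
# classifies from a prompt alone. All document text, labels and the
# call_llm helper below are illustrative placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Traditional AI: supervised classifier trained on labeled documents ---
train_docs = ["Borrower income verification letter ...", "Property appraisal report ..."]
train_labels = ["income_verification", "appraisal"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_docs, train_labels)                 # requires curated training material
print(clf.predict(["Attached is the appraisal for 123 Main St."]))

# --- Generative AI: zero-shot classification via a prompt ---
LABELS = ["income_verification", "appraisal", "title_document", "other"]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted LLM call; wire to an approved model endpoint."""
    raise NotImplementedError

def classify_with_llm(document: str) -> str:
    prompt = (
        "Classify the following mortgage document into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n\nDocument:\n{document}\n\nCategory:"
    )
    return call_llm(prompt).strip()

# print(classify_with_llm("Attached is the appraisal for 123 Main St."))
```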

Bill Borden, corporate vice president, financial services industry, at Microsoft, sees three top use cases for generative AI in banks. 

The first is content creation — generating proposals, reports and presentations, for instance, and summarizing internal meetings and customer conversations. HSBC India is using the OpenAI GPT-3 davinci model to summarize regulatory briefs published by the Indian government; a sketch of this summarization pattern appears after the three use cases below.

The second is semantic search — using natural language and context to make search smarter and faster, with models that are continuously trained.

The third is code generation. 

“With copilot capabilities for generating sophisticated code, developers will spend less time writing lines of code and more time designing new statistical models and mathematical tools for actuarial challenges,” he said.
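As a rough illustration of the first of those use cases, here is a minimal summarization sketch. HSBC India's reported setup uses the GPT-3 davinci model; the model name and chat-completions endpoint below are assumptions, chosen so the sketch runs on the current OpenAI Python client, and the prompt and any enterprise controls are likewise illustrative.

```python
# Minimal summarization sketch, assuming the OpenAI Python client
# (pip install openai). The model name is an assumption; the davinci
# model cited in the article used an older completions interface.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_brief(brief_text: str) -> str:
    """Return a short, plain-language summary of a regulatory brief."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any available chat model works here
        messages=[
            {"role": "system",
             "content": "Summarize regulatory briefs in plain language, "
                        "in five bullet points or fewer."},
            {"role": "user", "content": brief_text},
        ],
        temperature=0.2,  # keep summaries conservative and repeatable
    )
    return response.choices[0].message.content

# Example (hypothetical text):
# print(summarize_brief("The Reserve Bank of India hereby directs ..."))
```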

Part of the appeal of generative AI to financial services clients is the idea that it could help reduce operating costs and change customer interactions, according to Yolande Piazza, vice president of financial services at Google.

“Many controls are still manual today,” said Piazza, who was formerly CEO of Citi Fintech, in an interview. “How do you start to automate that so you can be much more predictive in your control functions and how you report out to the regulators? So I think people are able to clearly visualize the opportunity that will bring to the businesses.” 

Google offers an enterprise version of Bard, its ChatGPT-like chatbot, to banks. It can be focused entirely on a bank's internal documents and data, and can also be set up to ingest certain external documents such as SEC filings.

“[Customers] control the data sets, they control the models that they build,” Piazza said. “So there’s no risk of IP leakage. There’s no risk of them pulling in data sources that would give them competitors’ answers. If you just go out and train this on the world of the internet, you’re potentially bringing in competitors’ information.”

Bard lists every source in its results, providing auditability.

“If you want to go in and start reading in more detail, to validate the information, you have the ability to do so,” Piazza said. “You can control if this is just internal data, whether it’s internal plus external data. And that’s how a company will control its own destiny as far as accuracy, security and the distribution of models.”
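The setup Piazza describes can be sketched generically: retrieve only from an approved set of internal documents, answer from those, and return the source IDs so a reviewer can check the answer. The sketch below is not Google's enterprise Bard product; the embed and generate parameters stand in for whichever hosted embedding and generation services a bank has approved.

```python
# Generic retrieval-grounded sketch: answer questions only from a controlled
# set of internal documents and surface the sources used, so staff can
# validate the output. Not Google's enterprise Bard API; `embed` and
# `generate` are hypothetical stand-ins for approved hosted services.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_from_internal_docs(question, documents, embed, generate, top_k=3):
    """documents: list of (doc_id, text) drawn only from approved sources."""
    q_vec = np.asarray(embed(question))
    scored = sorted(
        documents,
        key=lambda d: cosine(q_vec, np.asarray(embed(d[1]))),
        reverse=True,
    )[:top_k]

    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in scored)
    prompt = (
        "Answer using ONLY the sources below. Cite the source IDs you used.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = generate(prompt)
    sources = [doc_id for doc_id, _ in scored]   # surfaced for auditability
    return answer, sources
```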

No one in the financial services industry is going to adopt such technology blindly, Piazza noted. 

“What it will do initially is reduce the time to gather that information, that validation step and process,” she said. “Humans will stay in place for a long, long time. What we focused on is the research that nobody likes to do. Then a human can go through and say, what about this summary am I comfortable with? Where do I want to dig deeper?”

Generative AI is kick-starting interest in traditional AI

Piazza said the hype around generative AI is driving more interest among financial services clients in traditional forms of AI like machine learning. 

“Generative AI has forced people to go back and really look at the unlocked capability with AI and machine learning fundamentally,” said Piazza. “What you’ll find is they are all on a journey of AI, whether that’s models that they’ve built internally, whether that’s how they’re thinking about machine learning.” 

A case in point is HSBC, which recently co-developed AI-based anti-money-laundering software with Google.

The London bank operates in more than 60 countries and has more than 40 million customers. 

“We want to make sure that our products and services are not exploited by individuals who would use them for crime,” said Jennifer Calvery, group head of financial crime risk and compliance at HSBC. The bank reviews more than 1.2 billion transactions every month to look for signs of financial crime. Last year it filed more than 73,000 suspicious activity reports. 

Like other banks, HSBC files a report every time there's reason to think someone has used its products and services to engage in a crime such as terrorist financing, money laundering, tax evasion, fraud, bribery or corruption.

“Our job is to prevent them from doing that,” Calvery said. “And if they do get into our bank, to find them as fast as we can and to get them back out. So it’s a scale problem for us.”

She wanted to be able to use all the data the bank has at its disposal to understand the probability that any given customer or counterparty would use the bank to commit financial crime, in real time. 

“That was the dream,” Calvery said. “We had absolutely zero capability to do this. We were using the same rules-based systems that everyone in industry was using at the time. They are not real time, not capable of using all the data at our disposal. There’s thousands of people whose only job it is to close out noise because they generate so many false positives.”

It’s also difficult to identify financial crime by looking at individual bank transactions, said Calvery, who is a former prosecutor. 

“I did many investigations,” she said. “I never once tried to find a criminal by looking at transactions one at a time. That’s just not how you find criminals. So we wanted to invent something new.”

Google Cloud’s AML AI provides a machine learning-generated customer risk score based on bank data including transaction patterns, network behavior and know-your-customer data. This helps the bank identify its highest-risk customers. Other providers of machine learning-based anti-money-laundering software include IBM, Quantexa, ThetaRay and ComplyAdvantage.
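As a rough illustration of that general approach (this is not Google Cloud's AML AI), the sketch below trains a gradient-boosting model on a handful of hypothetical customer-level features covering transaction patterns, a network-behavior proxy and a KYC rating, and uses the predicted probability as the customer risk score.

```python
# Illustrative sketch only, not Google Cloud's AML AI. It shows the general
# shape of the approach: engineer customer-level features from transactions,
# network behavior and KYC data, then train a model whose predicted
# probability serves as a risk score for prioritizing investigations.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical customer-level feature table (column names are assumptions).
features = pd.DataFrame({
    "monthly_txn_count":   [12, 480, 35, 9],
    "cash_intensity":      [0.05, 0.72, 0.10, 0.01],   # share of cash transactions
    "cross_border_share":  [0.00, 0.55, 0.20, 0.02],
    "counterparty_degree": [8, 310, 40, 5],            # network-behavior proxy
    "kyc_risk_rating":     [1, 3, 2, 1],               # from onboarding KYC
})
labels = [0, 1, 0, 0]   # 1 = previously confirmed suspicious activity

model = GradientBoostingClassifier().fit(features, labels)

# The predicted probability becomes the customer risk score.
scores = model.predict_proba(features)[:, 1]
high_risk = features.assign(risk_score=scores).sort_values("risk_score", ascending=False)
print(high_risk.head())
```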

HSBC has been using the new anti-money-laundering software for a year in the U.K., Singapore, Mexico, the Channel Islands and Hong Kong.

“We’re finding more financial crime faster with far less noise and far less calls out to customers, asking them questions for what ultimately turned out to be a false positive,” Calvery said.

Some may wonder if the hype around generative AI is a passing fad. Kain does not.

“You’ve already seen the quality of the output, from just a richness of the human interaction experience, that these language models can bring,” he said. “And that’s very tangible. There are definitely productivity benefits that you can see within that.”