AI, black boxes and bias: The impact of the White House's executive order
The White House’s executive order on artificial intelligence has created a lot of buzz in Washington this week. For banks that use AI, the order’s impact could come quickly, experts say.
The order calls for government agencies to tighten the reins on the use of AI throughout the economy. It requires companies that create widely used models to conduct safety tests and share the results with the government, and it calls for standards and protections against AI risks, including intellectual property theft, cybersecurity threats, data privacy infringement, and bias and discrimination.
“This is viewed as a pretty significant action by the [Biden] administration,” said Stephen Lilley, a partner at the law firm Mayer Brown in Washington.
The order is in line with ideas that have been discussed and contemplated over the past year, including the White House's blueprint for an AI bill of rights.
“But I think there’s been a lot of parlor games of people trying to figure out, is the administration going to take a significant step or some more moderate steps to start?” Lilley said. “I definitely think that this action and what’s reflected in the [executive order] is on the more consequential end of the spectrum of what they could have taken on. They’ve gone well beyond just doing studies and guidance. There will be real, practical effects from the [executive order] in fairly short order.”
Most of the uses of AI in banks are already regulated, noted Dan Felz, partner at the law firm Alston & Bird in Atlanta.
“If we have a risk-scoring model that’s used to determine whether somebody gets a loan or not, that’s a process that’s already regulated and the fact that we bring in a robot to help doesn’t make the process less regulated,” Felz said. “Instead, it requires us to know, what is the robot doing? And also to look around our organization and say, how many more important decisions are being made by robots?”
Some banks are going through an inventory process to understand the places in the organization they are already using AI, traditional or advanced, Felz said.
“That’s been happening for a while now, and this executive order just supercharges it,” he said.
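The inventory process Felz describes can start as a simple register of AI use cases with owners and risk tiers. A minimal sketch in Python; the fields, tiers and example entries here are illustrative assumptions, not requirements from the order:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical bank AI inventory (illustrative fields)."""
    name: str
    business_unit: str
    model_type: str       # e.g. "traditional ML" or "generative"
    decision_impact: str  # "high" if it directly affects customer outcomes

def high_risk(inventory):
    """Surface use cases that touch customer-facing decisions for review."""
    return [u.name for u in inventory if u.decision_impact == "high"]

inventory = [
    AIUseCase("credit risk scoring", "lending", "traditional ML", "high"),
    AIUseCase("fraud detection", "payments", "traditional ML", "high"),
    AIUseCase("marketing copy drafts", "marketing", "generative", "low"),
]

print(high_risk(inventory))  # the customer-impacting models surface first
```

Even a register this simple answers Felz's question of "how many more important decisions are being made by robots" and gives governance teams a starting point for prioritization.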
The effects of the executive order will certainly be felt in the areas of AI model explainability, copyright protection and potential for bias.
Black boxes, intellectual property and bias
The executive order calls for explainability and transparency in AI models. Banks can’t use “black box” AI algorithms to come up with reasons for denial of loans, for instance.
If banks can’t explain to their auditors how their AI models make decisions, “that’s going to be a very uncomfortable moment,” said Steve Rubinow, a faculty member in the Jarvis College of Computing and Digital Media at DePaul University and a former chief information officer of the New York Stock Exchange.
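One way to avoid the "black box" moment Rubinow describes is to score with an interpretable model whose per-feature contributions can be read off directly and turned into adverse-action reasons. A minimal sketch, assuming a simple linear (logistic-regression-style) score with hypothetical features and weights:

```python
# Hypothetical weights from a fitted linear credit-scoring model
# (positive contribution pushes toward approval).
WEIGHTS = {
    "credit_history_years": 0.08,
    "debt_to_income": -2.5,
    "recent_delinquencies": -0.9,
}
INTERCEPT = -0.5

def score_and_explain(applicant, top_n=2):
    """Return the raw score and the features pulling it down the most,
    which can serve as adverse-action reason codes."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return score, reasons

applicant = {"credit_history_years": 2, "debt_to_income": 0.6,
             "recent_delinquencies": 1}
score, reasons = score_and_explain(applicant)
print(score, reasons)  # negative score; DTI and delinquencies drive denial
```

The point of the sketch is that every denial traces to named inputs the bank can show an auditor, which an opaque deep model cannot do without extra explainability tooling.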
The executive order also instructs several government agencies to start addressing AI-related risks to intellectual property. This is something banks and their vendors need to be concerned about, pointed out Lewis Liu, founder and CEO of Eigen Technologies, which works with financial institutions including Goldman Sachs, Citigroup and Bank of America.
Major generative AI providers have already been accused of training their models with the copyrighted work of artists and writers without permission.
“Intellectual property is one of the most important things that an AI vendor can’t just willy-nilly take,” Liu said. “I think that is one of the big risks and big concerns that people like OpenAI have been trying to sweep under the rug with talk about the Terminator.”
Banks have to consider if they are using anything that “someone’s going to come to me later and say, ‘Hey, you know what? You owe me some money for that because you used stuff that you didn’t compensate me for’ and then you think, if I’d known that, I wouldn’t have done it in the first place,” Rubinow said.
The executive order’s requirements for models to be unbiased are critical, some say.
“I think the most significant provision for the financial sector in the [executive order] is ensuring that any algorithms and other AI systems that they’re using aren’t leading to biased results in loan applications,” said Alan Hayes, counsel at the law firm Akin in Washington. “I suspect that one thing this [executive order] will do is encourage or direct entities like the Department of the Treasury and the Consumer Financial Protection Bureau to think about how and where they can use their existing authorities to further regulate or manage bias and ultimately eliminate negative bias in the banking and financial sector.”
CFPB Director Rohit Chopra has repeatedly said banks need to comply with existing fair-credit rules when they use AI in lending decisions. The executive order encourages the CFPB to issue rules about how AI can be used for housing and credit decisions, which fits with work the agency has been doing. It recently put out a circular on the use of AI in credit decisions.
AI is famous for bias and discrimination, said Ala Shaabana, co-founder of the Opentensor Foundation, an organization committed to developing AI technologies that are open and accessible to the public. “And that is because as humans, we are biased. That’s just how we are. It doesn’t mean that somebody’s inherently bad or good. It just means that human bias will always play into AI because we are naturally biased creatures.”
Because open-source AI invites people from different walks of life to analyze the code and try to remove bias, “it means that we’re going to more likely than not create a more balanced AI that will most likely tackle most of the issues that are raised in this executive order,” Shaabana said.
“Bias is a big concern,” Liu said. “We have seen a lot of different banks and credit cards getting into trouble for gender and racial bias.”
Some AI models exacerbate racism, he said.
“If you type in, show me a CEO, it’s going to show you more white male CEOs than there actually are white male CEOs,” Liu said. “If you type in, show me a cleaner, it’s going to show you a certain ethnicity and a certain gender that is actually more than what is actually reality.”
Such patterns become especially problematic when they are used for customer interaction, he noted.
Here again, banks are used to being questioned about bias in their models.
“Banks are used to being scrutinized for gender and racial bias and credit decisions,” Liu said. “So they’re going to already have a process in place.”
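The scrutiny Liu describes can be made concrete with a simple disparate-impact check on approval outcomes: compare each group's approval rate to the most-favored group's rate. The 0.8 threshold below is the "four-fifths rule" heuristic from U.S. employment guidance, used here purely for illustration:

```python
def disparate_impact(outcomes, threshold=0.8):
    """outcomes: {group: (approved, total)}. Flags groups whose approval
    rate falls below `threshold` times the best group's rate."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical approval counts by group
outcomes = {"group_a": (80, 100), "group_b": (50, 100)}
print(disparate_impact(outcomes))  # group_b's ratio is 0.625, below 0.8
```

A check like this is only a screening metric, not a legal standard, but it is the kind of routine test a bank's model governance process can run on every credit model.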
What, if anything, should banks do now?
The White House’s executive order “doesn’t have anything that’s going to start applying tomorrow,” Felz said. Instead, it has a list of regulatory goals addressed to several federal agencies.
“Folks on Twitter were trying to pooh-pooh the order by saying things like, it’s a hundred pages of saying we’ll get started on this now,” Felz said.
Bank regulators will shape the executive order’s demands into guidance on how to handle risk management when using AI, and then regulations, Lilley said.
“Any financial institution that is using or planning to use AI will be well served to understand the steps that are coming, think through potential opportunities to be engaged in the processes that lead to any sort of rulemaking or issuance of guidance and then think internally about how to appropriately manage risk, both from the AI itself and associated legal risk,” Lilley said.
“This is an area where you may see some efforts to articulate standards or guidelines very quickly, and then agencies like the CFPB potentially moving to some sort of enforcement activity under its various authorities,” Lilley said. The same might be true for regulators like the Federal Reserve and the Office of the Comptroller of the Currency, he said.
“To the extent that sector risk-management agencies issue requirements or guidelines around security, I can see those filtering into examination requirements, supervisory approaches and potentially enforcement activity or guidance issued by the banking regulators,” Lilley said. “I can see that happening pretty quickly.”
Many banks use traditional AI throughout their organizations, including in cybersecurity, fraud detection and digital marketing. Many are experimenting and starting to deploy advanced AI such as large language models and generative AI.
The executive order does not intend to put a freeze on such projects, experts say.
“The administration is very clear,” Lilley said. “They want innovation, they want America to be the place where AI innovation happens. They don’t want to stop that innovation now. They just want it to be safe and secure innovation.”
Banks are expected to continue to innovate, develop new products and compete.
“You don’t want to sit and wait and see, be a late adopter because then you could fall too far behind,” Rubinow said. “And yet you don’t want to be a pioneer because that’s risky, especially in a regulated industry. So what I tell people is, put together a plan. Start small, start to learn what’s going on.”
Critical infrastructure providers like banks need to assess risks and be careful about what use cases they pursue. Many banks are providing generative AI only to employees, not to customers, to keep risks low.
“Banks need to make sure they understand across their entire enterprises what the different use cases they have are and have a governance approach that contemplates the risk associated with them and makes sound judgments about how to proceed with all of them,” Lilley said. “Maybe there are people using AI to develop software that the bank relies upon, or maybe there’s AI in the security tools that they use. The executive order is pretty interested in all of those. Banks need to think through how to calibrate those risks and how to organize themselves accordingly.”
The regulators could react to the executive order with enforcement actions, some say. Lilley worries about people still figuring out how to apply the principles of the executive order, “doing everything they think is right as needed, but then getting second-guessed and facing enforcement action after the fact because of some perceived issue that arose.”
Banks that have been using AI models for several years have a history of model risk management, they have model governance teams and they hold AI governance to very high standards, Liu said.
Such banks should be fine “as long as they are following their existing processes and being thoughtful,” he said.
“My general advice to clients in the financial sector and otherwise is, keep a very close eye on what’s happening in the global regulatory sphere in relation to artificial intelligence,” Hayes said.
“These requirements and proposed legislation are popping up all over the world. The global multinational financial institutions are going to have to thread a series of needles as time goes on and these regulations and proposed legislation begin to be finalized and enforced.”
A path forward
Financial institutions will study the executive order, “they’ll figure out where there’s ambiguity, and they’ll let the regulators know, ‘I don’t know what this means,'” Rubinow said. “‘I don’t know what I’m supposed to do with this. You’ve got to give me some more information.’ And some of the regulators will say: ‘You know what? We wish we knew. It’s too soon. We don’t know. We’ve got to figure this out and we’ve got to work with the other agencies because we wouldn’t want disparate oversight from agency to agency.’ The executive order says a lot of great things, but the follow-up will be important.”
Financial institutions want to know what the guardrails are, he said.
“The fact that the government is saying, ‘here’s our first attempt at guardrails with teeth,’ banks will welcome that because they’ll know what the boundaries are, and then whatever plans they had in place a month ago, they’ll look at those plans and they’ll say, ‘OK, it’s more risky to do this. We need to wait. It’s less risky to do this,'” Rubinow said.
He advises CIOs to start with use cases that are less risky, “where you’re not likely to trip over security and privacy and audit concerns and regulatory concerns, because those are very predictable,” he said. “You’ll figure it out as you go along. But start small and get your team to learn a lot, and then set up frameworks for inside the company how you’re going to manage the bigger projects. And then you’ll understand the guardrails as you go along, both the guardrails you impose on yourself and guardrails that are adopted by the industry and guardrails that are imposed by the government. And then you have a path forward.”