
Trust but Verify: Peeking Inside the “Black Box” of Machine Learning

If a complex data analysis tool can’t explain its decisions, how do we know it’s accurate — or fair?

October 06, 2022

by Dave Gilson

Lending is one of many fields where black-box machine learning models might find new insights | Illustration by Khyati Trehan.

Artificial intelligence can be a powerful tool for analyzing massive amounts of data, finding connections and correlations that humans can’t. However, unlike a person solving a math problem, many AI models can’t easily explain the steps they took to reach their final answers. They are what’s known in computer science as black boxes: You can see what goes in and what comes out; what happens in between is a mystery.

The black-box problem is baked into many machine learning models, explains Laura Blattner, an assistant professor of finance at Stanford GSB. “The power of the technology is its ability to reflect the complexity in the world,” she says. But not being able to fully understand the resulting intricacy of the model raises practical, legal, and ethical questions. “If these black boxes are being used to make a high-stakes decision in lending, insurance, healthcare, or the judicial system, we have to decide whether we feel comfortable not knowing exactly why the decision was being made.”

Lending is one of the many fields where black-box machine learning models might find new insights in complex data. Currently, credit scores and loan decisions are often based on a few dozen variables. An AI-driven model looking at more than 600 variables might weigh risk more accurately, which would benefit both cautious lenders and borrowers who might otherwise be rejected. In theory, AI could make consumer lending not only more precise but more fair.
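To make the contrast concrete, here is a minimal sketch in Python of what such a model might look like. The data is synthetic and the 600-variable setup is purely illustrative; this is not the underwriting model described in the research.

    # A minimal sketch, on synthetic data: a "black box" default-prediction
    # model trained on hundreds of variables instead of a dozen-variable
    # scorecard. Feature counts and data are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic loan applicants: 600 variables, as in the hypothetical above.
    X, y = make_classification(n_samples=5_000, n_features=600,
                               n_informative=60, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A gradient-boosted ensemble: often more accurate than a simple
    # scorecard, but hard to interpret by reading its internals.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))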

But U.S. lenders aren’t rushing to embrace these tools: If they can’t explain how they’re evaluating loan applicants, they could run afoul of fair lending rules. “They’re not going to be willing to take the risk, especially not in a more sensitive lending area like mortgages,” Blattner says. And if federal regulators conclude that these new tools aren’t trustworthy, “I think that spells a very different future for the use of AI in consumer lending.”

However, there are ways to take a black box’s output and work backward to figure out how it was generated. Blattner and Jann Spiess, an assistant professor of operations, information, and technology, recently collaborated with FinRegLab, a nonprofit research center, to assess several tools that try to explain credit underwriting models’ predictions, both for individual applicants and for minorities and other demographic groups.
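One widely used technique in this family is SHAP, which attributes a single prediction back to the model’s inputs. The sketch below shows the general idea on a synthetic model; it is not necessarily one of the specific tools the researchers assessed, and the unnamed features are stand-ins.

    # A hedged sketch of post-hoc explanation with SHAP. The model and data
    # are synthetic stand-ins, not the underwriting models from the study.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Work backward from the black box: estimate how much each variable
    # pushed one applicant's score above or below the average prediction.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X[:1])[0]

    # Print the five most influential variables for this applicant.
    top = sorted(enumerate(contributions), key=lambda t: -abs(t[1]))[:5]
    for idx, val in top:
        print(f"feature_{idx}: {val:+.3f}")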

“We came away cautiously optimistic,” she says. “If you pick the right tool, it is good at handling the complexity.” They also found that greater transparency doesn’t have to come at the expense of performance. “The more complex black-box models were more accurate in predicting default, but they were also more equal across demographic groups, which was surprising,” she says.
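What might “more equal across demographic groups” look like in practice? One simple check, sketched below on synthetic data, is to compare a model’s accuracy and approval rates group by group. The threshold and metrics here are illustrative assumptions, not the study’s methodology.

    # An illustrative check, on synthetic data, of accuracy and a simple
    # fairness gap across two groups. Threshold and metrics are assumptions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    scores = rng.uniform(size=1_000)        # model's predicted default risk
    defaults = rng.binomial(1, scores)      # synthetic repayment outcomes
    group = rng.integers(0, 2, size=1_000)  # synthetic group labels

    approved = scores < 0.5  # approve applicants scored as low-risk

    for g in (0, 1):
        mask = group == g
        auc = roc_auc_score(defaults[mask], scores[mask])
        print(f"group {g}: AUC={auc:.3f}, "
              f"approval rate={approved[mask].mean():.3f}")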

Even if this type of AI is okayed by regulators and adopted by lenders, Blattner says people need to be part of the equation. “You can’t just blindly pick a software tool off the shelf and hope it works,” she says. Users must continually test and evaluate their models to ensure they’re working properly.

And while black boxes can perform superhuman calculations, we still may want a loan officer, doctor, or judge to have the final say.

“In all of these cases,” Blattner says, “the human wants to know why the AI made a certain recommendation so that they can let that influence their decision-making one way or the other.” 
