Accounting

Are Bank Regulators Missing the Big Picture?

A new strategy to predict a bank’s risk of failure appears to outperform what regulators consider their gold standard for bank strength.

October 02, 2019

by Edmund L. Andrews

An unconventional approach may do a better job at predicting bank failures than the main tools regulators currently use, new research shows. | Reuters/Toru Hanai

Ten years after the mortgage meltdown kicked off the Great Recession, it’s easy to forget just how costly bank failures can be.

From 2008 to 2013, the Federal Deposit Insurance Corporation paid $73 billion to shut down or arrange shotgun marriages for insolvent banks. On top of that, the Federal Reserve and Treasury mobilized hundreds of billions of dollars to keep the rest of Wall Street afloat.

But while federal regulators did tighten bank capital requirements after the crisis, it’s not clear how much better they have become at predicting bank failures or assessing how much capital a bank really needs to cover potential losses.

Now, a new study led by Charles Lee at Stanford Graduate School of Business suggests that an unconventional approach can do a much better job at predicting bank failures — especially those several years down the road — than the main tools that regulators currently use.

Lee, a professor of accounting, has devoted much of his work to identifying hidden patterns in financial markets that investors can exploit to earn above-average returns.

To that end, he and his coauthors — Yanruo Wang at Nipun Capital and Qinlin Zhong at Renmin University of China — began to question the standard tools for anticipating bank failures.

The result: a new strategy based on the overall volatility of each bank’s loan portfolio. Banks whose default rates are more volatile, and banks that are less diversified across loan categories, are at greater risk of being blindsided by surging default rates in economic downturns. During the last financial crisis, banks and regulators alike were shocked that surging defaults in one area — subprime mortgages — could end up crippling the entire financial system. At one point during the crisis, even the biggest blue-chip corporations couldn’t borrow money for more than 24 hours at a time.

The question isn’t just whether a bank makes risky loans or whether it’s too concentrated in one or two categories, such as mortgages or construction loans.

The new study, based on data from more than 500 bank failures from 2003 through 2017, identified two largely overlooked risk factors that greatly magnify a bank’s problems when things begin to go bad.

The first is the volatility of default rates in a bank’s main lending categories: how much do those default rates swing from quarter to quarter? It turns out that the volatility of a category’s default rate is a bigger red flag than its average default rate.

The second big risk factor is whether a bank’s main loan categories have correlated default rates that are likely to spike up in unison, even though the loan types are different.

Taken together, the researchers found, volatile and correlated default rates form a combustible mix that can turn very bad very suddenly.
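To make the idea concrete, here is a minimal sketch of how such a measure could be computed, treating the loan book like an investment portfolio: the quarterly default rates of each loan category supply the volatilities and correlations, and the bank’s loan shares act as portfolio weights. This is an illustration in the spirit of the study, not the authors’ actual model, and every category name and number below is made up.

```python
import numpy as np

def portfolio_default_volatility(default_rates, weights):
    """Illustrative portfolio-level default-rate volatility.

    default_rates: array of shape (n_quarters, n_categories) holding each
        loan category's quarterly default rate.
    weights: array of shape (n_categories,) giving the share of the loan
        book in each category (should sum to 1).

    The covariance matrix captures both ingredients at once: how much each
    category's default rate swings quarter to quarter, and how strongly the
    categories move together. The result is the classic portfolio formula
    sqrt(w' C w).
    """
    default_rates = np.asarray(default_rates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    cov = np.cov(default_rates, rowvar=False)
    return float(np.sqrt(weights @ cov @ weights))

# Toy data: two volatile categories whose default rates move together,
# plus one calmer, largely uncorrelated category.
rng = np.random.default_rng(0)
construction = rng.normal(0.02, 0.015, 40)
commercial_re = construction + rng.normal(0.0, 0.003, 40)
consumer = rng.normal(0.02, 0.004, 40)
rates = np.column_stack([construction, commercial_re, consumer])

# A bank concentrated in the two correlated categories scores far riskier
# than one that spreads the same loan book across all three.
print(portfolio_default_volatility(rates, [0.42, 0.41, 0.17]))
print(portfolio_default_volatility(rates, [0.20, 0.20, 0.60]))
```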

Assessing Banks

The current regulatory gold standard for a bank’s health is called the Tier 1 Capital Ratio, which measures how much equity capital a bank has to cover potential losses and keep depositors whole. The ratio is based on the bank’s total equity capital in relation to its total volume of “risk-weighted assets.” Banks that hold a high percentage of loans in categories that have high default rates typically need more capital.
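In simplified form, the calculation looks like this. The risk weights below are illustrative stand-ins for the regulator-assigned percentages, and the whole function is a sketch rather than the full regulatory formula.

```python
def tier1_capital_ratio(tier1_capital, assets_by_weight):
    """Simplified Tier 1 ratio: core capital divided by risk-weighted assets.

    assets_by_weight: list of (asset_value, risk_weight) pairs, where the
        risk weight reflects how loss-prone regulators consider that asset
        class (the weights used below are illustrative).
    """
    risk_weighted_assets = sum(value * weight for value, weight in assets_by_weight)
    return tier1_capital / risk_weighted_assets

# Hypothetical bank: $800M of mortgages at a 50% weight, $400M of
# commercial loans at 100%, and $300M of government bonds at 0%,
# backed by $90M of Tier 1 capital.
ratio = tier1_capital_ratio(
    tier1_capital=90,
    assets_by_weight=[(800, 0.50), (400, 1.00), (300, 0.00)],
)
print(f"Tier 1 ratio: {ratio:.1%}")  # roughly 11% of risk-weighted assets
```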

But that gold standard frequently failed during the financial crisis. In the run-up to the crash, for example, Silver State Bank in Nevada enjoyed soaring profits and a Tier 1 ratio that qualified it as “well capitalized” in regulators’ eyes.

What the Tier 1 measure didn’t capture, Lee says, was that Silver State had made 83% of its loans in two closely related categories: construction and non-residential real estate. Historically, both categories have had volatile default rates — when things go bad, they go very bad. Worse yet, the two categories have highly correlated default rates. If defaults on non-residential real estate loans jump, defaults on construction loans often follow suit.

By Lee’s measure of loan portfolio risk, Silver State was severely undercapitalized at the end of 2006 and had a high risk of failing before the end of 2008. Silver State did in fact collapse in September 2008, costing the Federal Deposit Insurance Corporation between $450 million and $550 million.

“A single construction loan may not be risky, but a whole portfolio of loans tied to construction and real estate development could be very risky because their default rates vary together,” Lee says. “It’s the whole idea of diversification. You want banks to be diversified across loan types, because contagion is about correlation.”

A New Measuring Stick?

Although Silver State is an extreme example, many banks’ assets are heavily concentrated in a few loan categories. By analyzing the data on more than 500 bank failures, the researchers found compelling evidence that their measure provided a much better explanation of the bank failures than the Tier 1 ratio.

Specifically, Lee and his colleagues report, the Tier 1 ratio was able to anticipate only about 17% of the variation in bank failures that occurred from one year to the next. By contrast, their measure of loan portfolio risk predicted almost 25% of the variation — a major improvement. The new measure outperformed the Tier 1 ratio even more in predicting the likelihood of bank failures two, three, or even five years in the future. It also outperformed the so-called CAMELS rating, another widely used tool that measures indicators from capital and liquidity to management efficiency.

What that means, says Lee, is that some banks may need two or three times as much capital as the standard guidelines would suggest.

But the reverse can also be true. As measured by the Tier 1 ratio, the researchers found, JPMorgan Chase appeared shaky at the end of 2006 because a big share of its loans were in historically risky categories. By the new measure, however, JPMorgan Chase had almost no risk of failure because its main loan categories did not have volatile default rates and weren’t very correlated with each other. As it happened, JPMorgan Chase weathered the storm much better than any of its rivals.

Lee and his coauthors find that their new risk measure also has predictive power for banks’ stock price performance. During the financial crisis period (2007-2011), banks in the lowest 10% of the risk measure earned average annualized returns 40.2% higher than banks in the highest 10%. In fact, a majority of firms in the highest-risk decile failed, while firms in the lowest-risk decile experienced almost no failures.

Lee argues that the standard regulatory approach essentially misses the forest for the trees. It focuses intensely on the riskiness of the loans themselves, as measured by the average default rate, but overlooks how risky those loans are over time and in relation to one another.

“We want to measure the contagious nature of defaults,” Lee says. “The global banking system is very interconnected.”
