
The 9/11 effect: Thinking about risk
Abridged from research by Mark Adelson, Teresa Cho, Javier Villanueva, and Lisle Leonard of Nomura Securities

Too often, the quantitative methods used in creating structured finance securities fail to reflect the real world when it matters most, during times of stress. Models and their underlying assumptions drawn from "normal" conditions should not be expected to perform well during unusual and extreme conditions. But, in the structured finance context, the primary purpose for elaborate and sophisticated models is often to predict the performance of securitized assets during unusual and extreme conditions. Ironically, the structured finance community expects the most from its quantitative models when they are inherently at their weakest.

No quantitative models predicted the attack on the World Trade Center or its consequences for structured financings. This is hardly a shortcoming of the models. Rather, it illustrates the need for professionals to fully acknowledge the limitations of their models and to think beyond the pat answers that models supply. Although the attack was unpredictable, it was really just an example of the class of events called "catastrophes." Specific catastrophes are always surprises when they happen. Otherwise, people would take action beforehand to prevent them or to protect against them.

However, the occurrence of catastrophes is not surprising in the least. Much of history is the study of such events and their consequences. The lesson is clear: Professionals need to understand the insufficiency of their quantitative models to capture the effect of catastrophes. When models' underlying assumptions break down or cease to have predictive relevance, then the resources of judgment, imagination, experience, and common sense become the primary tools for making real-world business decisions.

Examples of Weaknesses

The desire to construct technically rigorous models pushes us to use variables for which there are seemingly large data samples. For example, in constructing a quantitative model to describe the credit performance of residential mortgage loans, it is tempting to rely on the abundant and highly detailed data collected by private information vendors. Such data can provide an extremely comprehensive view of how mortgage loans perform during good and mildly recessionary times (such as 1990-91). Simple extrapolation can produce predictions about future performance under similar conditions. On the other hand, such data may not provide an equally reliable view of mortgage performance during more severe recessionary times. Although professionals may desire comparable data compilations covering the recessions of the early 1980s, the mid-1970s, and the Great Depression, that desire remains unsatisfied. Therefore, in defining the development sample for a quantitative model, structured finance professionals run the risk of excluding performance under extreme (adverse) conditions. In technical terms, this is using a biased development sample for building a model. Here, academics have warned practitioners, but the warnings sometimes fall on deaf ears.
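To make the sample-bias point concrete, the sketch below (in Python, with invented numbers rather than any vendor's actual data) fits a simple default-rate model on observations drawn only from benign-to-mild conditions and then extrapolates it to a severe recession outside the fitted range; the point is the shape of the error, not the specific figures.

```python
import numpy as np

# Hypothetical development sample: unemployment rate (%) versus observed
# mortgage default rate (%) during benign-to-mild conditions only.
unemployment = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0])   # assumed data
default_rate = np.array([0.9, 1.0, 1.2, 1.4, 1.7, 2.0])   # assumed data

# Fit a simple linear model on the (biased) development sample.
slope, intercept = np.polyfit(unemployment, default_rate, 1)

# Extrapolate to a severe recession far outside the fitted range.
stress_unemployment = 11.0
predicted = intercept + slope * stress_unemployment
print(f"Linear extrapolation predicts {predicted:.1f}% defaults at "
      f"{stress_unemployment:.0f}% unemployment")

# If defaults actually accelerate under stress, a model built only on
# good times will understate the tail, and nothing in the development
# sample will warn us.
```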

The relative infrequency of "catastrophes" creates great challenges for model builders. On a day-to-day basis, we observe phenomena that appear to be bound within certain ranges and relationships that appear stable. Interest rates, exchange rates, and commodity prices are examples of phenomena that generally appear range-bound. The relative long-term return on stocks and bonds is an example of a relationship that generally appears stable. After enough time, we are prone to conclude that such phenomena and relationships are immutable. Many have made such mistakes in the past...

For example, specifically in the structured finance arena, during the mid-1990s home equity and manufactured housing lenders embraced gain-on-sale accounting as a way to boost reported earnings per share. In recording gain upon the securitization of home equity loans or manufactured housing loans, lenders had to project prepayments. Those projections were drawn from prepayment models based on observations from the preceding few years. At the same time, the lending environment was experiencing secular change. Lenders were competing more aggressively and were becoming more assertive in soliciting borrowers to refinance their loans. Existing prepayment models failed to incorporate variables reflecting the heightened competition and the intensified solicitation. Accordingly, they systematically underestimated the prepayments that actually occurred. The recorded gains turned out to be illusory. The consequences were severe: many home equity and manufactured housing lenders went bust or ceased operations.
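As a purely hypothetical illustration of why the prepayment assumption dominated those recorded gains, the sketch below values a simplified residual interest under the assumed prepayment speed and under the faster speed that actually materialized; every figure (pool size, spread, speeds, discount rate) is invented.

```python
def residual_value(balance, excess_spread, annual_prepay, years, discount):
    """Present value of excess spread on a pool that shrinks only through
    prepayments -- a deliberately simplified gain-on-sale illustration."""
    pv = 0.0
    for t in range(1, years + 1):
        pv += balance * excess_spread / (1 + discount) ** t
        balance *= (1 - annual_prepay)   # pool shrinks as loans prepay
    return pv

pool = 100_000_000   # hypothetical pool balance
spread = 0.03        # 300 bp of annual excess spread (assumed)

gain_booked = residual_value(pool, spread, annual_prepay=0.10, years=10, discount=0.10)
value_realized = residual_value(pool, spread, annual_prepay=0.30, years=10, discount=0.10)

print(f"Gain booked assuming 10% annual prepayments:  ${gain_booked:,.0f}")
print(f"Value realized at 30% annual prepayments:     ${value_realized:,.0f}")
```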

More recently, prepayment models for conforming mortgage loans generally under-predicted the refinancing activity that has just occurred. Unprecedented declines in interest rates over the past year simply went past the outer limits of the development sample for such models. Moreover, over the past few years, the primary mortgage market has undergone changes that could not have been predicted by any model. The role of mortgage brokers has grown significantly and the market has become increasingly driven by lenders rather than borrowers. At the same time, the Internet has promoted faster refinancing activity by improving borrowers' access to information. Collectively, these factors significantly limited the reliability and predictive power of prepayment models.

However, even before the drop in interest rates, prepayment models for conforming mortgage loans were far from perfect. Consider the fact that each major investment bank has its own prepayment model and that those models produce a very wide range of predictions. This is evidence that the prepayment process does not lend itself fully to modeling. The process changes as the efficiency of refinancing improves. In technical terms, the prepayment phenomenon is a non-stationary process.
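In loose terms, "non-stationary" means that the relationship being modeled drifts over time. The sketch below illustrates the idea with an invented prepayment function whose sensitivity to the refinancing incentive keeps rising after the model is calibrated; the parameters are arbitrary and do not represent any dealer's model.

```python
def prepay_rate(incentive, sensitivity):
    """Annual prepayment rate as a simple function of the refinancing
    incentive (borrower's coupon minus the market rate), capped at 60%."""
    return min(0.60, max(0.02, sensitivity * incentive))

incentive = 0.015   # 150 bp refinancing incentive (assumed)

calibrated_sensitivity = 15.0   # what the historical development data showed
actual_sensitivity_by_year = [15.0, 20.0, 26.0, 33.0]   # brokers, solicitation,
                                                        # and the Internet keep raising it

predicted = prepay_rate(incentive, calibrated_sensitivity)
for year, s in enumerate(actual_sensitivity_by_year, start=1):
    print(f"Year {year}: model predicts {predicted:.0%}, actual {prepay_rate(incentive, s):.0%}")
```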

The fluctuation of interest rates is another example of a phenomenon that challenges modeling. Certain applications that require simulated interest rate paths use a "random walk with mean reversion"-type process: a mechanical process that treats interest rate fluctuations as random variables whose distributions can be described. At first blush, this seems reasonable. However, with further consideration, a potential flaw becomes apparent: in treating interest rates as "random variables" for modeling purposes, we implicitly assume that their future states will be ruled by the laws of probability, just like roulette or a game of dice. This assumption might be wrong, especially in the case of short-term interest rates. Rather than obeying the laws of probability, short-term interest rates seem to be governed by the actions of Chairman Alan Greenspan and his colleagues at the Federal Reserve.
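For readers less familiar with the mechanics, the sketch below simulates one standard form of such a process, a discretized Vasicek-style mean-reverting random walk; the starting rate, long-run mean, reversion speed, and volatility are illustrative choices, not parameters from the report.

```python
import numpy as np

def simulate_short_rate(r0, long_run_mean, speed, vol, steps, dt=1/12, seed=0):
    """Discretized mean-reverting random walk (Vasicek-style):
    r[t+1] = r[t] + speed * (long_run_mean - r[t]) * dt + vol * sqrt(dt) * z"""
    rng = np.random.default_rng(seed)
    rates = [r0]
    for _ in range(steps):
        r = rates[-1]
        rates.append(r + speed * (long_run_mean - r) * dt
                     + vol * np.sqrt(dt) * rng.standard_normal())
    return np.array(rates)

# Illustrative parameters only: a 2% short rate drifting back toward 5%.
path = simulate_short_rate(r0=0.02, long_run_mean=0.05, speed=0.5, vol=0.01, steps=120)
print(path[:5])
```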

The fact that short-term interest rates are not ruled by the laws of probability hardly means that we cannot, or should not, treat their fluctuations as random variables for modeling purposes. Rather, it means that we must remain mindful of having used an unrealistic assumption in the modeling process.

Back to the structured finance arena for a few more examples: generic credit scores based on data compiled by the national credit bureaus are often called FICO scores. The acronym FICO is derived from the name "Fair Isaac & Co.," which produces the statistical models that generate the credit scores. Many lenders use FICO scores as part of their lending processes and some incorporate FICO scores as part of their own proprietary scoring models. FICO scores are designed to express the likelihood that a consumer borrower will default. FICO scores have worked best in mainstream product areas and with borrower populations that mirror the population at large. When conditions are otherwise, lenders have experienced disappointment from their use of FICO scores. For example, in the high-LTV (125%) mortgage lending area, the actual frequency of defaults on loans originated in 1997 and 1998 was substantially higher than would have been implied by the borrowers' high FICO scores. The scoring models did not capture the impact of the borrowers' strong appetite for leverage. The models could not capture that effect because credit bureau databases do not contain data on leverage. Thus, the scoring models were missing a key factor.
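A stylized way to see the omitted-variable problem: the mapping below from score to expected default frequency, and the "observed" high-LTV outcomes, are invented numbers meant only to show how a model missing the leverage factor systematically understates risk for a leveraged subpopulation.

```python
# Hypothetical calibration: score -> expected default frequency in the
# general population (invented mapping for illustration).
expected_by_score = {660: 0.05, 700: 0.02, 740: 0.01}

# Hypothetical outcomes for high-LTV (125%) borrowers with the same scores;
# leverage, absent from credit bureau data, drives the gap.
observed_high_ltv = {660: 0.12, 700: 0.07, 740: 0.04}

for score, expected in expected_by_score.items():
    actual = observed_high_ltv[score]
    print(f"Score {score}: expected {expected:.0%}, observed {actual:.0%} "
          f"({actual / expected:.1f}x the model's estimate)")
```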

The appeal of quantification, the comfort of certainty and exactitude, has drawn some market players to conclude that quantitative models are the essence of securitization. For example, some market participants have eloquently expressed the opinion that computers and models permit structured finance professionals to predict asset performance with great reliability and precision. It is not an inherently irrational viewpoint, but we argue it is wrong because it ascribes unrealistic capabilities to models and computers. The fact that quantitative models may be even worse at predicting the performance of corporations and corporate securities is a faulty basis upon which to conclude that models are reliable in absolute terms in the securitization arena.

There is no disputing that quantitative models are essential tools for securitization professionals. The models are an indispensable part of the securitization process. Nevertheless, we argue that reliance on modeling can go too far. No matter how well we study the historical performance of various asset classes, we will never escape the limitations of the modeling process, including its limited precision and its inherently backward-looking nature.

Overcompensating

Following the attack on the World Trade Center, and the subsequent reports of deteriorating economic conditions, some market participants overreacted. Notwithstanding the huge magnitude of the tragedy in human terms, some behaved as though they expected a general financial and economic collapse. They had become so used to good times that even slightly bad economic times appeared terrible to them.

What we have experienced since 9/11 has been rather mild compared to what investment grade tranches of most structured financings are able to withstand. It would take a rather severe and prolonged downturn before investment-grade classes of most securitizations would really face a material risk of default.

An unemployment rate in the 5% ballpark is hardly the end of the world. Double-digit unemployment is a much rougher prospect. A recession that lasts fewer than six calendar quarters is not especially troubling. One that lasts more than a dozen consecutive quarters would create much greater difficulties. Right now, the strong consensus is that the economy will recover in 2002 or 2003. Nobody expects that the recession will persist until 2004 or 2005. The bottom line: a one- or two-year recession with mid-single-digit unemployment rates should not pose a significant threat to investment-grade tranches of securitizations.

In conclusion, our mathematical models generally will fail to capture the impact of rare and severe situations like the attack on the World Trade Center. Their rarity makes them outliers, and their severity encourages us to discard them as aberrations. In building models, we allow ourselves to use biased samples that overweight good times. We artificially simplify non-stationary processes. We choose distribution forms that are convenient, even if their tails are too thin. If we find it too difficult to quantify a seemingly relevant factor, we are prone to simply ignore it. Political and social factors rarely appear as variables. And yet, all this is acceptable, provided that we appreciate our models' limitations. We must not ask our models to carry more than they can bear. Certainly, after 9/11, we must have heightened sensitivity to such issues.
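To make the thin-tail point concrete, the sketch below compares the probability of a four-standard-deviation move under a convenient normal assumption with the same probability under a fatter-tailed Student-t; the distributions and threshold are generic illustrations, not calibrated to any asset class (the example assumes SciPy is available).

```python
from scipy.stats import norm, t

threshold = 4.0  # a "four sigma" move

# Two-sided tail probability under a thin-tailed normal assumption...
p_normal = 2 * norm.sf(threshold)
# ...versus a fat-tailed Student-t with 3 degrees of freedom.
p_student = 2 * t.sf(threshold, 3)

print(f"Normal tail beyond 4 sigma:       {p_normal:.2e}")
print(f"Student-t(3) tail beyond 4 sigma: {p_student:.2e}")
print(f"The convenient choice understates the tail by roughly {p_student / p_normal:,.0f}x")
```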

Quantitative models have a long track record of underestimating risk. This suggests that certain securities, which bear disproportionate credit or prepayment risk, will be more often rich than cheap. In other words, if the market relies heavily on a model for pricing a prepayment or credit risk, the "up in quality" trade will be the better strategy most of the time.

In ABS backed by new asset classes, there is opportunity to differentiate between situations where a modeling process has driven credit enhancement levels and those where it has not. When a modeling process has been the driving force, greater caution is warranted. The danger will lie in the underlying assumptions. Even when all the underlying assumptions are reasonable, equally reasonable alternative assumptions could produce drastically different answers in some models. Conversely, there will be less risk, and potential opportunities, in lower-rated tranches or in riskier asset classes when a quantitative model is used but is not necessarily the centerpiece of an analysis.
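As a stylized illustration of how "equally reasonable" assumptions can move the answer, the sketch below sizes loss coverage as a simple multiple of expected loss under two assumption sets; the default rates, severities, and multiple are invented and do not reflect any actual deal or rating approach.

```python
def enhancement(default_rate, severity, stress_multiple):
    """Credit enhancement sized as a multiple of lifetime expected loss --
    a deliberately simplified stand-in for a model-driven sizing exercise."""
    return default_rate * severity * stress_multiple

# Two sets of "equally reasonable" assumptions for a hypothetical pool.
base = enhancement(default_rate=0.04, severity=0.40, stress_multiple=4)
alt = enhancement(default_rate=0.06, severity=0.55, stress_multiple=4)

print(f"Base-case assumptions imply {base:.1%} credit enhancement")
print(f"Alternative assumptions imply {alt:.1%} credit enhancement")
```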

Switching viewpoint from that of a portfolio manager to that of an investment strategist, risk manager, or CFO changes things considerably. Most broadly, the modeling bias to underestimate risk argues toward limiting exposure to certain product areas (i.e., those most exposed to model risk). It also argues for imposing more intensive controls and oversight in those areas. However, this does not mean that investors should shy away from new asset classes or exotic ABS. Rather, it means that investment strategists, risk managers, and CFOs should consider exotic ABS and deals backed by new assets on a case-by-case basis with particular attention to how risks have been analyzed.

A model cannot justify poor business results any more than it can deserve credit for success. Responsibility for making business decisions rests on professionals, not models. Professionals will do their jobs better if they augment their models with the equally powerful tools of judgment, imagination, experience, and common sense.

For a copy of the entire report, call Teresa Cho at 917-639-4307
