Interest rate risk (IRR) analysis is not intended to dictate how management should react to changes in interest rates; rather, it is a tool for understanding how current actions may affect future earnings across varying rate scenarios. Deploying successful strategic initiatives therefore begins with building confidence that decisions are based on accurate forecasts, giving stakeholders comfort that those decisions will be effective. In an asset/liability management (ALM) model, stakeholders need to know that earnings are forecasted to react appropriately to changes in interest rates.
In short, what stakeholders want to see is proof of a correct implementation of a functioning model, capable of identifying and modeling all balance sheet characteristics while applying empirically based assumptions to accurately forecast IRR exposure. The key components for an accurate and effective model are data quality, model setup and segmentation, model assumptions, and risk reporting.
Ensuring Data Quality
Extracting quality data is a core element of IRR modeling. The output is only as good as the input; as the old adage goes, "garbage in, garbage out."
The precision of an ALM simulation model depends on the quality of the input data and the category designs that capture the data. ETL, which stands for extract, transform and load, is a data integration process that combines data from multiple data sources (core system extracts, GL, and bond accounting) and transforms it into consistent data to load into the ALM model.
During this process, in the transform staging area, data undergoes processing such as cleansing, filtering, translation, formatting, and/or aggregation. In an ALM model, the data is aggregated so like-term products can be mapped to capture all meaningful positions and exposures. For example, are interest-only loans appropriately mapped to a model category that is set up accordingly? Available data fields need to be reviewed to make sure the correct fields are being utilized and loaded.
As a final step, reconciliation needs to occur at both the data extract level and after the data is loaded to confirm no data is lost in the process. Any variances to the general ledger need to be tracked and compared to defined thresholds.
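The load-level reconciliation described above can be sketched in a few lines. This is a minimal, hypothetical example: the category names, balances, and the variance threshold are all illustrative, not taken from any particular institution or system.

```python
# Hypothetical post-load reconciliation check against the general ledger.
# All account names, balances, and the threshold are illustrative assumptions.

GL_BALANCES = {"residential_mortgages": 152_430_000, "cds": 48_210_000}
MODEL_BALANCES = {"residential_mortgages": 152_427_500, "cds": 48_210_000}
THRESHOLD = 5_000  # maximum tolerated variance per category, in dollars


def reconcile(gl, model, threshold):
    """Return categories whose loaded balance differs from the GL by more than the threshold."""
    exceptions = {}
    for category, gl_balance in gl.items():
        variance = gl_balance - model.get(category, 0)
        if abs(variance) > threshold:
            exceptions[category] = variance
    return exceptions


# An empty result means every category is within its defined threshold.
print(reconcile(GL_BALANCES, MODEL_BALANCES, THRESHOLD))  # -> {}
```

In practice the same check would run twice, once against the raw extract and once against the loaded model data, with any exceptions logged and tracked over time.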
Effective Model Set-up and Segmentation
General ledger charts of accounts are rarely detailed enough to provide sufficient insight into asset/liability management. The goal is to segment key behaviors and optionality. For example, investments need to be segmented by callability; this enables the modeler to confirm appropriate treatment of call options in reporting. Loans need to be segmented based on amortization types and balloons, collateral or term for mortgages, and repricing indices. On the liability side, segment non-maturity deposits by tiers when pricing strategies are different.
This segmentation helps set up the proper chart of account settings to effectively outline the contractual aspects of data, as well as the forecasting and handling of new balances. It also allows assumptions to be applied more specifically to the segmented behavior.
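One way to picture this mapping step is as a simple classification function from loan-level fields to model categories. The field names (`amort_type`, `collateral`, and so on) and the category labels here are hypothetical, just to show the shape of the segmentation logic.

```python
# Illustrative chart-of-account mapping for loan segmentation.
# Field names and category labels are assumptions for the sketch.

def model_category(loan):
    """Map a loan record to an ALM model category by amortization type, collateral, term, and index."""
    if loan["amort_type"] == "interest_only":
        return "loans_io"
    if loan["amort_type"] == "balloon":
        return f"loans_balloon_{loan['balloon_term_yrs']}yr"
    if loan["collateral"] == "residential":
        # Mortgages split by original term and rate type so term-specific
        # prepayment and repricing assumptions can be applied later.
        return f"mortgage_{loan['orig_term_yrs']}yr_{loan['rate_type']}"
    return f"loans_{loan['rate_type']}_{loan['repricing_index']}"


loan = {"amort_type": "level", "collateral": "residential",
        "orig_term_yrs": 30, "rate_type": "fixed"}
print(model_category(loan))  # -> mortgage_30yr_fixed
```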
With any form of modeling, for the results to be effective and accurate the institution must confirm that the assumptions utilized are both reasonable and supportable. Using unrealistic or overly conservative assumptions can result in an inaccurate picture of an institution’s risk exposure, potentially resulting in flawed strategies or missed opportunities.
The development of key asset and deposit assumptions must have a systematic approach. Assumptions need to be derived based on the following.
- Assumptions need to be based on institution-specific empirical data.
Historical institution-specific data needs to be analyzed to provide the appropriate assumptions. The time period analyzed should depend on the assumption. For example, to derive historical deposit repricing assumptions, we want to analyze multiple rate cycles to understand the difference in pricing strategies for rising and falling rates. The use of industry assumptions or peer data without considering institution-specific factors is often deemed unsupportable.
- Assumptions need to be specific to the product category.
Institutions tend to aggregate products; this over-simplification of balance sheet categories can lead to incorrect assumptions. For example, residential mortgages and mortgage-backed securities should be modeled based on the underlying collateral: a 15-year mortgage prepays differently than a 30-year mortgage. Applying a single prepayment assumption to a mixed portfolio will inaccurately forecast prepayments.
- Consider qualitative overlays.
For institution-specific historical data, consider whether a management overlay is needed based on the current market. For example, for non-maturity deposit runoff, does the impact of COVID-era transient balances need to be adjusted going forward? For deposit betas, historical values may not capture the impact of inflation.
- Update assumptions regularly.
Stale and unsupported assumptions are a common finding. Assumptions should be derived from a systematic, repeatable process that allows them to be updated in a timely manner. For example, new-business pricing changes as rates change; if the model owner is not updating these assumptions regularly, the pricing assumptions are not supportable.
- Finally, and most important, test and present.
An ALM Model has many assumptions, but management needs to understand how these assumptions drive the results and how the results would change based on changes in the assumptions. Sensitivity testing around the most impactful assumptions needs to be conducted regularly and presented to management.
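The earlier point about product-specific prepayment assumptions can be illustrated numerically. This sketch uses hypothetical balances and constant CPRs (and ignores scheduled amortization for simplicity): in year one a balance-weighted blended rate matches the segmented projection exactly, but the forecasts diverge thereafter because the faster-prepaying pool shrinks faster.

```python
# Hypothetical comparison: segmented vs. pooled prepayment assumptions.
# Balances (in $M) and CPRs are illustrative, not empirical.

def project_prepayments(balance, cpr, years):
    """Project annual prepayment cash flows at a constant CPR, ignoring scheduled amortization."""
    flows = []
    for _ in range(years):
        prepay = balance * cpr
        flows.append(prepay)
        balance -= prepay
    return flows


# Segmented: each collateral term keeps its own prepayment speed.
seg = [sum(pair) for pair in zip(
    project_prepayments(100.0, 0.20, 5),    # 15-year collateral, faster prepay
    project_prepayments(100.0, 0.08, 5))]   # 30-year collateral, slower prepay

# Pooled: one balance-weighted blended CPR (0.14) on the combined portfolio.
pooled = project_prepayments(200.0, 0.14, 5)

for year, (s, p) in enumerate(zip(seg, pooled), start=1):
    print(f"Year {year}: segmented {s:6.2f}  pooled {p:6.2f}")
```

The pooled forecast matches in year one by construction, then overstates prepayments in later years because it never re-weights toward the slower 30-year pool.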
Testing Assumptions Is a Vital Component
Any financial model has assumptions, and even when we take all the right steps to determine a supportable assumption, it is still our "best guess." It is therefore vital to quantify the risk around the key assumptions and educate management on how much the results could change as those assumptions move. Sensitivity testing around the most impactful assumptions needs to be conducted regularly and presented to management; this quantifies the exposure of the results to the assumptions.
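A deposit beta is a classic candidate for this kind of sensitivity test. The sketch below uses a deliberately stylized one-line net interest income calculation; every figure (balances, betas, the 200 bp shock) is a hypothetical assumption chosen only to show how the sensitivity grid is built and presented.

```python
# Stylized sensitivity test of NII to the deposit beta assumption.
# All balances ($M), betas, and the rate shock are illustrative assumptions.

def nii_change(rate_shock, deposit_beta, asset_beta=0.60,
               earning_assets=500.0, rate_sensitive_deposits=400.0):
    """Change in net interest income ($M) for a parallel rate shock, given repricing betas."""
    asset_repricing = earning_assets * asset_beta * rate_shock
    deposit_repricing = rate_sensitive_deposits * deposit_beta * rate_shock
    return asset_repricing - deposit_repricing


base = nii_change(0.02, deposit_beta=0.40)  # base-case beta assumption
for beta in (0.30, 0.40, 0.50):
    delta = nii_change(0.02, beta) - base
    print(f"beta={beta:.2f}: NII change = {nii_change(0.02, beta):+.2f}  vs base {delta:+.2f}")
```

Presenting a grid like this lets management see directly how much of the forecasted exposure hinges on a single assumption.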
Another key test is back-testing. Back-testing confirms that the current practice of deriving assumptions is within reason and supports strategic decisions. A back-test should also be performed on model results. In each case, the back-test compares forecasts to actuals, providing an additional layer of confidence in the process and the model.
The Bottom Line
Your degree of model confidence can make or break your relationship with management and ultimately determine their buy-in. To earn the trust of your stakeholders, your model should feature quality data inputs, proper product segmentation, and reasonable and supportable assumption inputs based on empirical data. Finally, test assumptions and quantify the exposure to variables and back-test model results to prove accuracy repeatedly.
If you are planning on maximizing your IRR model’s potential to win over your stakeholders, feel free to download our guide here.
Written by Chris Mills, Senior Director
About the Author
Chris has over 25 years of experience in financial institution modeling and has been leading MVRA’s model validation services and core deposit/loan analyses teams supporting strategic balance sheet and risk management for over 8 years. She brings a wide range of expertise across treasury, asset/liability management, and model risk assessment processes. Experienced with multiple ALM models, she also is skilled in capital modeling, capital markets, liquidity and contingency funding planning, funds transfer pricing, model risk governance practices, and investment banking.