Measuring Financial Risk - dummies

By Aaron Brown

If you observe a financial risk management department in a financial institution, you may conclude that its main job is measuring risk. Indeed, that’s what most of its people are doing most of the time. Its main tangible output seems to be reports filled with measurements.

You may ask why a financial institution needs a risk department. After all, lots of departments – accounting, audit, financial control, and information technology (IT) – already measure things, and business units produce their own measurements and reports. What’s the point of one more set of measurements? Why not just have a few risk people to set risk policies and make risk decisions, and either fire the rest or move them into the existing measurement groups?

The answer is that all those other groups are measuring things that exist today, while the risk management department attempts to put useful measurements around what might happen tomorrow. Measuring tomorrow’s risk is a fundamentally different task, one that requires its own set of tools and skills. You can compare it to running a board of elections to count the votes cast in today’s election versus a pollster trying to guess who might win tomorrow’s election; or an official keeping score in a football match being played now versus a bookmaker setting a betting line for a future match.

When measuring things that exist today, the most important attributes of the measurement are generally accuracy and precision. Risk managers would like to have both accuracy and precision, but they’re difficult to come by when you’re looking at the uncertain future.

The first demand of risk managers is risk-sensitive measurements – measurements that reliably go up when risk increases and down when risk decreases. For example, suppose that you’re designing a measure of liquidity risk for a financial institution. Liquidity is an expansive concept and includes situations such as not having the cash to meet an investor redemption demand, not being able to sell an asset that no longer fits your portfolio, and not having the accurate market prices you need for your assets for legal and regulatory purposes.

So, no single number can capture all aspects of liquidity risk. However, you don’t want an 80-page report of numbers that no one will ever use. Your goal is to put together a set of metrics that are short, clear and simple enough to be useful but sophisticated enough to capture the full range of liquidity risks.
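To make the idea concrete, here is a minimal sketch of what such a compact metric set might look like. The metric names (days to liquidate, cash coverage, stale-price fraction), the toy portfolio model and the staleness threshold are all illustrative assumptions, not an industry standard – each metric tracks one of the liquidity situations described above.

```python
# A minimal sketch of a compact liquidity-metric set. The Position model,
# metric names and thresholds are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Position:
    value: float           # current market value
    daily_volume: float    # value tradable per day without moving the price
    days_since_price: int  # age of the last reliable market price

def days_to_liquidate(positions):
    # Worst single-position horizon: how long to exit at normal volume.
    return max(p.value / p.daily_volume for p in positions)

def cash_coverage(cash, expected_redemptions):
    # Can cash on hand meet expected investor redemption demands?
    return cash / expected_redemptions

def stale_fraction(positions, max_age_days=5):
    # Share of portfolio value lacking a recent, reliable market price.
    total = sum(p.value for p in positions)
    stale = sum(p.value for p in positions if p.days_since_price > max_age_days)
    return stale / total

portfolio = [
    Position(value=10_000_000, daily_volume=2_000_000, days_since_price=1),
    Position(value=5_000_000, daily_volume=250_000, days_since_price=12),
]
print(days_to_liquidate(portfolio))         # 20.0 days for the illiquid position
print(cash_coverage(3_000_000, 1_500_000))  # 2.0x redemption coverage
print(stale_fraction(portfolio))            # one third of value has stale prices
```

Three numbers, each tied to one distinct liquidity risk, is the kind of short, clear report the text describes – far from exhaustive, but each metric is risk sensitive to its factor.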

If your report is miscalibrated – giving twice as much attention to a relatively minor liquidity risk as to a major one, or sounding alerts when safety margins are still adequate – it can still be very useful, especially compared to not having any systematic monitoring at all. It can also be improved. However, if your report doesn’t change in response to something that affects liquidity – if it isn’t risk sensitive to some factor – that factor becomes invisible, and invisible risks can kill.

The rule in risk measurement is to make sure that every relevant factor you can measure is included in a directionally correct way. In practice, this means including some badly understood factors, factors that can’t be easily measured, factors that have a lot of measurement error, and factors that gum up the system because they’re available at non-standard times or in non-standard formats.
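One way to apply the rule is sketched below: every factor enters the score with the correct sign, and a factor that is missing or late widens the reported risk band instead of being silently dropped. The factor names, the [0, 1] scaling and the equal weights are hypothetical assumptions for illustration.

```python
# Sketch of directionally correct inclusion: factor names, scaling and
# weights are hypothetical. Each factor is a (reading, sign) pair, where
# sign=+1 means a higher reading indicates more risk, -1 means less risk.
def risk_score(factors):
    """Return a (low, high) risk band over factors in [0, 1].
    A missing reading (None) widens the band rather than vanishing."""
    low = high = 0.0
    for reading, sign in factors.values():
        if reading is None:
            high += 1.0  # unknown factor: could be anywhere in [0, 1]
        else:
            v = reading if sign > 0 else 1.0 - reading
            low += v
            high += v
    return low, high

factors = {
    "redemption_pressure": (0.7, +1),   # higher = more risk
    "market_depth":        (0.4, -1),   # higher = less risk
    "price_staleness":     (None, +1),  # feed arrived late today
}
print(risk_score(factors))  # band of roughly (1.3, 2.3); the late feed widens it
```

The design choice is the point: excluding the late feed would make the score look cleaner and more precise, but the price-staleness factor would become invisible – exactly the failure mode the rule guards against.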

IT people, auditors, compliance people and lawyers may well fight you on including all these factors because it makes their jobs harder. Other people may resist including all this data because they’re offended by messiness.

You often won’t be able to explain why the measures have changed or what to do to get them back to their old values. But a risk manager has to stand up for risk-sensitive measures, or he becomes an appearance-of-risk manager. A neat, consistent risk report that everyone loves is a report based on proxies, assumptions, exclusions, stale data, smoothing and other features that make important risks invisible.