Know Your Models
On October 10, 2019, we held our second annual U.S. Pricing & Valuations conference in New York City. Following the conference, we've published a series of discussion summaries from the afternoon's panels and presentations. We hope you find value in these key takeaways.
The following is a collection of key discussion points as summarized by IHS Markit. The views noted are not directly attributed to any particular panel participant or their respective firms.
Know Your Models Panel Summary
Our Know Your Models panel was moderated by Keldon Drudge, vice president and head of the Quantitative Analytics Group at IHS Markit, and comprised the following investment banking quantitative model experts:
- Alexander Denev, Head of AI - Financial Services Advisory, Deloitte LLP
- Louis Scott, Federal Reserve Bank of New York
- Manoj Singh, Managing Director, Model Risk Management, Bank of America
- Greg Yuhas, Director of Quantitative Analysis, Capital One
Wall Street has been using quantitative models for pricing and risk for nearly 40 years, but regulators started paying closer attention to this function after the 2008 financial crisis. The US Federal Reserve and the Office of the Comptroller of the Currency issued SR 11-7, their joint guidance on model risk management, in 2011.
More recently, the definition of risk models has stretched to cover both simpler quantitative calculations and more complex capabilities, such as artificial intelligence (AI) and machine learning models, which were not previously in the picture.
Models can now even include chatbots, fraud detection programs and anti-money laundering algorithms. After all, chatbots are built on statistical models.
To contend with these changes, market professionals should look at aspects including:
- Stringency of model validation
- Whether a model is being retrofitted
- The impact of the model both downstream and upstream in the risk management process
- Whether and how AI can be used to validate inputs to the model
- How feature extraction, powered by AI, works in a model
The stringency of model validation is based on:
- How entrenched the model is in your business and your risk management
- How much and how often the model is being tested, validated, documented and reviewed
- How critical the model is to your business
- Whether the model contains CCAR (Comprehensive Capital Analysis and Review) and pricing/risk elements
- Who is using the model and how they are using it. For instance, traders using models for pricing get more scrutiny than research analysts using models for publication
Retrofitting a model means taking a model that was built for a specific function and applying it to a function that is similar, but not exactly the same. The danger in this practice is that the model may not be well suited to the risk or analytics needs of the market activity it frames.
When assessing a model's impact throughout the risk management process, a model may not seem important at first. But once you consider all the metrics it calculates for different parts of the business, that model could prove far more important than you realize.
Using AI to validate inputs to the model can:
- Eliminate the more tedious and time-consuming parts of model validation
- Effectively mean you're using another model, the AI itself, to validate your model, so you could end up with two models to validate (see the sketch after this list)
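To make these two points concrete, here is a minimal sketch of AI-assisted input validation using scikit-learn's IsolationForest as the anomaly detector. The input features, training data and contamination rate are all hypothetical; the takeaway is that the validator is itself a second model that will need its own validation.

```python
# Sketch: using an anomaly-detection model to screen inputs before they
# reach a pricing model. IsolationForest is one common choice; the
# feature names, distributions and contamination rate are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical inputs: (spread_bps, volatility, tenor_years)
historical_inputs = rng.normal(loc=[150.0, 0.20, 5.0],
                               scale=[30.0, 0.05, 2.0],
                               size=(1000, 3))

# Train the validator on inputs the pricing model has seen before
validator = IsolationForest(contamination=0.01, random_state=0)
validator.fit(historical_inputs)

# Screen today's inputs; -1 marks an anomaly to route for review.
# Note the validator is itself a model subject to validation.
todays_inputs = np.array([[155.0, 0.21, 5.0],    # plausible
                          [900.0, 1.50, 5.0]])   # fat-fingered?
flags = validator.predict(todays_inputs)
for row, flag in zip(todays_inputs, flags):
    status = "OK" if flag == 1 else "FLAGGED for manual review"
    print(row, status)
```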
Using AI to perform feature extraction means:
- Applying AI to try to determine what other variables can be added to the model
- Applying AI to answer why certain variables have certain effects in the model (illustrated below)
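As a rough illustration of the second point, one common technique is permutation importance: shuffle one variable at a time and measure how much the model's accuracy degrades. The sketch below uses scikit-learn on synthetic data, with feature names invented for the example.

```python
# Sketch: using permutation importance to ask which candidate variables
# actually drive a model's output. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
feature_names = ["rate_level", "credit_spread", "fx_move", "noise"]
# Synthetic target: only the first two variables actually matter
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank variables by how much shuffling them hurts the model
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>14s}: {score:.3f}")
```

In this synthetic setup, rate_level and credit_spread score highly while noise scores near zero, which is the kind of evidence that helps explain which variables belong in a model and why.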
Aside from using AI for validation, other questions and considerations when validating models include:
- Are the assumptions reasonable for the current market environment?
- Has the market environment or regime changed since the model was originally created?
- Intrinsic risk - is the model extremely complicated?
- Outcome analysis - how would the model have performed during the Great Financial Crisis? Some models are better suited than others to outcome analysis.
- Determination of the circumstances in which the model can't or won't work
- Determining how to break the model
- Conceptual soundness
- Vendor models require the same level of model validation as in-house models
- Ongoing model validation - simple daily auto-validation tasks for simple models can help to identify unexpected breakages (see the sketch after this list)
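The last point lends itself to automation. Below is a minimal sketch of a daily auto-validation task, assuming hypothetical pricing and benchmark functions: re-price a set of control trades, compare against a benchmark source, and alert on any deviation beyond a tolerance.

```python
# Sketch: a simple daily auto-validation task. Re-price a set of control
# trades, compare against a benchmark source, and alert on breaches.
# The tolerance and the pricing/benchmark functions are placeholders.
TOLERANCE_BPS = 5.0

def daily_validation(trades, price_with_model, benchmark_price):
    """Return the trades whose model price deviates from the benchmark
    by more than TOLERANCE_BPS; an empty list means the model passed."""
    breaches = []
    for trade in trades:
        model_px = price_with_model(trade)
        bench_px = benchmark_price(trade)
        deviation_bps = abs(model_px - bench_px) / bench_px * 1e4
        if deviation_bps > TOLERANCE_BPS:
            breaches.append((trade, model_px, bench_px, deviation_bps))
    return breaches

# Hypothetical usage with stub price lookups standing in for real systems
if __name__ == "__main__":
    trades = ["IRS_5Y_USD", "IRS_10Y_USD"]
    model = {"IRS_5Y_USD": 100.02, "IRS_10Y_USD": 99.10}
    bench = {"IRS_5Y_USD": 100.01, "IRS_10Y_USD": 99.95}
    for trade, m, b, dev in daily_validation(trades, model.get, bench.get):
        print(f"ALERT {trade}: model={m} benchmark={b} ({dev:.1f} bps)")
```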
Other concerns around knowing your models include:
- Finding talent to perform model validation work - quants are less interested in this role, and you need someone with prior experience in validation work
- Benchmarking model outcomes
- Calibrating and back-testing pricing models against actual traded levels (a simple example follows)
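On the last point, a back-test can be as simple as comparing a model's prices against the levels at which instruments actually traded and tracking the bias and dispersion of the errors. The sketch below uses made-up price series and a hypothetical 0.25-point tolerance.

```python
# Sketch: back-testing a pricing model against actual traded levels.
# The traded and model price series below are synthetic placeholders.
import numpy as np

traded = np.array([101.2, 99.8, 100.5, 98.9, 100.1])
modeled = np.array([101.0, 100.1, 100.4, 99.5, 100.0])

errors = modeled - traded
print(f"mean error : {errors.mean():+.3f}")               # bias
print(f"RMSE       : {np.sqrt((errors**2).mean()):.3f}")  # dispersion
within = np.abs(errors) <= 0.25  # hypothetical tolerance
print(f"hit rate   : {within.mean():.0%} within 0.25 of traded level")
```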
Read more summaries from the 2019 U.S. Pricing & Valuations Conference.
S&P Global provides industry-leading data, software and technology platforms and managed services to tackle some of the most difficult challenges in financial markets. We help our customers better understand complicated markets, reduce risk, operate more efficiently and comply with financial regulation.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.