Loyola University Chicago AI Business Consortium
Housed in the Loyola Business Leadership Hub

Performance Is Not All You Need

Developing and Validating High Risk Machine Learning

We are experiencing unprecedented adoption of complex machine learning models across industries. However, this rapid adoption has been accompanied by a string of high-profile model failures, casting a shadow over the enthusiasm surrounding artificial intelligence. Against this backdrop, the battle-tested principles and lessons learned from over a decade of model risk management practice in the banking sector offer a guiding light on how to harness the power of machine learning safely.

One of the cornerstones of model risk management is model validation, a critical safeguard that extends beyond the traditional realms of software development. Unlike deterministic code, machine learning models are inherently probabilistic and prone to mistakes, necessitating a proactive approach to identify and mitigate risks before deployment.

The key components of model validation are twofold: conceptual soundness and outcome analysis. Ensuring conceptual soundness involves a rigorous examination of data quality and suitability, variable selection, model interpretability, and benchmarking against alternative methodologies. This process is essential to validate the model's underlying assumptions and its suitability for the intended use case.
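
As an illustration of what such a benchmarking exercise might look like in practice, the sketch below compares a complex candidate model against a simpler, interpretable alternative on the same data and checks whether the two agree on the main drivers of the prediction. The use of scikit-learn, the synthetic dataset, and the specific model choices (gradient boosting versus logistic regression) are assumptions made for the example, not part of the talk.

```python
# Minimal sketch of a conceptual-soundness benchmark: compare a complex
# candidate model against a simpler, interpretable alternative on the same
# data. The dataset here is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=5000, n_features=20, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

candidate = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
benchmark = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("candidate (GBM)", candidate),
                    ("benchmark (logistic)", benchmark)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:22s} AUC = {auc:.3f}")

# Variable-selection sanity check: do the drivers of the complex model
# broadly agree with the interpretable benchmark's coefficients?
imp = permutation_importance(candidate, X_test, y_test,
                             n_repeats=10, random_state=0)
top_candidate = np.argsort(imp.importances_mean)[::-1][:5]
top_benchmark = np.argsort(np.abs(benchmark.coef_[0]))[::-1][:5]
print("Top features, candidate :", top_candidate)
print("Top features, benchmark :", top_benchmark)
```

Only a marginal lift over the transparent benchmark, or strong disagreement about which variables drive the predictions, would be grounds to question the complex model's conceptual soundness.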

Outcome analysis, on the other hand, transcends mere performance metrics. It entails a comprehensive assessment of model weaknesses, including the identification of conditions that impact output reliability, robustness against noise and corrupted data, and performance under usage drift – scenarios where the operational environment deviates from the training data.
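
The sketch below shows one way such checks might be scripted: it measures how a fitted model's discrimination degrades as inputs are corrupted with noise and as the feature distribution drifts away from training conditions. The dataset, the noise scales, and the simulated mean shift are illustrative assumptions only, not a prescribed test suite.

```python
# Minimal sketch of outcome analysis beyond a single headline metric: probe a
# fitted model for robustness to input noise and for behaviour under a
# simulated usage drift. Data and perturbation sizes are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

def report_auc(X_eval, label):
    auc = roc_auc_score(y_test, model.predict_proba(X_eval)[:, 1])
    print(f"{label:24s} AUC = {auc:.3f}")

report_auc(X_test, "clean test set")

# Robustness: corrupt the inputs with additive Gaussian noise of growing scale.
for scale in (0.1, 0.5, 1.0):
    report_auc(X_test + rng.normal(0.0, scale, size=X_test.shape),
               f"noise, sigma={scale}")

# Usage drift: evaluate under a feature distribution the model never saw in
# training (here, a crude mean shift applied to every feature).
report_auc(X_test + 1.5 * X_test.std(axis=0), "simulated usage drift")
```

A sharp drop under modest noise or drift flags conditions under which the model's outputs should not be relied upon, which is precisely the kind of weakness outcome analysis is meant to surface before deployment.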

As the frontier of AI continues to push boundaries, adhering to these time-tested principles of model validation becomes paramount. By embracing a proactive model risk management approach, we can unlock the transformative potential of AI while safeguarding against its pitfalls, paving the way for the confident deployment of safe and sound models.

About the keynote speaker

Agus Sudjianto

Agus Sudjianto is the former executive vice president, head of Model Risk, and a member of the Management Committee at Wells Fargo, where he was responsible for enterprise model risk management.

Prior to Wells Fargo, Agus was the modeling and analytics director and chief model risk officer at Lloyds Banking Group in the United Kingdom. Before joining Lloyds, he was an executive and head of Quantitative Risk at Bank of America. 

Prior to his career in banking, he was a product design manager in the Powertrain Division of Ford Motor Company. 

Agus holds several U.S. patents in both finance and engineering. He has published numerous technical papers and is a co-author of Design and Modeling for Computer Experiments. His technical expertise and interests include quantitative risk, particularly credit risk modeling, machine learning and computational statistics. 

He holds master's and doctorate degrees in engineering and management from Wayne State University and the Massachusetts Institute of Technology.
