
Sensitivity-Based Neural Network Explanations

Kay Giesecke

Founder, Chairman and Chief Scientist, Professor at Stanford University

Enguerrand Horel

Senior Research Scientist, Upstart

Lidia Mangu

Head of Machine Learning Center of Excellence, JPMorgan

Tao Xiong

Machine Learning Center of Excellence, JPMorgan

Virgile Mison

Machine Learning Center of Excellence, JPMorgan

Although neural networks achieve very high predictive performance on tasks such as image recognition and natural language processing, they are often considered opaque "black boxes". The difficulty of interpreting a neural network's predictions often prevents its use in fields where explainability is important, such as the financial industry, where regulators and auditors frequently insist on it. In this paper, we present a way to assess the relative importance of a neural network's input features based on the sensitivity of the model output with respect to its input. This method has several advantages: it is fast to compute, it provides both global and local explanations, and it is applicable to many types of neural network architectures. We illustrate the performance of this method on both synthetic and real data and compare it with other interpretation techniques. The method is implemented in an open-source Python package that allows users to easily generate and visualize explanations for their neural networks.
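The core idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' package: it uses a toy function standing in for a trained network and estimates the sensitivity of the output to each input with finite differences (a trained model would typically supply analytic gradients instead). Per-sample sensitivities give local explanations; averaging their magnitudes over the data gives a global feature ranking. All function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model standing in for a trained network (hypothetical, for illustration):
# the output depends strongly on x0, moderately on x1, and not at all on x2.
def model(X):
    return np.tanh(3.0 * X[:, 0]) + 0.3 * X[:, 1] ** 2

def sensitivities(f, X, eps=1e-5):
    """Central-difference estimate of df/dx_j at each sample (local explanations)."""
    n, d = X.shape
    grads = np.zeros((n, d))
    for j in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grads[:, j] = (f(Xp) - f(Xm)) / (2 * eps)
    return grads

X = rng.normal(size=(500, 3))
local = sensitivities(model, X)                  # per-sample (local) sensitivities
global_importance = np.abs(local).mean(axis=0)   # aggregate into a global ranking
```

In this sketch the irrelevant feature x2 receives a near-zero score, while x0 and x1 are ranked by how strongly the output responds to them.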

This work was presented at the NeurIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy, Montréal, Canada, December 2018.

See also JP Morgan Publication Page.

About the Speakers


Kay Giesecke

Founder, Chairman and Chief Scientist, Professor at Stanford University

Kay Giesecke is the Founder, Chairman and Chief Scientist at Infima. He is also Professor of Management Science & Engineering at Stanford University, the director of the Advanced Financial Technologies Laboratory, and the director of the Mathematical and Computational Finance Program. Kay serves on the Governing Board and Scientific Advisory Board of the Consortium for Data Analytics in Risk. He is a member of the Council of the Bachelier Finance Society.

Kay is a financial technologist interested in solving the challenging modeling, statistical, and computational problems arising in fixed-income and credit markets. Together with his students at Stanford, Kay has pioneered the core elements of the deep learning and computational technologies underpinning Infima’s solutions.

Kay’s research has won several awards, including the JP Morgan AI Faculty Research Award (2019) and the Fama/DFA Prize (2011), and has been funded by the National Science Foundation, JP Morgan, State Street, Morgan Stanley, Swiss Re, American Express, Moody's, and several other organizations.

Kay has advised several financial technology startups and has been a consultant to banks, investment and risk management firms, governmental agencies, and supranational organizations.

Enguerrand Horel

Senior Research Scientist, Upstart

Enguerrand Horel obtained his PhD in Computational and Mathematical Engineering at Stanford University, where he developed and analyzed rigorous statistical approaches to explaining the behavior of machine learning models, especially deep learning models. During his doctoral studies, he worked in the AI Research teams at JP Morgan and Apple.

Lidia Mangu

Head of Machine Learning Center of Excellence, JPMorgan

Tao Xiong

Machine Learning Center of Excellence, JPMorgan

Virgile Mison

Machine Learning Center of Excellence, JPMorgan