Papers Read on AI

September 6, 2022  

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks

Recent work has shown great promise in explaining neural network behavior. In particular, feature attribution methods explain which features were most important to a model's prediction on a given input. However, for many tasks, simply knowing which features were important to a model's prediction may not provide enough insight to understand model behavior. The interactions between features within the model may better help us understand not only the model, but also why certain features are more important than others. In this work, we present Integrated Hessians, an extension of Integrated Gradients that explains pairwise feature interactions in neural networks. Integrated Hessians overcomes several theoretical limitations of previous methods for explaining interactions and, unlike those methods, is not limited to a specific architecture or class of neural network. Additionally, we find that our method is faster than existing methods when the number of features is large, and outperforms previous methods on existing quantitative benchmarks. Code is available online.
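To make the idea concrete: Integrated Hessians can be viewed as applying Integrated Gradients to itself, yielding a double path integral over second derivatives (plus a first-derivative term on the diagonal). Below is a minimal numerical sketch, not the authors' implementation: it assumes a zero baseline and a toy function f(x1, x2) = x1 * x2 with hand-coded gradient and Hessian (a real use would obtain these via automatic differentiation of a network), and approximates the double integral with a midpoint Riemann sum.

```python
import numpy as np

# Toy model with one explicit pairwise interaction: f(x1, x2) = x1 * x2.
# Gradient and Hessian are written out analytically for this sketch; in
# practice they would come from autodiff of a trained network.
def grad(x):
    return np.array([x[1], x[0]])

def hess(x):
    return np.array([[0.0, 1.0],
                     [1.0, 0.0]])

def integrated_hessians(x, steps=200):
    """Midpoint Riemann-sum approximation (zero baseline) of
      Gamma_ij = x_i * x_j * integral over a,b of a*b*H_ij(a*b*x)
    with the extra diagonal term
      x_i * integral over a,b of dF/dx_i(a*b*x),
    which arises from differentiating Integrated Gradients a second time."""
    n = len(x)
    gamma = np.zeros((n, n))
    ts = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    for a in ts:
        for b in ts:
            point = a * b * x
            gamma += a * b * np.outer(x, x) * hess(point)  # off/on-diagonal term
            gamma += np.diag(x * grad(point))              # diagonal-only term
    return gamma / steps**2

x = np.array([2.0, 3.0])
gamma = integrated_hessians(x)
# Interaction completeness: all entries sum to f(x) - f(baseline) = 6.
print(gamma.sum())
```

The matrix is symmetric, and summing every entry recovers f(x) - f(baseline), the interaction analogue of the completeness axiom that Integrated Gradients satisfies for single-feature attributions.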

2020: Joseph D. Janizek, Pascal Sturmfels, Su-In Lee

Interaction, Artificial neural network, Integrated Gradients

https://arxiv.org/pdf/2002.04138v3.pdf