From Black Box to Glass Box: Interpreting Your Model

December 04, 2024 · 29 min

Description

In this session, Zachary Carrico explores techniques for turning complex neural networks from "black boxes" into interpretable "glass boxes." The talk covers a range of interpretability techniques for neural networks, including saliency maps, integrated gradients, Grad-CAM, SHAP, and activation maximization. It pairs theoretical explanations with practical demonstrations to help attendees improve transparency and trust in neural network predictions, reduce bias, and enhance model performance. The presentation also covers how to choose an interpretability technique, how to address biases through data and algorithmic adjustments, and how to communicate findings to stakeholders.
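To give a flavor of the simplest technique on that list, here is a minimal saliency-map sketch in PyTorch. It is not code from the talk: the model and input are toy stand-ins, and the idea is just to show that a saliency map is the gradient of the predicted class score with respect to the input pixels.

```python
# Minimal saliency-map sketch (illustrative only; toy model and random input).
import torch
import torch.nn as nn

# A toy classifier standing in for a real trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# A toy "image" that requires gradients so we can attribute the score to pixels.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

scores = model(x)
top = scores.argmax(dim=1).item()

# Backpropagate the top-class score to the input; the absolute gradient
# per pixel is the saliency map (larger = more influence on the prediction).
scores[0, top].backward()
saliency = x.grad.abs().squeeze()  # shape (28, 28)
```

Techniques like integrated gradients and Grad-CAM refine this same idea: instead of a raw input gradient, they average gradients along a path from a baseline input, or weight convolutional feature maps by their gradients.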