DL-Backtrace: Unveiling Model Decisions
Description
In today's rapidly evolving AI landscape, deep learning models have become increasingly complex and opaque, often functioning as "black boxes" that make decisions without transparent reasoning. This lack of explainability raises concerns in mission-critical applications, where understanding the "why" behind a model's decision is as important as the decision itself. This talk introduces DL-Backtrace, a new technique from AryaXAI designed to explain any deep learning model, from LLMs to traditional computer vision models and beyond. The presentation covers the algorithm, benchmarking results against established techniques such as SHAP, LIME, and Grad-CAM, and its application across model types, including LLMs (Llama 3.2), NLP (BERT), computer vision (ResNet), and tabular data.
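For context on the kind of baselines DL-Backtrace is benchmarked against, the sketch below applies one of them, Grad-CAM, to a ResNet-50 using the Captum library. It is illustrative only and does not show the DL-Backtrace API itself; the chosen layer, the placeholder input, and the target class index are assumptions made for the example.

```python
# Minimal sketch of a baseline attribution method (Grad-CAM via Captum),
# NOT the DL-Backtrace API. Layer choice and target class are illustrative.
import torch
from torchvision import models
from captum.attr import LayerGradCam, LayerAttribution

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Grad-CAM attributes a prediction to the activations of a chosen conv layer;
# the last block of layer4 is a common choice for ResNet-50.
grad_cam = LayerGradCam(model, model.layer4[-1])

x = torch.randn(1, 3, 224, 224)           # placeholder for a preprocessed image batch
attr = grad_cam.attribute(x, target=281)  # 281 = "tabby cat" in the ImageNet labels
heatmap = LayerAttribution.interpolate(attr, (224, 224))  # upsample to input size
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```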