Interpretable & Explainable AI (XAI)

XAI

  1. A curated document about XAI research resources.

  2. Interpretability and Explainability in Machine Learning course / slides. Covers understanding and evaluating explanations, rule-based and prototype-based models, risk scores, generalized additive models, explaining black-box models, visualization, feature importance, actionable explanations, causal models, human-in-the-loop approaches, and the connection with debugging.

  3. Explainable Machine Learning: Understanding the Limits & Pushing the Boundaries - a tutorial by Hima Lakkaraju (tutorial video, YouTube, Twitter).

  4. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, by Cynthia Rudin.

    1. A great talk on the topic by Shir Meir Lador
  5. explainML tutorial

  6. When not to trust explanations :)

  7. From the above image - Paper: Principles and practice of explainable models - a really good review of everything XAI: “a survey to help industry practitioners (but also data scientists more broadly) understand the field of explainable machine learning better and apply the right tools. Our latter sections build a narrative around a putative data scientist, and discuss how she might go about explaining her models by asking the right questions. From an organization viewpoint, after motivating the area broadly, we discuss the main developments, including the principles that allow us to study transparent models vs opaque models, as well as model-specific or model-agnostic post-hoc explainability approaches. We also briefly reflect on deep learning models, and conclude with a discussion about future research directions.”

  8. Book: Interpretable Machine Learning by Christoph Molnar.

  9. (Great) Interpretability overview: transparency (simulatability, decomposability, algorithmic transparency), post-hoc interpretability (text explanations, local visualizations, explanation by example), evaluation, and utility.

  10. Medium: the great debate.

  11. Paper: pitfalls to avoid when interpreting ML models - “A growing number of techniques provide model interpretations, but can lead to wrong conclusions if applied incorrectly. We illustrate pitfalls of ML model interpretation such as bad model generalization, dependent features, feature interactions or unjustified causal interpretations. Our paper addresses ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research.” - Molnar et al.

  12. *** Whitening a black box - very good; covers ELI5, LIME, SHAP, and many other tools.

  13. Book: exploratory model analysis

  14. Alibi-explain - a white-box and black-box ML model explanation library. Alibi is an open-source Python library aimed at machine learning model inspection and interpretation, focused on high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

  15. Hands-on explainable AI - YouTube, git.

  16. Explainability methods are not always consistent and do not always agree with each other; this article gives a sensible explanation and workflow for using SHAP and its many plots.

    Keras-vis for CNNs: three methods - activation maximization, saliency maps, and class activation maps.

  17. The notebook! Blog

  18. More resources!

  19. Visualizing the impact of feature attribution baseline - “Path attribution methods are a gradient-based way of explaining deep models. These methods require choosing a hyperparameter known as the baseline input. What does this hyperparameter mean, and how important is it? In this article, we investigate these questions using image classification networks as a case study. We discuss several different ways to choose a baseline input and the assumptions that are implicit in each baseline. Although we focus here on path attribution methods, our discussion of baselines is closely connected with the concept of missingness in the feature space - a concept that is critical to interpretability research.” (A minimal numeric sketch of the baseline's role appears after this list.)

  20. What-If Tool by Google - notebook, walkthrough.

  21. Language interpretability tool (LIT) - The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

  22. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - “trying to *explain* black box models, rather than creating models that are *interpretable* in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward -- it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.”

  23. Using genetic algorithms

  24. Google’s what-if tool from PAIR

  25. Boruta (Medium) automatically performs feature selection by pitting each real feature against randomized “shadow” copies of itself; only features that consistently beat their shadow counterparts are kept (see the sketch after this list).

  26. InterpretML by Microsoft, git (see the EBM sketch after this list).

  27. Connecting Interpretability and Robustness in Decision Trees through Separation, git

  28. Interpret Transformers - explain transformers with 2 lines of code (see the sketch below).
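
To make the baseline discussion in item 19 concrete, here is a minimal NumPy sketch of integrated gradients, a path attribution method. `model_grad` is a hypothetical stand-in for your framework's gradient call (TensorFlow, PyTorch, JAX), and the baseline choices in the trailing comment mirror the ones the article discusses.

```python
# Minimal sketch of integrated gradients, showing where the baseline enters.
# `model_grad` is a hypothetical callable returning d(output)/d(input).
import numpy as np

def integrated_gradients(x, baseline, model_grad, steps=50):
    # Interpolate on a straight line from the baseline to the actual input.
    alphas = np.linspace(0.0, 1.0, steps + 1)
    grads = np.stack([model_grad(baseline + a * (x - baseline)) for a in alphas])
    # Attribution = (input - baseline) * average gradient along the path;
    # attributions approximately sum to model(x) - model(baseline),
    # which is exactly why the choice of baseline matters.
    return (x - baseline) * grads.mean(axis=0)

# Common baseline choices discussed in the article: an all-zero (black) image,
# a blurred copy of the input, or an expectation over noise baselines, e.g.:
# attributions = integrated_gradients(x, np.zeros_like(x), model_grad)
```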
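A minimal usage sketch of Boruta (item 25), using the `boruta` package's `BorutaPy` with a scikit-learn forest on synthetic data; the dataset and parameters are illustrative only.

```python
# Boruta feature selection: real features vs. their shuffled "shadow" copies.
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

rf = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0)
boruta = BorutaPy(rf, n_estimators="auto", random_state=0)

# Boruta appends shuffled shadow copies of every feature and keeps only the
# real features whose importance repeatedly beats the best shadow feature.
boruta.fit(X, y)  # BorutaPy expects numpy arrays, not DataFrames

print("confirmed features:", np.where(boruta.support_)[0])
print("tentative features:", np.where(boruta.support_weak_)[0])
```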
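A minimal sketch of InterpretML's glass-box Explainable Boosting Machine (item 26); the dataset is illustrative, and `show()` assumes a notebook-style environment.

```python
# InterpretML's EBM: an inherently interpretable, GAM-style glass-box model.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # per-feature shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations
```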
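And a sketch of the “two lines of code” idea from item 28, assuming the `transformers-interpret` package (API per its README) and an off-the-shelf sentiment checkpoint; both are illustrative choices, not prescribed by the linked repo.

```python
# transformers-interpret: token-level attributions for a text classifier.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("Interpretability tooling keeps getting better.")
print(word_attributions)   # list of (token, attribution score) pairs
explainer.visualize()      # renders an HTML attribution view in notebooks
```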

Lime

  1. *** How LIME works behind the scenes (see the sketch after this list).
  2. LIME to interpret models (NLP and image), GitHub - “In the experiments in our research paper, we demonstrate that both machine learning experts and lay users greatly benefit from explanations similar to Figures 5 and 6 and are able to choose which models generalize better, improve models by changing them, and get crucial insights into the models' behavior.”
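
A minimal sketch of how LIME is typically driven in code, using the `lime` package's tabular explainer; the model and dataset are illustrative only.

```python
# LIME on tabular data: a local linear surrogate around a single prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black box on the perturbed samples,
# and fits a locally weighted linear surrogate around that single instance.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=5)
print(exp.as_list())   # top features and their local linear weights
```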

Anchor

  1. Anchor, from the authors of LIME - an anchor explanation is a rule that sufficiently “anchors” the prediction locally, such that changes to the rest of the instance's feature values do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same (see the sketch below).
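
A minimal usage sketch, assuming `alibi`'s `AnchorTabular` implementation (usage follows the alibi docs); the dataset and model are illustrative only.

```python
# Anchor explanations: an if-then rule that locally "holds" the prediction.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(clf.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)   # learns feature quantiles used to build candidate rules

explanation = explainer.explain(data.data[0], threshold=0.95)
print("anchor rule:", explanation.anchor)    # the predicates that anchor the prediction
print("precision:", explanation.precision)   # how often the prediction stays the same under the rule
print("coverage:", explanation.coverage)     # fraction of instances the rule applies to
```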

Shap

  1. Theory (a minimal end-to-end code sketch appears at the end of this section):
    1. How SHAP values are calculated - YouTube.
    2. Cooperative game theory & Shapley values - Medium, YouTube.
    3. Calculating a taxi fare using SHAP.
    4. SHAP explained.
  2. Intro to shap and lime, part 1, part 2
  3. A series on Shap, Lime.
    1. Part I: Explain Your Model with the SHAP Values
    2. Part II: The SHAP with More Elegant Charts
    3. Part III: How Is the Partial Dependent Plot Calculated?
    4. Part IV: An Explanation for eXplainable AI
    5. Part V: Explain Any Models with the SHAP Values — Use the KernelExplainer
    6. Part VI: The SHAP Values with H2O Models
    7. Part VII: Explain Your Model with LIME
    8. Part VIII: Explain Your Model with Microsoft’s InterpretML
  4. Medium Intro to lime and shap
  5. **** In depth SHAP
  6. Github
  7. Country happiness using shap
  8. Stack Overflow example: predicting tags with pandas, Keras, etc.
  9. Intro to Shapley values and SHAP.
  10. Fiddler on shap
  11. Shapash
    1. shapash git - a web app (lime and shap).
    2. making models understandable by everyone - Yann Golhen
    3. Using Shapash for confidence in XAI - Francesco Marini, using 3 new metrics:
      1. Consistency - do different explainability methods give, on average, similar explanations?
      2. Stability - for similar instances, are the explanations similar?
      3. Compacity - do fewer features drive the model?
  12. Partial Shap
    1. Which Of Your Features Are Overfitting? by Samuele Mazzanti - "Discover “ParShap”: an advanced method to detect which columns make your model underperform on new data" implemented in pingouin-stats.
  13. Shap residuals
    1. medium
  14. SHAP advanced
    1. Official SHAP tutorial on its plots - you can never read this too many times.

    2. What are SHAP values, on Kaggle - whatever you do, start with this.

    3. SHAP values on Kaggle #2 - continue with this.

    4. How to calculate SHAP values per class, based on this graph.

  15. A thorough post about the many ways of explaining a model: from regression to Bayes, trees, forests, LIME, BETA, and feature selection/elimination.
  16. Trusting models
  17. Interpret using uncertainty
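
A minimal end-to-end sketch tying the theory above to code, assuming the `shap` package's `TreeExplainer` on a scikit-learn forest; the dataset, model, and plots are illustrative, and plotting details vary slightly across shap versions.

```python
# SHAP TreeExplainer on a tree ensemble: local attributions that sum to the prediction.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one row of attributions per prediction

# Each row sums (approximately) to model(x) minus explainer.expected_value -
# the game-theoretic "fair payout" property discussed in the theory links above.
shap.summary_plot(shap_values, X)        # global view: importance + direction per feature
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0],
                matplotlib=True)         # local view: one prediction's breakdown
```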