SHAP values-Kaggle Flashcards

1
Q

What do SHAP values show?

A
  • SHAP values (an acronym from SHapley Additive exPlanations) break a prediction down into the impact of each feature.
  • A SHAP value shows how much a given feature changed the prediction, compared with the prediction we would have made at some baseline value of that feature.

Example use case: a model says a bank shouldn’t loan someone money, and the bank is legally required to explain the basis for each loan rejection.
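A minimal sketch of the idea (plain Python, not the shap library; the model, weights, and feature names are hypothetical). For a linear model, each feature's SHAP value is simply its weight times how far the feature sits from its baseline value, so the per-feature breakdown can be computed directly:

```python
# Hypothetical linear loan-scoring model: weights and baseline values
# are made up for illustration.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
baseline = {"income": 40, "debt": 10, "years_employed": 5}
applicant = {"income": 30, "debt": 25, "years_employed": 2}

def score(x):
    # Linear model: weighted sum of the features.
    return sum(weights[f] * x[f] for f in weights)

# For a linear model, the SHAP value of feature f is
# weight[f] * (value[f] - baseline[f]).
shap_values = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

for f, v in shap_values.items():
    print(f"{f}: {v:+.1f}")
```

Here `debt` contributes -12.0 to the score, which is the kind of per-feature impact a bank could cite when explaining a rejection.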

2
Q

sum(SHAP values for all features) = …

A

prediction for the instance - prediction for baseline values
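This additivity property can be checked numerically. The sketch below (plain Python, with a made-up three-feature model that includes an interaction term) computes exact Shapley values by brute-force enumeration of feature subsets and verifies that they sum to the instance prediction minus the baseline prediction:

```python
from itertools import combinations
from math import factorial

def f(x):
    # Toy model with an interaction term (made up for illustration).
    return x[0] * x[1] + 2 * x[2]

x = [3.0, 2.0, 1.0]         # instance to explain
baseline = [1.0, 1.0, 0.0]  # baseline feature values
n = len(x)

def v(S):
    # Model output with features in S taken from the instance,
    # all other features held at their baseline values.
    return f([x[i] if i in S else baseline[i] for i in range(n)])

def shapley(i):
    # Exact Shapley value: weighted average of feature i's marginal
    # contribution over all subsets of the other features.
    others = [j for j in range(n) if j != i]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (v(set(S) | {i}) - v(set(S)))
    return total

phi = [shapley(i) for i in range(n)]
print(sum(phi), f(x) - f(baseline))  # the two numbers match
```

The subset enumeration is exponential in the number of features, which is why practical explainers rely on model-specific shortcuts or approximations.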

3
Q

What do different types of SHAP Explainers mean e.g. DeepExplainer, KernelExplainer, TreeExplainer?

A

shap.TreeExplainer works with tree-based models (e.g. XGBoost, LightGBM, scikit-learn tree ensembles) and computes exact SHAP values quickly.
shap.DeepExplainer works with Deep Learning models.
shap.KernelExplainer works with all models, though it is slower than the other Explainers and it offers an approximation rather than exact SHAP values.
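The exact-versus-approximate distinction can be illustrated without the shap library. Below, `exact` enumerates all feature subsets (the kind of exact answer TreeExplainer delivers efficiently for tree models), while `approx` estimates the same quantity by sampling random feature orderings. The real KernelExplainer fits a weighted linear regression rather than sampling permutations, but this sketch (toy model and values made up) shows the same trade-off: a model-agnostic estimate that converges to the exact value only with enough samples:

```python
import random
from itertools import combinations
from math import factorial

def f(x):
    # Toy model (made up for illustration).
    return x[0] * x[1] + 2 * x[2]

x = [3.0, 2.0, 1.0]
baseline = [1.0, 1.0, 0.0]
n = len(x)

def v(S):
    return f([x[i] if i in S else baseline[i] for i in range(n)])

def exact(i):
    # Exact Shapley value via full subset enumeration -- exponential in n.
    others = [j for j in range(n) if j != i]
    return sum(
        factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
        * (v(set(S) | {i}) - v(set(S)))
        for r in range(n) for S in combinations(others, r)
    )

def approx(i, n_samples=5000, seed=0):
    # Model-agnostic approximation: average feature i's marginal
    # contribution over random feature orderings.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        before = set(order[:order.index(i)])
        total += v(before | {i}) - v(before)
    return total / n_samples

print(exact(0), approx(0))  # the estimate is close to, not equal to, the exact value
```

This is also why KernelExplainer is slower: each estimate requires many model evaluations, whereas a tree-specific algorithm can exploit the model's structure.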

4
Q

Calculating SHAP values can be slow. That isn’t a problem here, because this dataset is small, but you’ll want to be careful when producing these plots for reasonably sized datasets. The exception is when using an ____ model, for which SHAP has some optimizations and which is thus much faster.

A

xgboost
