9. Model Explainability on Vertex AI Flashcards

1
Q

What is explainability?

A

Explainability is the extent to which you can explain the internal mechanics of an ML system in human terms.

2
Q

What are the two types of explainability?

A

Global: make the overall ML model transparent and comprehensible.
Local: explain the model’s individual predictions.

3
Q

Why is explainability important?

A

It makes customers comfortable with model predictions, and it also helps with debugging and improving the model.

4
Q

What are interpretability and explainability?

A

Interpretability: the degree to which a machine learning model can associate a cause with an effect.
Explainability: the extent to which the hidden parameters of a deep neural network (DNN) can justify the results.

5
Q

What is feature importance?

A

It indicates how valuable each feature is relative to the other features in the model.

6
Q

What are the uses of feature importance?

A

Variable (feature) selection and checking for data leakage.

7
Q

What can Vertex Explainable AI do?

A

It integrates feature attributions into Vertex AI and helps you understand a model’s outputs.
It tells you how much each feature in the data contributed to the predicted result.
It can be used to identify bias and to understand how to improve the model.

8
Q

What are the models supported by Vertex Explainable AI?

A

AutoML image models (classification)
AutoML tabular models (classification and regression)
Custom-trained TensorFlow models (tabular and image)

9
Q

What is feature attribution?

A

Feature attributions indicate how much each feature in your model contributed to the predictions for each given instance.

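A toy illustration (not part of the original card) of the additive property of Shapley-style attributions: the per-feature attributions approximately sum to the difference between the instance’s prediction and a baseline prediction. All numbers below are invented.

```python
# Toy illustration of additive feature attributions; all numbers are invented.
baseline_prediction = 0.20                    # model output on the baseline input
attributions = {"age": 0.15, "income": 0.30, "tenure": -0.05}

# The attributions (approximately) account for the move away from the baseline.
prediction = baseline_prediction + sum(attributions.values())
print(prediction)  # 0.60
```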
10
Q

What are the three methods offered by Vertex Explainable AI?

A

Sampled Shapley: tabular classification and regression; non-differentiable models (e.g., ensembles of trees, or neural networks with decoding and rounding operations); AutoML and custom-trained models (any container).
Integrated gradients: tabular classification and regression, and image classification; differentiable models (neural networks); AutoML and custom-trained TensorFlow models (prebuilt container).
XRAI: image classification; AutoML and custom-trained TensorFlow models (prebuilt container).

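A minimal sketch, assuming the google-cloud-aiplatform Python SDK, of how each method maps to an explanation-parameters payload; the parameter values are illustrative.

```python
# Illustrative sketch (google-cloud-aiplatform SDK assumed): selecting each of
# the three attribution methods via an ExplanationParameters payload.
from google.cloud import aiplatform

# Sampled Shapley: suits non-differentiable models such as tree ensembles.
sampled_shapley = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)

# Integrated gradients: requires a differentiable model (e.g., a neural network).
integrated_gradients = aiplatform.explain.ExplanationParameters(
    {"integrated_gradients_attribution": {"step_count": 50}}
)

# XRAI: image classification models.
xrai = aiplatform.explain.ExplanationParameters(
    {"xrai_attribution": {"step_count": 50}}
)
```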
11
Q

What are differentiable and non-differentiable models?

A

Differentiable models: you can calculate the derivative of all the operations in your TensorFlow graph.
Non-differentiable models: include non-differentiable operations in the TensorFlow graph, e.g., decoding and rounding operations.

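A tiny sketch, assuming TensorFlow 2.x, of why an operation such as rounding makes a graph non-differentiable for gradient-based attribution methods:

```python
# TensorFlow 2.x sketch: a rounding op yields no gradient, which is what makes
# a model "non-differentiable" for gradient-based attribution methods.
import tensorflow as tf

x = tf.Variable(2.5)
with tf.GradientTape(persistent=True) as tape:
    smooth = tf.square(x)   # differentiable: d(x^2)/dx = 2x
    rounded = tf.round(x)   # rounding: no useful gradient

print(tape.gradient(smooth, x))   # tf.Tensor(5.0, shape=(), dtype=float32)
print(tape.gradient(rounded, x))  # None: the graph is not differentiable here
```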
12
Q

What is Vertex AI Example-Based explanation for?

A

It is used for misclassification analysis and can enable active learning so that data can be selectively labeled.

13
Q

What are data bias and fairness?

A

Data bias: arises when certain parts of the data are not collected.
Fairness: ensuring that biases in the data do not lead to treating individuals unfavourably.

14
Q

How to detect bias and fairness?

A

Explainable AI feature attributions.
The What-If Tool: feature-overview functionality through an interactive dashboard (a brief usage sketch follows below).
The Language Interpretability Tool (LIT) for NLP models.

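A hedged sketch of the What-If Tool item above, assuming the witwidget package inside a Jupyter or Workbench notebook; the toy examples and the prediction function are invented placeholders, not a real model.

```python
# Hedged sketch (witwidget package in a notebook assumed); the toy examples and
# the predict function below are invented placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    return tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

examples = [make_example(25.0, 40000.0, 0), make_example(52.0, 90000.0, 1)]

def predict_fn(batch):
    # Placeholder scores; a real setup would call the model being audited.
    return [[0.3, 0.7] for _ in batch]

config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["negative", "positive"]))
WitWidget(config, height=600)  # renders the interactive dashboard in the notebook
```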
15
Q

What are the two concepts of ML solution readiness?

A

Responsible AI
Model governance

16
Q

What are the responsible AI tools?

A

Explainable AI
Model cards: describe what a model does, its intended audience, and who maintains it.
The TensorFlow open-source toolkit: provides model transparency.

17
Q

What is model governance?

A

It provides guidelines and processes that help employees implement the company’s AI principles.

18
Q

What are the ways to achieve model governance?

A

Have humans review model outputs.
Keep a responsibility assignment matrix for each model, by task.
Maintain model cards to track model versioning and data lineage.
Evaluate the model on benchmark datasets and validate it against fairness indicators.
Use the What-If Tool to understand the importance of different data features.

19
Q

How to set up explanations in Vertex AI?

A

Explanations must be configured for custom-trained models.
No configuration is needed for AutoML tabular regression and classification models.

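A minimal sketch, assuming the google-cloud-aiplatform Python SDK, of configuring explanations while uploading a custom-trained tabular model; the project, bucket, container image, and tensor names are illustrative placeholders.

```python
# Minimal sketch (google-cloud-aiplatform SDK assumed; project, bucket, image,
# and tensor names are illustrative) of uploading a model with explanations.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)
metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"features": {"input_tensor_name": "dense_input"}},
    outputs={"prediction": {"output_tensor_name": "dense_2"}},
)

model = aiplatform.Model.upload(
    display_name="tabular-model-with-explanations",
    artifact_uri="gs://my-bucket/model/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)
```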
20
Q

What are the explanations available?

A

Online: synchronous requests to the Vertex AI API.
Batch: asynchronous requests to the Vertex AI API.
Local kernel: use a user-managed Vertex AI Workbench notebook to get explanations without deploying the model.

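A short sketch of the online and batch options, again assuming the google-cloud-aiplatform Python SDK; the resource IDs, feature names, and Cloud Storage paths are invented.

```python
# Sketch of online and batch explanations (google-cloud-aiplatform assumed;
# resource IDs, feature names, and Cloud Storage paths are invented).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Online: synchronous explanation from a deployed endpoint.
endpoint = aiplatform.Endpoint("projects/123/locations/us-central1/endpoints/456")
response = endpoint.explain(instances=[{"age": 39.0, "income": 52000.0}])
print(response.explanations[0].attributions)

# Batch: asynchronous job that writes predictions together with explanations.
model = aiplatform.Model("projects/123/locations/us-central1/models/789")
job = model.batch_predict(
    job_display_name="batch-with-explanations",
    gcs_source="gs://my-bucket/instances.jsonl",
    gcs_destination_prefix="gs://my-bucket/output/",
    generate_explanation=True,
)
```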
21
Q

How do you get explanations when you use TensorFlow?

A

Use the Explainable AI SDK’s save_model_with_metadata() to infer your model’s inputs and outputs and save this explanation metadata with your model.
Load the model into the Explainable AI SDK using load_model_from_local_path().
Call explain() with data instances and visualize the feature attributions.
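A sketch of this flow, assuming the explainable_ai_sdk package and a TensorFlow 2.x SavedModel; the model path, Shapley path count, and feature names are illustrative.

```python
# Sketch of the Explainable AI SDK flow (TF 2.x SavedModel assumed; the path,
# Shapley path count, and feature names are illustrative).
import explainable_ai_sdk
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder

model_path = "saved_model/"

# 1. Infer the model's inputs/outputs and save explanation metadata with it.
builder = SavedModelMetadataBuilder(model_path)
builder.save_model_with_metadata(model_path)

# 2. Load the model together with an attribution method configuration.
model = explainable_ai_sdk.load_model_from_local_path(
    model_path, explainable_ai_sdk.SampledShapleyConfig(10)
)

# 3. Explain data instances and visualize the resulting feature attributions.
attributions = model.explain([{"age": 39.0, "income": 52000.0}])
attributions[0].visualize_attributions()
```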