ML part 6 Flashcards
(20 cards)
What is model interpretability?
The ability to understand and explain how a model makes predictions.
What is SHAP?
A method for explaining model predictions using Shapley values from game theory.
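For small models, the Shapley values behind SHAP can be computed exactly by enumerating feature coalitions. A minimal sketch (hypothetical helper, not the real `shap` library, which uses efficient approximations; "absent" features are set to a fixed baseline):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at instance x.

    A feature's value is the weighted average of its marginal
    contribution over every coalition of the other features.
    """
    n = len(x)

    def v(S):
        # Features in coalition S take their value from x; the rest from baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# For a linear model, Shapley values recover each feature's additive contribution.
model = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(model, [1, 1], [0, 0]))  # → [2.0, 3.0]
```

Exact enumeration is exponential in the number of features, which is why SHAP relies on sampling and model-specific shortcuts in practice.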
What is feature importance?
A score that reflects how useful or valuable each feature was in building the model.
What is permutation importance?
A technique that measures the increase in prediction error when a feature's values are randomly shuffled, breaking that feature's relationship with the target.
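The shuffle-and-remeasure idea can be sketched in a few lines of NumPy (a hypothetical minimal version; scikit-learn ships a fuller implementation as `sklearn.inspection.permutation_importance`):

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Increase in mean-squared error when each column is shuffled."""
    rng = np.random.default_rng(seed)
    base_err = np.mean((model(X) - y) ** 2)
    importances = []
    for col in range(X.shape[1]):
        errs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, col])  # break this column's link to y
            errs.append(np.mean((model(Xp) - y) ** 2))
        importances.append(np.mean(errs) - base_err)
    return importances

# Toy model that only uses column 0: shuffling column 1 changes nothing.
X = np.arange(20, dtype=float).reshape(10, 2)
y = 2 * X[:, 0]
model = lambda A: 2 * A[:, 0]
imps = permutation_importance(model, X, y)
print(imps)  # importance of column 0 is positive, column 1 is 0.0
```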
Why is interpretability important?
To build trust, diagnose issues, and comply with regulations.
What is model deployment?
The process of integrating a trained model into a production environment.
What is model inference?
The process of making predictions using a trained model on new data.
What is model versioning?
Tracking and managing different versions of trained models.
What is model monitoring?
Checking a model’s performance and behavior in production over time.
What is concept drift?
When the statistical relationship between inputs and the target changes over time, degrading the performance of a model trained on older data.
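A crude drift monitor compares the model's recent error rate against its pre-deployment baseline. A hypothetical sketch with a simulated error stream (production systems use dedicated tests such as DDM or ADWIN):

```python
import numpy as np

def rolling_error(errors, window=50):
    """Mean error over the most recent window; a sustained rise above
    the training-time baseline is a simple concept-drift signal."""
    return float(np.mean(errors[-window:]))

rng = np.random.default_rng(0)
before = (rng.random(100) < 0.05).astype(float)  # ~5% error rate pre-drift
after = (rng.random(100) < 0.40).astype(float)   # ~40% error rate post-drift
stream = np.concatenate([before, after])

print(rolling_error(before), rolling_error(stream))  # error climbs after drift
```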
What is a machine learning pipeline?
A set of automated steps for preprocessing, training, and evaluation.
What is a transformer in a pipeline?
An object that transforms data, like a scaler or encoder.
What is a pipeline object in scikit-learn?
A tool to chain preprocessing and modeling steps together.
Why use pipelines?
To ensure consistency, reproducibility, and cleaner code.
What does ‘fit_transform()’ do?
Fits a transformer to data and then transforms that same data in one step, equivalent to calling fit() followed by transform().
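A small example tying the pipeline cards together (assuming scikit-learn is installed; dataset and step names are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Chain a transformer (scaler) and an estimator: fit() runs
# fit_transform() on the scaler, then fit() on the classifier.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X_train, y_train)
print(round(pipe.score(X_test, y_test), 2))
```

Because the scaler is fit only inside pipe.fit() on the training split, the pipeline also prevents test data from leaking into preprocessing.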
What is fairness in ML?
Ensuring model decisions do not systematically disadvantage any group.
What is algorithmic bias?
Bias arising from the design or training data of a model.
What is transparency in machine learning?
Making the model’s behavior and decisions understandable.
Why is accountability important in ML?
So model builders are responsible for model impact and misuse.
What is the trade-off between fairness and accuracy?
Enforcing fairness constraints can reduce raw accuracy, while optimizing only for accuracy can worsen fairness across groups.