AI Challenges and Responsibilities Flashcards
What is explainability in Responsible AI?
The ability to understand and explain an ML model’s behavior and outputs in human terms, without necessarily knowing its internal workings.
What does interpretability mean in Responsible AI?
The degree to which a human can understand the cause of a model’s decision, answering both the ‘why’ and the ‘how’.
What does privacy and security mean in Responsible AI?
Ensuring individuals can control if and when their data is used by models, and protecting that data and the models from unauthorized access.
What is transparency in Responsible AI?
Being open about how AI models work and how decisions are made.
What is veracity and robustness in Responsible AI?
The system remains reliable even in unexpected or adverse conditions.
What is AI governance?
Policies, processes, and tools that ensure AI is developed and used responsibly.
What is the aim of safety in Responsible AI?
To ensure algorithms are safe and beneficial to individuals and society.
What does controllability in Responsible AI refer to?
Having mechanisms to monitor and steer AI system behavior so models stay aligned with human values and intent.
What is Amazon Bedrock Guardrails used for?
Filtering undesirable and harmful content, redacting PII, and applying configurable safeguards to foundation model inputs and outputs.
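A minimal boto3 sketch of applying a guardrail to user input with the ApplyGuardrail API; the guardrail identifier, version, and region are hypothetical, and the guardrail itself (e.g., with a PII filter) must already exist in Amazon Bedrock:

```python
import boto3

# Hypothetical guardrail; assumed to have been created beforehand in Amazon Bedrock.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="abc123example",   # hypothetical identifier
    guardrailVersion="1",
    source="INPUT",                        # evaluate the user's input before the model sees it
    content=[{"text": {"text": "My SSN is 123-45-6789, can you check my loan status?"}}],
)

# 'GUARDRAIL_INTERVENED' means content was blocked or masked; 'NONE' means it passed.
print(response["action"])
```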
What can SageMaker Clarify evaluate?
Accuracy, robustness, toxicity, and bias in foundation models.
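Besides foundation model evaluations, Clarify also detects bias in training data. A minimal SageMaker Python SDK sketch of a pre-training bias report, assuming a hypothetical IAM role, S3 bucket, and column names:

```python
from sagemaker import clarify

# Hypothetical role, S3 paths, and columns; runs a pre-training bias report.
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-output",
    label="approved",
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # the favorable outcome
    facet_name="gender",             # the attribute checked for bias
)

processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```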
How does Data Wrangler help with bias?
Using Augment Data to balance datasets by generating new instances for underrepresented groups.
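Data Wrangler does this through its visual interface; the pandas-only sketch below just illustrates the underlying idea of balancing by random oversampling (it is not the Data Wrangler API):

```python
import pandas as pd

# Toy dataset where class 1 is underrepresented (2 of 10 rows).
df = pd.DataFrame({
    "feature": range(10),
    "label":   [0] * 8 + [1] * 2,
})

majority = df[df["label"] == 0]
minority = df[df["label"] == 1]

# Randomly resample the minority class (with replacement) up to the majority size,
# then shuffle the combined result.
minority_upsampled = minority.sample(n=len(majority), replace=True, random_state=42)
balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=42)

print(balanced["label"].value_counts())
```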
What does SageMaker Model Monitor do?
Continuously monitors models in production for drops in data quality and model quality, bias drift, and feature attribution drift.
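A minimal SageMaker Python SDK sketch of the first Model Monitor step, suggesting a data-quality baseline from the training data; the IAM role and S3 paths are hypothetical:

```python
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Hypothetical role and S3 locations; the baseline's statistics and constraints
# are later compared against live endpoint traffic.
monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitor-baseline",
)
```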
What is Amazon A2I (Augmented AI) used for?
Routes low-confidence ML predictions to human reviewers for verification or correction.
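A minimal boto3 sketch of starting a human review loop when a prediction’s confidence is low; the flow definition ARN (which bundles the workteam and worker UI) is hypothetical and must be created separately:

```python
import json
import boto3

a2i = boto3.client("sagemaker-a2i-runtime", region_name="us-east-1")

a2i.start_human_loop(
    HumanLoopName="loan-review-0001",
    FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/loan-review",
    HumanLoopInput={
        "InputContent": json.dumps({
            "prediction": "deny",
            "confidence": 0.48,   # low confidence, so route to human reviewers
        })
    },
)
```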
What does SageMaker Role Manager help with?
Helps administrators define minimum-permission IAM roles (personas) for ML activities, providing user-level security for model governance.
What is the purpose of Model Cards in AWS?
To document models including use cases, limitations, and metrics.
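A boto3 sketch of registering a model card; the content shown is a hypothetical, heavily trimmed subset of the model card JSON schema:

```python
import json
import boto3

sm = boto3.client("sagemaker")

# Hypothetical card content; the Content field is a JSON document that follows
# the SageMaker model card schema (overview, intended uses, risk rating, etc.).
card_content = {
    "model_overview": {"model_description": "Credit-risk classifier"},
    "intended_uses": {
        "purpose_of_model": "Rank loan applications for manual review",
        "risk_rating": "Medium",
    },
}

sm.create_model_card(
    ModelCardName="credit-risk-model-card",
    ModelCardStatus="Draft",
    Content=json.dumps(card_content),
)
```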
What are AWS AI Service Cards?
Responsible AI documentation for AWS services with use cases, limitations, and design choices.
What is a high-interpretability model example?
A decision tree, whose splits can be read and visualized directly as if/else rules.
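For instance, a shallow scikit-learn decision tree can be dumped as human-readable rules (the iris dataset here is just a stand-in example):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree; the printed rules *are* the model.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction can be traced to a small set of human-readable if/else splits.
print(export_text(tree, feature_names=list(iris.feature_names)))
```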
What is a trade-off in model interpretability and performance?
Higher interpretability often comes at the cost of performance, and vice versa (e.g., a decision tree versus a deep neural network).
What is Human-Centered Design (HCD) in Responsible AI?
Designing AI systems to prioritize human needs.
What does amplified decision making focus on in HCD?
Designing for clarity, simplicity, and usability in high-pressure decisions.
What is unbiased decision making in HCD?
Recognizing and mitigating bias in datasets and decision processes.
What is cognitive apprenticeship in HCD?
AI learns from human experts (e.g., RLHF), and humans learn from AI with personalization.
What is user-centered design in Responsible AI?
Ensuring a wide range of users can access and benefit from AI systems.
What makes it difficult to regulate Generative AI?
Its complexity, black-box nature, and non-deterministic outputs.