3) Responsible Artificial Intelligence Practices (P1) Flashcards
(36 cards)
What is responsible AI?
Responsible AI refers to practices and principles that ensure that AI systems are transparent and trustworthy while mitigating potential risks and negative outcomes.
Responsible AI emphasizes accountability and ethical considerations in AI development and deployment.
What is the number one problem that developers face in AI applications?
Accuracy
Addressing issues of bias and variance is critical to improving accuracy in AI models.
Define bias in the context of AI models.
Bias in a model means that the model misses important relationships in the dataset, producing an overly simplistic representation of the data.
Bias is measured as the difference between the model's expected predictions and the true values.
What does high bias indicate about a model?
When a model has high bias, it is underfitted: it is too simple to capture the variation in the data features.
Underfitting leads to poor performance even on the training data.
Define variance in the context of AI models.
Variance refers to the model’s sensitivity to fluctuations or noise in the training data.
High variance can lead to overfitting, where a model performs well on training data but poorly on new data.
What is overfitting?
Overfitting occurs when a model performs well on training data but fails to generalize to unseen examples.
It happens when the model memorizes the training data rather than learning general patterns.
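The bias/variance trade-off described above can be seen in a small sketch (the cubic toy data and the polynomial degrees are assumptions for illustration, not from the source): a degree-1 fit underfits (high bias), while a degree-15 fit memorizes the training noise (high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a cubic curve (illustrative, not from the source).
x_train = np.linspace(-1, 1, 20)
y_train = x_train**3 + rng.normal(scale=0.05, size=x_train.size)
x_test = np.linspace(-1, 1, 50)
y_test = x_test**3

def fit_and_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# Degree 1: high bias (underfitting) -- poor even on the training data.
underfit_train, underfit_test = fit_and_errors(1)
# Degree 15: high variance (overfitting) -- very low training error.
overfit_train, overfit_test = fit_and_errors(15)
# Degree 3: a reasonable bias/variance trade-off for this data.
good_train, good_test = fit_and_errors(3)
```

Comparing the errors shows the pattern the cards describe: the underfit model has a large training error, while the overfit model drives its training error far below the balanced model's.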
List strategies to overcome bias and variance errors.
- Cross-validation
- Increase training data
- Regularization
- Simpler models
- Dimensionality reduction
- Early stopping
These strategies help improve model robustness and accuracy.
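Two of the strategies above, cross-validation and regularization, can be combined in a short numpy-only sketch (the synthetic data, the k=5 fold count, and the alpha values are assumptions for illustration): k-fold cross-validation is used to compare ridge-regularization strengths by held-out error.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data (illustrative values, not from the source).
X = rng.normal(size=(100, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def cross_val_mse(X, y, alpha, k=5):
    """k-fold cross-validation: average mean squared error on held-out folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errors = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False                      # hold this fold out for validation
        w = ridge_fit(X[mask], y[mask], alpha)
        preds = X[fold] @ w
        errors.append(np.mean((preds - y[fold]) ** 2))
    return float(np.mean(errors))

# Compare regularization strengths via cross-validated error.
scores = {alpha: cross_val_mse(X, y, alpha) for alpha in (0.01, 1.0, 100.0)}
```

On this low-noise data, a very large alpha over-regularizes and its cross-validated error rises, which is exactly the signal one would use to pick the regularization strength.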
What are some challenges of generative AI?
- Toxicity
- Hallucinations
- Intellectual property
- Plagiarism and cheating
- Disruption of the nature of work
These challenges highlight ethical concerns and the impact of generative AI on society.
What are the core dimensions of responsible AI?
- Fairness
- Explainability
- Privacy and security
- Veracity and robustness
- Governance
- Transparency
- Safety
- Controllability
Each dimension addresses specific ethical and operational aspects of AI systems.
Define fairness in AI systems.
Fairness in AI systems promotes inclusion, prevents discrimination, upholds responsible values and legal norms, and builds trust with society.
Fairness is essential for the ethical deployment of AI technologies.
What does explainability in AI refer to?
Explainability refers to the ability of an AI system to clearly explain or provide justification for its internal mechanisms and decisions.
This is crucial for user trust and understanding.
What is meant by privacy and security in responsible AI?
Privacy and security ensure that users can trust their data is not compromised or used without authorization.
Protecting user data is a fundamental ethical requirement.
What do veracity and robustness in AI involve?
Veracity and robustness refer to mechanisms that ensure an AI system operates reliably, even in unexpected situations and in the face of uncertainty and errors.
These qualities contribute to the reliability of AI systems.
What is governance in the context of responsible AI?
Governance is a set of processes used to define, implement, and enforce responsible AI practices within an organization.
Effective governance is essential for accountability in AI.
Define transparency in responsible AI.
Transparency gives individuals, organizations, and stakeholders the information they need to assess the fairness, robustness, and explainability of AI systems.
Transparency is crucial for building trust in AI technologies.
What does safety in responsible AI refer to?
Safety refers to the development of algorithms, models, and systems that are responsible, safe, and beneficial for individuals and society.
Ensuring safety is a key aspect of ethical AI development.
What is controllability in responsible AI?
Controllability refers to the ability to monitor and guide an AI system’s behavior to align with human values and intent.
This ensures that AI systems act in ways that are consistent with societal norms.
List business benefits of responsible AI.
- Increased trust and reputation
- Regulatory compliance
- Risk mitigation
- Competitive advantage
- Improved decision making
- Improved products and business outcomes
Implementing responsible AI can lead to significant strategic advantages for organizations.
Model evaluation on Amazon Bedrock
With model evaluation on Amazon Bedrock, you can evaluate, compare, and select the best foundation model for your use case. It offers a choice of automatic evaluation and human evaluation.
Model evaluation on SageMaker Clarify
You can automatically evaluate FMs for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative.
Safeguards for generative AI
With Amazon Bedrock Guardrails, you can implement safeguards for your generative AI applications based on your use cases and responsible AI policies.
Bias detection: SageMaker Clarify
SageMaker Clarify helps identify potential bias in machine learning models and datasets without the need for extensive coding.
You specify input features, such as gender or age, and SageMaker Clarify runs an analysis job to detect potential bias in those features.
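The kind of pretraining analysis described above can be illustrated with two simple metrics that SageMaker Clarify reports, class imbalance (CI) and difference in positive proportions in labels (DPL), computed here by hand on made-up records (the facet, labels, and all counts are assumptions for this sketch, not Clarify's implementation):

```python
import numpy as np

# Illustrative records: a binary facet (e.g., gender) and a binary label
# (e.g., loan approved). All values are invented for this sketch.
facet = np.array([0] * 70 + [1] * 30)   # 0 = advantaged, 1 = disadvantaged group
label = np.array([1] * 45 + [0] * 25    # outcomes for the facet-0 group
                 + [1] * 9 + [0] * 21)  # outcomes for the facet-1 group

n_a = np.sum(facet == 0)
n_d = np.sum(facet == 1)

# Class Imbalance (CI): is one group under-represented in the dataset?
ci = (n_a - n_d) / (n_a + n_d)

# Difference in Positive Proportions in Labels (DPL): does one group
# receive positive outcomes more often than the other?
q_a = np.mean(label[facet == 0])
q_d = np.mean(label[facet == 1])
dpl = q_a - q_d
```

Values near zero suggest a balanced dataset on that facet; here CI = 0.4 and DPL is positive, flagging both under-representation and an outcome gap worth investigating.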
Bias detection: SageMaker Data Wrangler
SageMaker Data Wrangler can balance your data when class imbalances exist.
Offers three balancing operators: random undersampling, random oversampling, and Synthetic Minority Oversampling Technique (SMOTE).
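Two of the balancing ideas above can be sketched in plain numpy (the toy dataset is an assumption; this is a simplified illustration of the concepts, not Data Wrangler's implementation — in particular, real SMOTE interpolates between a point and its k nearest minority neighbors, while this sketch interpolates between random minority pairs):

```python
import numpy as np

rng = np.random.default_rng(7)

# Imbalanced toy dataset: 50 majority vs 5 minority samples (illustrative).
majority = rng.normal(loc=0.0, size=(50, 2))
minority = rng.normal(loc=3.0, size=(5, 2))

# Random oversampling: duplicate minority rows (with replacement) until balanced.
idx = rng.integers(0, len(minority), size=len(majority))
oversampled_minority = minority[idx]

def smote_like(samples, n_new, rng):
    """Simplified SMOTE-style synthesis: interpolate between two random
    minority points instead of duplicating existing ones."""
    new_points = []
    for _ in range(n_new):
        i, j = rng.choice(len(samples), size=2, replace=False)
        t = rng.random()  # interpolation factor in [0, 1]
        new_points.append(samples[i] + t * (samples[j] - samples[i]))
    return np.array(new_points)

# Synthesize 45 new minority points to reach a 50/50 balance.
synthetic = smote_like(minority, n_new=45, rng=rng)
balanced_minority = np.vstack([minority, synthetic])
```

Random oversampling only repeats existing points, while the SMOTE-style approach creates new points along segments between minority samples, which can reduce the overfitting that exact duplicates encourage.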