CAIC 1 CH-5-8 part 1 Flashcards
(33 cards)
What is the purpose of LLM observability?
LLM observability helps in detecting and tracking biased responses, enabling fine-tuning to ensure fair and unbiased content generation, fostering end-user trust.
What is hallucination in the context of LLMs?
Hallucination is a known challenge in which an AI model generates plausible-sounding but inaccurate or fabricated responses.
How can continuous tracking of model output improve accuracy?
Continuous tracking allows generated content to be scored, so that prompts can be refined through prompt engineering, leading to improved accuracy.
What does the observability layer calculate?
The observability layer calculates key performance metrics to assess cluster performance and optimize LLM performance and response times.
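As a concrete illustration, the kind of performance metric an observability layer might compute is a latency percentile over recorded response times. This is a minimal sketch; the metric choice and sample values are illustrative assumptions.

```python
# A minimal sketch of computing a p95 latency metric from recorded
# LLM response times (values in milliseconds are illustrative).
def percentile(values, p):
    """Nearest-rank percentile of a list of numbers."""
    s = sorted(values)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

latencies_ms = [120, 95, 310, 150, 980, 140, 135]
print(percentile(latencies_ms, 95))  # 980
```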
What is model drift?
Model drift is the decline in LLM performance over time, caused by usage patterns and shifts in the input data distribution.
How can organizations detect model drift?
Organizations can detect model drift by consistently tracking performance metrics, output quality, and user input.
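One common way to track input data for drift is to compare the distribution of recent production inputs against a training-time baseline. The sketch below uses the Population Stability Index (PSI); the bucket edges and the 0.2 alert threshold are illustrative rules of thumb, not values from the source.

```python
# A minimal sketch of drift detection: compare a feature's baseline
# distribution to recent production values with the Population
# Stability Index (PSI). Bucket edges and threshold are illustrative.
import math

def psi(baseline, recent, edges):
    """Population Stability Index between two samples over fixed buckets."""
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bucket index
        # Smooth empty buckets so the log stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    p = proportions(baseline)
    q = proportions(recent)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6]
shifted  = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 1.0, 1.1]
score = psi(baseline, shifted, edges=[0.25, 0.5, 0.75])
if score > 0.2:  # common rule of thumb: PSI > 0.2 signals major drift
    print(f"drift detected (PSI={score:.2f})")
```

Identical distributions yield a PSI of zero, so the same check doubles as a regression test for the metric itself.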
What are important considerations when selecting a language model?
Considerations include hosting infrastructure, data privacy, use cases, and risk tolerance.
What are the options for hosting language models?
Options include fully closed models within isolated infrastructure or partnering with enterprise-grade specialized LLM providers.
What is the significance of prompt design?
Well-defined prompts help the model generate focused and relevant responses.
What is zero-shot learning?
Zero-shot learning is when a model performs a task without having seen any training examples, relying on prior knowledge.
What is few-shot learning?
Few-shot learning involves providing the model with a small number of examples to generalize from for new tasks.
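The contrast between the two cards above can be made concrete with prompt construction. This is a minimal sketch; the sentiment task and example reviews are illustrative assumptions, not from the source.

```python
# A minimal sketch contrasting zero-shot and few-shot prompts for an
# illustrative sentiment-classification task.
def zero_shot_prompt(text):
    # No examples: the model relies entirely on prior knowledge.
    return ("Classify the sentiment of this review as positive or negative.\n"
            f"Review: {text}\nSentiment:")

def few_shot_prompt(text, examples):
    # A handful of labeled examples for the model to generalize from.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return ("Classify the sentiment of each review as positive or negative.\n"
            f"{shots}\nReview: {text}\nSentiment:")

examples = [("Loved every minute of it.", "positive"),
            ("A total waste of time.", "negative")]
print(few_shot_prompt("The plot dragged, but the acting was superb.", examples))
```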
What should be avoided when designing prompts for ChatGPT?
Avoid providing too much information, as it could reduce the accuracy of the response.
What is the role of the Moderation API in ChatGPT?
The Moderation API prevents ChatGPT from engaging in unsafe conversations by classifying content against various harmful categories.
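The classify-then-block pattern can be sketched locally. The code below is a toy stand-in, not the real API: the category names and keyword lists are illustrative assumptions, and a production system would call the provider's moderation endpoint instead.

```python
# A toy stand-in for a moderation layer: classify user input against a
# few harm categories before it ever reaches the model. Category names
# and keywords are illustrative, not the real API's taxonomy.
HARM_CATEGORIES = {
    "violence": {"attack", "hurt", "kill"},
    "self-harm": {"self-harm", "suicide"},
}

def moderate(text):
    """Return the set of flagged categories for a piece of text."""
    words = set(text.lower().split())
    return {cat for cat, kws in HARM_CATEGORIES.items() if words & kws}

def safe_chat(user_input):
    flagged = moderate(user_input)
    if flagged:
        return f"Request blocked (flagged: {', '.join(sorted(flagged))})"
    return "forward to the model"
```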
What is the importance of ethical principles in AI?
Ensuring that AI outputs are in line with ethical principles helps avoid bias and promotes responsible AI usage.
What are the key steps in an ML lifecycle?
Key steps include identification and verification of ML techniques, system architecture design, and ML platform automation technical design.
What is the purpose of model validation?
Model validation assesses how the model performs on unseen data and determines appropriate metrics for different ML problems.
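As a small illustration of "appropriate metrics for different ML problems," the sketch below computes accuracy for a classification task and mean absolute error for a regression task on held-out data; the data values are illustrative.

```python
# A minimal sketch of model validation on unseen (held-out) data, using
# a different metric per problem type: accuracy for classification,
# mean absolute error for regression. Data values are illustrative.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Held-out labels vs. a model's predictions (never seen during training).
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))         # 0.75
print(mean_absolute_error([2.0, 3.0], [2.5, 2.0]))  # 0.75
```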
What is MLOps?
MLOps refers to the multi-step machine learning workflow that needs to be automated, including data processing, model training, model validation, and model deployment.
What is required for ML platforms regarding model hosting?
ML platforms need to provide the technical capability to host and serve the model for prediction generation in real-time mode, batch mode, or both.
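The two serving modes can be contrasted in a few lines. This is a minimal sketch under stated assumptions: the "model" is a stand-in function, and in practice the real-time path would sit behind an HTTP endpoint while the batch path would run on a schedule.

```python
# A minimal sketch contrasting real-time and batch serving modes.
# The model is an illustrative stand-in for a trained predictor.
def model(features):
    return sum(features)

def predict_realtime(features):
    # Invoked once per request, e.g. behind an HTTP endpoint.
    return model(features)

def predict_batch(dataset):
    # Invoked on a schedule over an entire stored dataset.
    return [model(f) for f in dataset]

print(predict_realtime([1, 2]))         # 3
print(predict_batch([[1, 2], [3, 4]]))  # [3, 7]
```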
How should trained ML models be managed?
Trained ML models need to be managed and tracked for easy access and lookup, with relevant metadata.
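A model registry is the usual mechanism for this. The sketch below is a toy in-memory version; the model name, version, and metadata fields are illustrative assumptions.

```python
# A toy model registry: trained models tracked by (name, version) with
# metadata for easy lookup. Field names and values are illustrative.
registry = {}

def register_model(name, version, metadata):
    registry[(name, version)] = metadata

register_model("churn-classifier", "1.2.0",
               {"trained_on": "2024-01-15", "accuracy": 0.91,
                "owner": "ml-team"})
print(registry[("churn-classifier", "1.2.0")]["accuracy"])  # 0.91
```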
What features need to be managed for ML model training and serving?
Common and reusable features need to be managed and served for model training and model serving purposes.
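This is the role a feature store plays. The sketch below is a minimal in-memory version; the feature names and transformations are illustrative. The key property is that training and serving share one code path, which avoids training/serving skew.

```python
# A minimal in-memory feature store: reusable features registered once,
# then served identically for training and online serving. Feature
# names and transformations are illustrative.
class FeatureStore:
    def __init__(self):
        self._features = {}  # feature name -> computation function

    def register(self, name, fn):
        self._features[name] = fn

    def get_features(self, names, record):
        # The same code path serves offline training sets and online
        # requests, avoiding training/serving skew.
        return {n: self._features[n](record) for n in names}

store = FeatureStore()
store.register("age_bucket", lambda r: r["age"] // 10)
store.register("is_weekend", lambda r: r["day"] in ("sat", "sun"))

row = {"age": 34, "day": "sun"}
print(store.get_features(["age_bucket", "is_weekend"], row))
# {'age_bucket': 3, 'is_weekend': True}
```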
What are key components of workflow automation in ML?
Key components include the ability to create different automation pipelines for various tasks, such as model training and model hosting.
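The idea of composing different pipelines from reusable steps can be sketched as follows; the step names and bodies are illustrative stand-ins, not a real orchestrator.

```python
# A minimal sketch of workflow automation: different pipelines composed
# from the same reusable steps, run in order. Step bodies are
# illustrative stand-ins that pass a shared context dict along.
def process_data(ctx):   ctx["rows"] = [1, 2, 3]; return ctx
def train_model(ctx):    ctx["model"] = sum(ctx["rows"]); return ctx
def validate_model(ctx): ctx["valid"] = ctx["model"] > 0; return ctx
def deploy_model(ctx):   ctx["deployed"] = ctx["valid"]; return ctx

def run_pipeline(steps, ctx=None):
    ctx = ctx or {}
    for step in steps:
        ctx = step(ctx)
    return ctx

# Different automation pipelines for different tasks, from shared steps.
training_pipeline = [process_data, train_model, validate_model]
release_pipeline  = [process_data, train_model, validate_model, deploy_model]
print(run_pipeline(release_pipeline)["deployed"])  # True
```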
What security mechanisms should an ML platform provide?
The ML platform needs to provide authentication and authorization mechanisms to manage access to the platform and different resources and services.
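The authorization half of this can be sketched as a role-to-permission lookup; the role names and resource names below are illustrative assumptions.

```python
# A minimal sketch of role-based authorization gating access to ML
# platform resources. Role and resource names are illustrative.
PERMISSIONS = {
    "data-scientist": {"notebooks", "training-jobs"},
    "ml-engineer":    {"notebooks", "training-jobs", "deployments"},
}

def authorize(role, resource):
    """Return True if the role is permitted to access the resource."""
    return resource in PERMISSIONS.get(role, set())

print(authorize("data-scientist", "deployments"))  # False
print(authorize("ml-engineer", "deployments"))     # True
```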
What network security controls should an ML platform be configured for?
The ML platform should be configured for network security controls such as a firewall and an IP address access allowlist to prevent unauthorized access.
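An IP allowlist check, one of the controls the card names, can be sketched with the standard library; the CIDR ranges below are illustrative.

```python
# A minimal sketch of an IP allowlist check using only the standard
# library. The CIDR ranges are illustrative private networks.
import ipaddress

ALLOWLIST = [ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("192.168.1.0/24")]

def is_allowed(client_ip):
    """Return True if the client IP falls inside an allowlisted range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWLIST)

print(is_allowed("10.1.2.3"))     # True
print(is_allowed("203.0.113.9"))  # False
```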