L1 Flashcards
Overfitting - fix
Hyperparameter tuning
Get started with LLMs and GenAI
Amazon Bedrock
Amazon SageMaker JumpStart
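
A quick way to get hands-on is to deploy a pre-trained JumpStart model with the SageMaker Python SDK. A minimal sketch, assuming the SDK is installed and you have a SageMaker execution role; the model ID is one example JumpStart model and the role ARN is a placeholder:

from sagemaker.jumpstart.model import JumpStartModel

# Minimal sketch: deploy a pre-trained JumpStart model to a real-time endpoint.
# The model ID is only an example and the role ARN is a placeholder.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16", role=role)
predictor = model.deploy()

response = predictor.predict({"inputs": "Explain what a foundation model is."})
print(response)
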
Prevent confidential information in GenAI responses
Amazon Bedrock Guardrails
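
Guardrails are created once and then referenced at inference time. A minimal boto3 sketch using the Bedrock Converse API, assuming a guardrail already exists; the guardrail ID, version, and model ID are placeholders:

import boto3

bedrock = boto3.client("bedrock-runtime")

# Apply an existing guardrail to a model call so that blocked topics or
# sensitive information are filtered from the request and the response.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",      # assumed model ID
    messages=[{"role": "user", "content": [{"text": "Summarize this customer record."}]}],
    guardrailConfig={
        "guardrailIdentifier": "my-guardrail-id",           # placeholder
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
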
maximum amount of text (measured in tokens) that an LLM can accept as input
context window
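
Context windows are counted in tokens, not characters. A rough sketch of checking whether a prompt is likely to fit, using the common approximate heuristic of about four characters per token; the window size is only an example:

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

CONTEXT_WINDOW = 8_192  # example limit; the real value depends on the model

prompt = "some long document " * 3_000
if rough_token_count(prompt) > CONTEXT_WINDOW:
    print("Prompt likely exceeds the context window; truncate, summarize, or chunk it.")
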
monitor and track the performance and usage of ML models
Amazon SageMaker Model Dashboard
AWS service to detect potential bias and provide model explainability
Amazon SageMaker Clarify
evaluate, compare, and select Foundation Models (FMs) quickly
Amazon SageMaker JumpStart
human-in-the-loop data labeling to create training data
Amazon SageMaker Ground Truth
image and video analysis for facial and object recognition
Amazon Rekognition
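
A minimal boto3 sketch of object detection on an image stored in S3; the bucket and object key are placeholders:

import boto3

rekognition = boto3.client("rekognition")

# Detect up to five labels (objects, scenes) with at least 80% confidence.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/cat.jpg"}},  # placeholders
    MaxLabels=5,
    MinConfidence=80,
)
for label in response["Labels"]:
    print(label["Name"], label["Confidence"])
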
Service to build, train, and deploy ML models, and customize them for your needs
Amazon SageMaker
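
The build/train/deploy flow is easiest to see with the SageMaker Python SDK and a built-in algorithm. A minimal sketch assuming the built-in XGBoost container; the S3 paths and IAM role are placeholders:

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Built-in XGBoost container image for the current region.
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",                        # placeholder
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Train on labeled CSV data in S3, then deploy behind a real-time endpoint.
estimator.fit({"train": TrainingInput("s3://my-bucket/train/", content_type="text/csv")})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
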
extracts text from handwriting and scanned documents
Amazon Textract
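
A minimal boto3 sketch that pulls line-level text out of a scanned document in S3; the bucket and key are placeholders:

import boto3

textract = boto3.client("textract")

# Extract printed and handwritten text from a single scanned page.
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "scans/invoice.png"}}  # placeholders
)
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
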
service to build and deploy AI applications with API access to foundation models (FMs), including LLMs
Amazon Bedrock
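
The basic call pattern is a single request to the Bedrock runtime. A minimal boto3 sketch using the Converse API; the model ID is an assumption and must be enabled in your account:

import boto3

bedrock = boto3.client("bedrock-runtime")

# Send one user message to a hosted foundation model and print its reply.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{"text": "Explain overfitting in one sentence."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
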
NLP service to uncover insights such as entities, key phrases, sentiment, and relationships in text
Amazon Comprehend
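
A minimal boto3 sketch of sentiment and entity detection on a short piece of text:

import boto3

comprehend = boto3.client("comprehend")

text = "AWS re:Invent in Las Vegas was fantastic this year."

# Detect overall sentiment and named entities in the text.
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print(sentiment["Sentiment"])                      # e.g. POSITIVE
print([e["Text"] for e in entities["Entities"]])   # e.g. ['AWS re:Invent', 'Las Vegas']
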
time-series forecasts
Amazon Forecast
pay for actual usage, without an upfront payment or long-term contract.
on-demand pricing
reserves a predictable capacity in advance for a discounted rate
provisioned throughput
spare EC2 capacity at reduced rates
spot instances
lower rates for a long-term commitment
reserved instances
prompt instructing the model to avoid certain outputs or behaviors
Negative prompting
prompting the model to perform a task without providing any examples, relying solely on its general pre-trained understanding
Zero-shot Prompting
prompt providing a few examples of the task before the actual input
Few-shot Prompting
prompting the model to work through a complex question as a series of intermediate reasoning steps
Chain-of-thought prompting
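
The difference between these prompting styles is easiest to see side by side. A sketch of the three prompt shapes as plain Python strings (the tasks themselves are invented examples):

# Zero-shot: task only, no examples.
zero_shot = "Classify the sentiment of this review as positive or negative: 'The battery died in an hour.'"

# Few-shot: a handful of worked examples before the real input.
few_shot = """Review: 'Great sound quality.' -> positive
Review: 'Arrived broken.' -> negative
Review: 'The battery died in an hour.' ->"""

# Chain-of-thought: ask for intermediate reasoning steps before the final answer.
chain_of_thought = (
    "A store sells pens at 3 for $2. How much do 12 pens cost? "
    "Think through the problem step by step before giving the final answer."
)
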
Foundation Models - supervised or self-supervised learning?
creation (pre-training) - self-supervised learning
fine-tuning - supervised learning
The hyperparameter to control the creativity / randomness of LLM responses
Temperature - higher temperature = higher creativity
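
In the Bedrock Converse API, temperature is passed as part of the inference configuration. A minimal sketch comparing a low and a high setting; the model ID is an assumption and valid temperature ranges vary by model:

import boto3

bedrock = boto3.client("bedrock-runtime")
prompt = [{"role": "user", "content": [{"text": "Suggest a name for a coffee shop."}]}]

# Same prompt at a low and a high temperature: higher values give more
# varied (creative) wording, lower values give more deterministic output.
for temperature in (0.1, 0.9):
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        messages=prompt,
        inferenceConfig={"temperature": temperature, "maxTokens": 50},
    )
    print(temperature, response["output"]["message"]["content"][0]["text"])
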