Amazon Bedrock - Fine Tuning models Flashcards
(14 cards)
What is fine-tuning in the context of Amazon Bedrock?
Fine-tuning is the process of adapting a copy of a foundation model by adding your own data, which changes the underlying weights of the base model.
Fine-tuning allows for customization of models like Llama 2 with specific data stored in Amazon S3.
What type of data is required for fine-tuning a model?
Training data that adheres to a specific format and is stored in Amazon S3.
This data can include labeled examples for instruction-based fine-tuning.
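The labeled examples above can be sketched as a JSONL training file, one prompt-completion pair per line (field names follow the AWS docs for instruction-based fine-tuning; the example prompts and file name are made up):

```python
import json

# Each line is one labeled example: a prompt and the completion the
# model should learn to produce (JSONL: one JSON object per line,
# no enclosing array).
examples = [
    {"prompt": "Summarize our return policy.",
     "completion": "Items may be returned within 30 days with a receipt."},
    {"prompt": "What warranty do we offer?",
     "completion": "All products carry a one-year limited warranty."},
]

jsonl = "\n".join(json.dumps(e) for e in examples)

# This file would then be uploaded to Amazon S3 for the fine-tuning job.
with open("train.jsonl", "w") as f:
    f.write(jsonl + "\n")
```

The JSONL layout (rather than a single JSON array) lets the training service stream examples line by line from S3.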
What is the pricing model required to use a fine-tuned custom model?
Provisioned throughput.
This is different from the on-demand pricing model.
Are all models capable of being fine-tuned?
No, not all models can be fine-tuned; usually, only open-source models can be.
This limitation is important to consider when selecting models for fine-tuning.
What is instruction-based fine-tuning?
A method to improve the performance of a pre-trained foundation model on domain-specific tasks using labeled examples and prompt-response pairs.
Examples include using prompts like ‘Who is Stephane Maarek?’ with corresponding detailed responses.
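A fine-tuning job is started through Bedrock's CreateModelCustomizationJob API. A minimal sketch of the request parameters (names per the AWS API reference; the ARNs, bucket, model ID, and job names below are placeholders, not real resources):

```python
# Placeholder values throughout -- substitute your own account's
# role ARN, S3 locations, and a base model that supports fine-tuning.
params = {
    "jobName": "my-finetune-job",
    "customModelName": "my-custom-llama2",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    "baseModelIdentifier": "meta.llama2-13b-v1",  # placeholder model ID
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
    # Bedrock expects hyperparameter values as strings.
    "hyperParameters": {"epochCount": "2", "learningRate": "0.00001"},
}

# In practice you would submit this with:
#   boto3.client("bedrock").create_model_customization_job(**params)
```

Note that the job reads training data from and writes outputs to S3, which is why the role ARN must grant Bedrock access to those buckets.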
What type of data is needed for continued pre-training?
Unlabeled data.
This type of fine-tuning is also known as domain-adaptation fine-tuning.
What is an example of continued pre-training?
Feeding the entire AWS documentation to a model to make it an expert on AWS.
This process involves providing large amounts of information without labeled outputs.
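A sketch of what a continued pre-training file looks like: each JSONL line carries a single `input` field of raw, unlabeled text with no expected completion attached (field name per the AWS docs; the sentences and file name are illustrative):

```python
import json

# Unlabeled text only -- no prompt/completion pairs.
docs = [
    {"input": "Amazon EC2 provides resizable compute capacity in the cloud."},
    {"input": "Amazon S3 is an object storage service built for scalability."},
]

with open("pretrain.jsonl", "w") as f:
    for d in docs:
        f.write(json.dumps(d) + "\n")
```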
What are single-turn and multi-turn messaging in fine-tuning?
Single-turn messaging provides a single user prompt and the assistant's response (optionally preceded by a system prompt), while multi-turn messaging provides a full conversation with multiple alternating user and assistant exchanges.
These messaging types help the model understand dialogue context better.
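The two messaging types can be sketched as conversational training records (the `system`/`messages` field names follow the conversational fine-tuning format in the AWS docs; the dialogue content is invented):

```python
# Single-turn: one user prompt, one assistant reply; the optional
# "system" field sets the assistant's persona.
single_turn = {
    "system": "You are a helpful AWS support assistant.",
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant",
         "content": "Use the 'Forgot password' link on the sign-in page."},
    ],
}

# Multi-turn: the same structure, but with several alternating
# user/assistant exchanges so the model learns dialogue context.
multi_turn = {
    "system": "You are a helpful AWS support assistant.",
    "messages": [
        {"role": "user", "content": "Can I fine-tune any Bedrock model?"},
        {"role": "assistant",
         "content": "No, only certain models support fine-tuning."},
        {"role": "user", "content": "Which pricing model do I need afterwards?"},
        {"role": "assistant", "content": "Provisioned Throughput."},
    ],
}
```

As with the other formats, each record would be one line of a JSONL file in S3.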
Why is instruction-based fine-tuning generally cheaper than continued pre-training?
Instruction-based fine-tuning requires less data and is less compute-intensive than continued pre-training.
This makes it a more economical choice for specific adjustments.
What is transfer learning?
Reusing a model pre-trained on one task and adapting it to a new, related task.
Transfer learning includes fine-tuning as a specific case.
What are common use cases for fine-tuning?
- Designing chatbots with specific personas
- Updating models with exclusive data
- Targeted use cases like categorization or assessing accuracy
- Crafting advertisements
- Training with historical data
These use cases highlight the practical applications of fine-tuning in various domains.
What distinguishes fine-tuning from general transfer learning?
Fine-tuning is a specific type of transfer learning focused on adapting a model to a new task using labeled or unlabeled data.
Understanding this distinction is crucial for exam questions regarding machine learning concepts.
What are prompt-response pairs in instruction-based fine-tuning?
They are examples of how a model should respond to specific prompts, providing context and expected output.
These help guide the model’s responses during training.
What is the impact of using provisioned throughput for fine-tuned models?
It increases the cost of using the fine-tuned model compared to using on-demand pricing.
This financial aspect is important when planning for model deployment.
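Provisioned Throughput is purchased via Bedrock's CreateProvisionedModelThroughput API. A minimal sketch of the request parameters (names per the AWS API reference; the model ARN and resource names are placeholders):

```python
# Placeholder values -- substitute your custom model's ARN.
params = {
    "provisionedModelName": "my-custom-model-throughput",
    "modelId": "arn:aws:bedrock:us-east-1:123456789012:custom-model/my-custom-llama2",
    "modelUnits": 1,  # capacity is bought in model units
    # "commitmentDuration": "SixMonths",  # 1- or 6-month terms lower the
    # hourly rate; omitting it bills hourly with no commitment
}

# In practice:
#   boto3.client("bedrock").create_provisioned_model_throughput(**params)
```

Once provisioned, you invoke the custom model through this provisioned resource rather than through on-demand endpoints, and you pay for the reserved capacity whether or not it is fully used.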