Domain 2: Gen AI Fundamentals 24% Flashcards
(93 cards)
____ is a subset of deep learning. Like deep learning, this is a multipurpose technology that helps to generate new original content rather than finding or classifying existing content.
Generative AI
_____ looks for statistical patterns in modalities, such as natural language and images.
Gen AI foundational models
_____ are very large and complex neural network models with billions of parameters that are learned during the training phase or pre-training.
Gen AI foundational models
The more parameters a model has, the more _____ it has, allowing the model to perform more advanced tasks.
memory
Gen AI models are built with _____, _____, _____, and _____ all working together.
neural networks, system resources, data, and prompts
The current core element of generative AI is the _____.
transformer network
_____ are pre-trained on massive amounts of text data from the internet, and they use this pre-training process to build up a broad knowledge base.
Large Language Models (LLMs)
A ____ is a natural language text that requests the generative AI to perform a specific task.
prompt
The process of reducing the size of one model (known as the teacher) into a smaller model (known as the student) that emulates the original model’s predictions as faithfully as possible.
distillation
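The teacher-to-student transfer above is often implemented by training the student to match the teacher's temperature-softened output distribution. A minimal sketch in pure Python, using illustrative toy logits (the temperature value and 3-class setup are assumptions, not from the card):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened outputs against the teacher's."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Toy logits for a 3-class problem (illustrative values only).
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.8]
loss = distillation_loss(teacher, student)
```

The loss is minimized when the student reproduces the teacher's distribution exactly, which is what "emulating the teacher's predictions as faithfully as possible" means in practice.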
A prompt that contains more than one example demonstrating how the large language model should respond.
few-shot prompting
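A few-shot prompt is typically just a string that stitches an instruction, several worked examples, and the new query together. A minimal sketch (the sentiment-classification task and example texts are hypothetical):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a prompt: instruction, worked examples, then the new query."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # The final query is left without an output for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each movie review as positive or negative.",
    [("A delightful, moving film.", "positive"),
     ("Two hours I will never get back.", "negative")],
    "The plot was thin but the acting was superb.",
)
```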
A second, task-specific training pass performed on a pre-trained model to refine its parameters for a specific use case.
fine-tuning
A form of fine-tuning that improves a generative AI model’s ability to follow instructions, which involves training a model on a series of instruction prompts, typically covering a wide variety of tasks. The resulting model generates useful responses to zero-shot prompts across a variety of tasks.
instruction tuning
An algorithm for performing parameter efficient tuning that fine-tunes only a subset of a large language model’s parameters.
Low-Rank Adaptation (LoRA)
A system that picks the ideal model for a specific inference query.
model cascading
The algorithm that determines the ideal model for inference in model cascading. It is typically a machine learning model that gradually learns how to pick the best model for a given input, though it can sometimes be a simpler, non-machine-learning algorithm.
model router
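The simpler, non-machine-learning variant of a router can be sketched as a heuristic that sends easy queries to a cheap model and harder ones to a larger model. A minimal sketch (the model names, the token threshold, and the length-based heuristic are all assumptions for illustration; a real router would call an inference API):

```python
def route_query(query, cheap_model, large_model, max_cheap_tokens=20):
    """A simple non-ML router: short queries go to the cheap model,
    longer ones to the larger, more capable model."""
    n_tokens = len(query.split())  # crude whitespace tokenization
    return cheap_model if n_tokens <= max_cheap_tokens else large_model

# Hypothetical model identifiers.
chosen = route_query("What is 2 + 2?", "small-model", "large-model")
```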
A prompt that contains one example demonstrating how the large language model should respond.
one-shot prompting
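A one-shot prompt might be assembled like this: one worked example, then the new query for the model to complete (the translation task and sentences are hypothetical):

```python
one_shot_prompt = "\n".join([
    "Translate English to French.",
    "",
    "English: The house is blue.",   # the single worked example
    "French: La maison est bleue.",
    "",
    "English: The cat is small.",    # the new query for the model
    "French:",                       # left blank for the model to complete
])
```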
A set of techniques to fine-tune a large pre-trained language model (PLM) more efficiently than full fine-tuning. It typically fine-tunes far fewer parameters than full fine-tuning, yet generally produces a large language model that performs as well (or almost as well) as a large language model built from full fine-tuning.
parameter-efficient tuning
Models or model components (such as an embedding vector) that have already been trained. Sometimes, you’ll feed pre-trained embedding vectors into a neural network. Other times, your model will train the embedding vectors themselves rather than rely on the pre-trained embeddings.
pre-trained model
The initial training of a model on a large dataset. Some models are clumsy giants and must typically be refined through additional training.
pre-training
Any text entered as input to a large language model to condition the model to behave in a certain way. These can be as short as a phrase or arbitrarily long (for example, the entire text of a novel).
prompt
A capability of certain models that enables them to adapt their behavior in response to arbitrary text input (prompts). In this paradigm, a large language model responds to a prompt by generating text.
prompt-based learning
The art of creating prompts that elicit the desired responses from a large language model. Humans perform this task; writing well-structured prompts is an essential part of ensuring useful responses from a large language model.
prompt engineering
Using feedback from human raters to improve the quality of a model’s responses. The system can then adjust its future responses based on that feedback.
Reinforcement Learning from Human Feedback (RLHF)
An optional part of a prompt that identifies a target audience for a generative AI model’s response. Without it, a large language model provides an answer that may or may not be useful for the person asking the questions. With it, a large language model can answer in a way that’s more appropriate and more helpful for a specific target audience.
role prompting
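Role prompting can be as simple as prefixing the same question with different target audiences. A minimal sketch (the phrasing and audiences are hypothetical):

```python
def add_role(prompt, audience):
    """Prefix a prompt with a target-audience role."""
    return f"Answer as if explaining to {audience}. {prompt}"

question = "Why is the sky blue?"
# The same question, tailored to two different audiences.
for audience in ("a five-year-old", "a physics graduate student"):
    role_prompt = add_role(question, audience)
```

The model's answer to each variant differs in depth and vocabulary even though the underlying question is identical.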