LLM Concepts Flashcards

(49 cards)

1
Q

What is a Large Language Model (LLM)?

A

A deep learning model trained on large corpora of text to understand and generate human language.

2
Q

What architecture do most LLMs use?

A

The transformer architecture.

3
Q

What is the transformer model?

A

A model that uses self-attention mechanisms to process sequences in parallel.

4
Q

What is self-attention?

A

A mechanism that lets the model weigh how relevant each part of a sequence is to every other part when building its representations.

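A minimal NumPy sketch of scaled dot-product self-attention, the form used in transformers (the matrices and dimensions below are illustrative, not taken from any real model):

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of embeddings X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v               # query/key/value projections
    scores = Q @ K.T / np.sqrt(Q.shape[-1])           # relevance of token j to token i
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))                       # 3 tokens, 4-dim embeddings
W_q, W_k, W_v = (rng.standard_normal((4, 4)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)         # (3, 4)
```
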
5
Q

What is positional encoding in transformers?

A

Information added to input tokens to preserve word order.

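A sketch of the sinusoidal encoding from the original transformer paper; the matrix it returns is added element-wise to the token embeddings (sequence length and dimension here are arbitrary):

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    """Sinusoidal positional encodings ('Attention Is All You Need')."""
    pos = np.arange(seq_len)[:, None]          # positions 0..seq_len-1
    i = np.arange(d_model // 2)[None, :]       # index of each sin/cos pair
    angles = pos / 10000 ** (2 * i / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)              # even dimensions: sine
    enc[:, 1::2] = np.cos(angles)              # odd dimensions: cosine
    return enc

token_embeddings = np.zeros((10, 16))          # 10 tokens, 16-dim embeddings
inputs = token_embeddings + sinusoidal_encoding(10, 16)  # order information injected
```
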
6
Q

What is a token?

A

A unit of text, often a word or subword, processed by the model.

7
Q

What is tokenization?

A

The process of converting text into tokens.

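For illustration, tokenization with Hugging Face's transformers library (assuming it is installed; the exact subword splits and ids depend on the tokenizer chosen):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer.encode("Tokenization splits text into subwords.")
print(tokenizer.convert_ids_to_tokens(ids))  # subword pieces, e.g. 'Token', 'ization', ...
print(ids)                                   # the integer ids the model actually sees
```
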
8
Q

What is pretraining in LLMs?

A

Training the model on large unlabeled text data to learn general language patterns.

9
Q

What is fine-tuning?

A

Adapting a pretrained model to a specific task with additional labeled data.

10
Q

What is masked language modeling?

A

A training task where some input tokens are hidden and the model must predict them.

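A toy illustration of the masking step (the sentence and the masked position are made up; BERT-style training masks roughly 15% of positions at random):

```python
tokens = ["the", "cat", "sat", "on", "the", "mat"]
mask_positions = {2}                # in practice ~15% of positions, chosen at random
masked = ["[MASK]" if i in mask_positions else t for i, t in enumerate(tokens)]
print(masked)                       # ['the', 'cat', '[MASK]', 'on', 'the', 'mat']
# Objective: predict the original token ('sat') at each masked position.
```
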
11
Q

What is causal language modeling?

A

A training task where the model predicts the next token in a sequence.

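A minimal sketch of how training pairs are formed: inputs and targets are the same sequence shifted by one position (the token ids are hypothetical):

```python
token_ids = [5, 17, 42, 8, 99]  # hypothetical ids for one training sequence
inputs = token_ids[:-1]         # [5, 17, 42, 8]
targets = token_ids[1:]         # [17, 42, 8, 99]
# At position i the model sees inputs[:i+1] and is trained, via cross-entropy
# over the vocabulary, to assign high probability to targets[i].
```
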
12
Q

What is GPT?

A

Generative Pre-trained Transformer, a causal language model trained to predict the next token.

13
Q

What is BERT?

A

Bidirectional Encoder Representations from Transformers, an encoder-only model pretrained with masked language modeling.

14
Q

How is GPT different from BERT?

A

GPT is unidirectional and suited to generation; BERT is bidirectional and suited to understanding tasks such as classification.

15
Q

What is zero-shot learning?

A

Performing a task without any task-specific examples, relying only on knowledge acquired during pretraining.

16
Q

What is few-shot learning?

A

Learning a task from only a few examples, often supplied directly in the prompt.

17
Q

What is instruction tuning?

A

Fine-tuning an LLM on instruction-response pairs so that it learns to follow natural-language instructions.

18
Q

What is prompt engineering?

A

The craft of designing effective input prompts to guide LLM behavior.

19
Q

What is a system prompt?

A

A privileged prompt, set before the user's messages, that steers the model's behavior for the whole session.

20
Q

What is a context window?

A

The maximum number of tokens an LLM can process at once.

21
Q

What is an attention mechanism?

A

A method that lets models focus on different parts of the input when making predictions.

22
Q

What is temperature in text generation?

A

A parameter that controls randomness — higher values yield more diverse outputs.
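
A small NumPy sketch of how temperature reshapes the next-token distribution before sampling (the logits are illustrative):

```python
import numpy as np

def apply_temperature(logits, temperature):
    """Softmax over logits divided by the temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(apply_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(apply_temperature(logits, 2.0))  # flatter: sampling becomes more diverse
```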

23
Q

What is top-k sampling?

A

A decoding method that samples from the top k most likely next tokens.
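
A minimal sketch of top-k sampling over raw logits (the vocabulary and values are illustrative):

```python
import numpy as np

def top_k_sample(logits, k, rng=np.random.default_rng()):
    """Sample a token id from the k most likely candidates."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]             # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                      # renormalize over the top k only
    return int(rng.choice(top, p=probs))

print(top_k_sample([2.0, 1.0, 0.1, -1.0], k=2))  # always one of the two best tokens
```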

24
Q

What is top-p (nucleus) sampling?

A

A method that samples from the smallest set of tokens with a cumulative probability > p.
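
A sketch of nucleus sampling: sort tokens by probability, keep the smallest prefix whose cumulative probability exceeds p, and sample from it (the logits are illustrative):

```python
import numpy as np

def top_p_sample(logits, p, rng=np.random.default_rng()):
    """Sample from the smallest token set with cumulative probability > p."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]             # most likely token first
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    nucleus = order[:cutoff]                    # smallest set covering p
    return int(rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))

print(top_p_sample([2.0, 1.0, 0.1, -1.0], p=0.9))
```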

25
Q

What is beam search?

A

A decoding strategy that keeps multiple candidate sequences at each step.

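A toy sketch of beam search; `next_log_probs` is a hypothetical stand-in for a model call that returns log-probabilities for the next token given a prefix:

```python
import math

def beam_search(next_log_probs, beam_width=2, steps=3):
    """Keep the beam_width best partial sequences at every step."""
    beams = [([], 0.0)]                          # (token sequence, total log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for tok, lp in enumerate(next_log_probs(seq)):
                candidates.append((seq + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]          # prune to the best beam_width
    return beams

toy_model = lambda seq: [math.log(p) for p in (0.6, 0.3, 0.1)]  # 3-token vocab
print(beam_search(toy_model))   # the two highest-scoring 3-token sequences
```
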
26
Q

What is hallucination in LLMs?

A

When a model generates text that is fluent but factually incorrect.

27
Q

What is RLHF?

A

Reinforcement Learning from Human Feedback, a technique used to align LLMs with human preferences.

28
Q

What is a language model's perplexity?

A

A measure of how well the model predicts a sample; lower is better.

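A worked example, assuming we already have per-token log-probabilities from the model:

```python
import math

def perplexity(token_log_probs):
    """exp(average negative log-likelihood per token); lower is better."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# A model that gives each of 4 tokens probability 0.25 is exactly as
# "confused" as a uniform 4-way choice:
print(perplexity([math.log(0.25)] * 4))  # 4.0
```
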
29
Q

What is an embedding?

A

A numeric representation of text that captures semantic meaning.

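Embeddings are compared geometrically, usually with cosine similarity. The 4-dimensional vectors below are made up; real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """1.0 means same direction (semantically close); near 0 means unrelated."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([0.2, 0.8, 0.1, 0.4], [0.25, 0.7, 0.05, 0.5]))
```
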
30
Q

What is a vector database?

A

A database designed to store and search embeddings efficiently.

31
Q

What is retrieval-augmented generation (RAG)?

A

A technique that combines external knowledge retrieval with generation.

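A schematic sketch of the RAG flow; `embed_fn`, `vector_search`, and `llm_generate` are hypothetical placeholders for an embedding model, a vector-database query, and an LLM call:

```python
def rag_answer(question, embed_fn, vector_search, llm_generate, k=3):
    """Retrieve relevant passages, then condition generation on them."""
    query_vec = embed_fn(question)                  # embed the user question
    passages = vector_search(query_vec, top_k=k)    # nearest-neighbor lookup
    context = "\n\n".join(passages)
    prompt = (
        f"Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_generate(prompt)
```
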
32
Q

What is chain-of-thought prompting?

A

A method where the model is encouraged to explain its reasoning before answering.

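An example of the idea in prompt form (the question and cue phrase are illustrative; many phrasings work):

```python
# The trailing cue invites the model to produce intermediate reasoning
# steps before committing to a final answer.
prompt = (
    "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "A: Let's think step by step."
)
```
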
33
Q

What is a decoder-only transformer?

A

A transformer model that generates output sequentially, like GPT.

34
Q

What is an encoder-only transformer?

A

A transformer that creates contextual embeddings for input, like BERT.

35
Q

What is an encoder-decoder transformer?

A

A model architecture used for translation or summarization, like T5.

36
Q

What is T5?

A

Text-to-Text Transfer Transformer, an encoder-decoder model that treats all tasks as text-to-text.

37
Q

What is parameter count in LLMs?

A

The number of learnable weights in the model; a common indicator of model size.

38
Q

Why do LLMs need large datasets?

A

To capture a wide range of language patterns and knowledge.

39
Q

What is data contamination in training?

A

When test data is accidentally included in training data.

40
Q

What is a safety filter in LLMs?

A

A mechanism to block harmful or inappropriate outputs.

41
Q

What is the alignment problem?

A

Ensuring that AI systems behave in accordance with human intent.

42
Q

What is a language model's vocabulary?

A

The set of tokens it can recognize and generate.

43
Q

What is model distillation?

A

Compressing a large model into a smaller one that approximates its behavior.

44
Q

What is quantization?

A

Reducing the precision of model weights to decrease memory usage.

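A minimal sketch of symmetric int8 quantization with one global scale (real schemes are usually per-channel or per-block):

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    return np.round(weights / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.31, -1.8, 0.02, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
print(q)                   # 4 bytes instead of 16
print(dequantize(q, s))    # close to w: precision traded for memory
```
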
45
Q

What is model pruning?

A

Removing unnecessary weights or neurons to simplify the model.

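A sketch of magnitude pruning, one common heuristic: drop the weights with the smallest absolute values:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.array([0.31, -1.8, 0.02, 0.9])
print(magnitude_prune(w))   # the two smallest entries become zero
```
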
46
Q

What is latency in LLMs?

A

The time it takes to generate a response after receiving input.

47
Q

What is inference in LLMs?

A

The process of generating predictions using a trained model.

48
Q

What is an API endpoint in LLMs?

A

A service interface to interact with the model programmatically.

49
Q

What is a multi-modal LLM?

A

A model that processes and generates multiple data types, such as text and images.