NLP Flashcards

(28 cards)

1
Q

What is the goal of Natural Language Processing (NLP)?

A

To enable machines to understand, generate, and interact using human language.

2
Q

What makes language understanding difficult for machines?

A

It requires context, cultural knowledge, and interpretation of non-literal meaning.

3
Q

What is an idiom?

A

A phrase where the meaning is not derived from the literal meanings of the words.

4
Q

How did Plato view the origin of meaning?

A

As emerging from abstract, ideal forms and logical rules.

5
Q

How did Aristotle view the origin of meaning?

A

As grounded in real-world experience and empirical observation.

6
Q

What is symbolic NLP?

A

An approach based on hand-crafted rules and grammar logic.

7
Q

What replaced symbolic NLP in the 1990s?

A

Statistical NLP using corpus-based models.

8
Q

What major shift happened in NLP post-2010?

A

The rise of neural network-based models and deep learning.

9
Q

Why did traditional NLP fail with sarcasm and idioms?

A

Because it relied too heavily on syntax and ignored context.

10
Q

What is the pre-train → fine-tune paradigm?

A

A two-step process where models are first trained on general tasks, then adapted to specific tasks.

11
Q

What kind of tasks are used for pre-training language models?

A

Masked language modeling or next-word prediction.
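The masked-language-modeling objective can be sketched in plain Python. This is a toy illustration only: the sentence, the mask ratio of 15%, and whole-word masking are illustrative assumptions, while real models like BERT mask subword tokens and predict them with a neural network.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", ratio=0.15, seed=0):
    """Toy masked language modeling: hide a fraction of tokens; the
    model is trained to predict the hidden originals (the labels)."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < ratio:
            inputs.append(mask_token)   # the model sees the mask...
            labels.append(tok)          # ...and must recover this token
        else:
            inputs.append(tok)
            labels.append(None)         # no loss computed at this position
    return inputs, labels

inputs, labels = mask_tokens("the cat sat on the mat and purred".split(), seed=1)
```

Next-word prediction works the same way but always hides the token to the right of the current position.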

12
Q

What is fine-tuning in NLP?

A

Adapting a pre-trained model to a specific downstream task.

13
Q

What can probing reveal about pre-trained models like BERT?

A

They implicitly learn grammar, syntax, and structure.

14
Q

What is hallucination in LLMs?

A

When a model generates false or fabricated content that sounds plausible.

15
Q

Why do LLMs hallucinate?

A

They generate outputs based on statistical patterns, not factual memory.

16
Q

What is one way to reduce hallucination in LLMs?

A

Use Retrieval-Augmented Generation (RAG) or external grounding tools.
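The grounding step in RAG can be sketched as follows. The word-overlap ranking here is a naive stand-in for the dense vector search a production RAG system would use, and the documents and prompt wording are made up for illustration.

```python
import re

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query -- a naive stand-in
    for dense vector retrieval."""
    words = lambda text: set(re.findall(r"[a-z0-9]+", text.lower()))
    q = words(query)
    ranked = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(query, documents):
    """Prepend retrieved evidence so the model answers from the context
    rather than from its (possibly hallucinated) parametric memory."""
    context = "\n".join(retrieve(query, documents))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
]
prompt = grounded_prompt("How tall is the Eiffel Tower?", docs)
```

Because the relevant fact is placed directly in the prompt, the model can copy it instead of guessing from statistical patterns.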

17
Q

What is emergence in large language models?

A

The appearance of new abilities when models reach a certain size or scale.

18
Q

What are examples of emergent abilities in LLMs?

A

Arithmetic, reasoning, and chain-of-thought inference.

19
Q

What is the difference between form and function in language?

A

Form refers to syntax and grammar, while function refers to real-world meaning and use.

20
Q

How do LLMs trained on form exhibit functional understanding?

A

By generalising patterns in usage and context without explicit programming.

21
Q

What is zero-shot learning in the context of LLMs?

A

Performing a new task without having seen explicit examples during training.

22
Q

What does ‘chain-of-thought prompting’ help with?

A

Breaking down reasoning into smaller, interpretable steps.
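A minimal sketch of the technique: appending the zero-shot cue "Let's think step by step." from the chain-of-thought literature nudges the model to emit its intermediate steps before the final answer.

```python
def cot_prompt(question):
    """Zero-shot chain-of-thought: the trailing cue elicits intermediate
    reasoning steps before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."

p = cot_prompt("If I have 3 apples and buy 2 more, how many do I have?")
```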

23
Q

What is self-consistency in prompting?

A

Generating multiple answers and choosing the most consistent or frequent one.
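The voting step can be sketched in a few lines; the sampled answers below are hypothetical, standing in for the final answers extracted from several independently sampled reasoning chains.

```python
from collections import Counter

def self_consistent_answer(final_answers):
    """Self-consistency: sample several independent reasoning chains,
    keep only each chain's final answer, and return the majority vote."""
    return Counter(final_answers).most_common(1)[0][0]

# Final answers extracted from five hypothetical sampled chains:
answer = self_consistent_answer(["42", "42", "41", "42", "40"])
```

A single chain can go wrong at any step; taking the majority over many chains averages out those stray errors.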

24
Q

What does least-to-most prompting involve?

A

Decomposing a problem from easiest to hardest parts.
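A rough sketch of the staged prompting this involves, under the assumption that the subproblems are already ordered easiest-first; in a real system each stage's context would contain the model's actual answer, not just the subproblem text.

```python
def least_to_most_prompts(subproblems):
    """Least-to-most prompting: solve subproblems easiest-first, feeding
    each (placeholder) solution into the prompt for the next stage."""
    solved, prompts = [], []
    for sub in subproblems:            # assumed ordered easiest -> hardest
        context = "".join(f"Already solved: {s}\n" for s in solved)
        prompts.append(f"{context}Now solve: {sub}")
        solved.append(sub)             # a real system would append the model's answer
    return prompts

stages = least_to_most_prompts([
    "How long does one leg of the trip take?",
    "How long does the whole round trip take?",
])
```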

25

Q

What is the 'program-of-thought' approach?

A

Combining logical reasoning with structured code-like output.

26

Q

What is the goal of reasoning-based LLMs like DeepSeek R1?

A

To generate and evaluate multiple reasoning chains for robust conclusions.

27

Q

How do LLMs make predictions?

A

By predicting the next word in a sequence based on context.

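Next-word prediction can be illustrated with a toy softmax over hypothetical scores; the candidate words and logit values are made up, and a real LLM produces scores over its entire vocabulary rather than three words.

```python
import math

def next_word_distribution(logits):
    """Turn a model's raw scores for candidate next words into
    probabilities with a numerically stable softmax."""
    m = max(logits.values())
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores after the prefix "the cat sat on the":
probs = next_word_distribution({"mat": 3.0, "dog": 1.0, "sky": 0.2})
```

Sampling or taking the argmax from this distribution, then repeating with the chosen word appended, is how generation proceeds token by token.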
28

Q

Why can't LLMs store and recall facts like databases?

A

They don't have persistent memory; they rely on generalised patterns.