NLP Flashcards
(24 cards)
What is the goal of Natural Language Processing (NLP)?
To enable machines to understand, generate, and interact using human language.
What makes language understanding difficult for machines?
It requires context, cultural knowledge, and interpretation of non-literal meaning.
What is an idiom?
A phrase whose meaning cannot be derived from the literal meanings of its words; for example, 'kick the bucket' means 'to die'.
How did Plato view the origin of meaning?
As emerging from abstract, ideal forms and logical rules.
How did Aristotle view the origin of meaning?
As grounded in real-world experience and empirical observation.
What is symbolic NLP?
An approach based on hand-crafted rules and formal grammars.
What replaced symbolic NLP in the 1990s?
Statistical NLP using corpus-based models.
What major shift happened in NLP post-2010?
The rise of neural network-based models and deep learning.
Why did traditional NLP fail with sarcasm and idioms?
Because it relied on literal, rule-based analysis of syntax and lacked the contextual and world knowledge that non-literal meaning requires.
What is the pre-train → fine-tune paradigm?
A two-step process in which a model is first pre-trained on large amounts of general text using a self-supervised objective, then adapted to a specific downstream task.
What kind of tasks are used for pre-training language models?
Self-supervised objectives such as masked language modeling (as in BERT) or next-word prediction (as in GPT).
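A minimal sketch of the masked-language-modeling objective, using the Hugging Face transformers fill-mask pipeline (assumes the transformers package and the bert-base-uncased checkpoint are available):

```python
# Masked language modeling: the model predicts the token hidden behind [MASK].
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
# The top prediction should be "paris" with high probability.
```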
What is fine-tuning in NLP?
Adapting a pre-trained model to a specific downstream task.
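A hedged sketch of fine-tuning with the Hugging Face Trainer API; the dataset choice and hyperparameters below are illustrative, not prescriptive:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The pre-trained encoder gets a fresh classification head for the new task.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # illustrative downstream task: sentiment

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=0).select(range(2000)),  # small slice for speed
)
trainer.train()  # updates the pre-trained weights on the downstream task
```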
What can probing reveal about pre-trained models like BERT?
That they implicitly learn linguistic structure, such as syntax and parts of speech, without explicit supervision.
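A toy probing sketch: freeze BERT, read out [CLS] embeddings, and fit a simple linear classifier on a linguistic property (here past vs. present tense; the sentences and labels are made up for illustration):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(sentence):
    # Frozen model: no gradients, no fine-tuning -- we only read representations.
    with torch.no_grad():
        out = bert(**tok(sentence, return_tensors="pt"))
    return out.last_hidden_state[0, 0].numpy()  # [CLS] vector

sents = ["She walked home.", "She walks home.",
         "They played chess.", "They play chess."]
labels = [1, 0, 1, 0]  # 1 = past tense (toy probing labels)

probe = LogisticRegression(max_iter=1000).fit([embed(s) for s in sents], labels)
print(probe.predict([embed("He cooked dinner.")]))
# If a linear probe recovers the feature, the frozen model encodes it.
```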
What is hallucination in LLMs?
When a model generates false or fabricated content that sounds plausible.
Why do LLMs hallucinate?
They generate text by predicting statistically likely continuations; they have no built-in mechanism for retrieving or verifying facts.
What is one way to reduce hallucination in LLMs?
Use Retrieval-Augmented Generation (RAG) or external grounding tools.
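A minimal RAG sketch; llm_generate() is a hypothetical stand-in for any text-generation API, and retrieval here is plain TF-IDF for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 metres tall.",
]
vec = TfidfVectorizer().fit(docs)
doc_matrix = vec.transform(docs)

def retrieve(query, k=1):
    # Rank documents by similarity to the query, keep the top k.
    sims = cosine_similarity(vec.transform([query]), doc_matrix)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

def answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"
    # Grounding the prompt in retrieved text gives the model facts to copy
    # instead of patterns to improvise from.
    return llm_generate(prompt)  # hypothetical stand-in for an LLM call
```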
What is emergence in large language models?
The appearance of new abilities when models reach a certain size or scale.
What are examples of emergent abilities in LLMs?
Arithmetic, reasoning, and chain-of-thought inference.
What is the difference between form and function in language?
Form refers to syntax and grammar, while function refers to real-world meaning and use.
How do LLMs trained on form exhibit functional understanding?
By generalising patterns in usage and context without explicit programming.
What is zero-shot learning in the context of LLMs?
Performing a new task without having seen explicit examples during training.
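One concrete form of this is the transformers zero-shot-classification pipeline, which scores text against labels it never saw during training (assumes the facebook/bart-large-mnli checkpoint):

```python
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = clf("The keyboard stopped working after the update.",
             candidate_labels=["hardware issue", "software issue", "billing question"])
print(result["labels"][0])  # most likely label, with no task-specific training
```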
What does ‘chain-of-thought prompting’ help with?
Breaking down reasoning into smaller, interpretable steps.
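A sketch of a chain-of-thought prompt; llm_generate() is again a hypothetical stand-in for any completion API, and the sample output is illustrative:

```python
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# The trailing instruction elicits intermediate steps before the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
print(llm_generate(cot_prompt))  # hypothetical stand-in
# Illustrative output: "12 pens is 4 groups of 3. 4 x $2 = $8. The answer is $8."
```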
What is self-consistency in prompting?
Generating multiple answers and choosing the most consistent or frequent one.
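A self-consistency sketch: sample several reasoning paths at non-zero temperature, extract each final answer, and take the majority vote (llm_generate() and its temperature parameter are hypothetical):

```python
import re
from collections import Counter

def self_consistent_answer(prompt, n=5):
    answers = []
    for _ in range(n):
        # Non-zero temperature so each sample can follow a different reasoning path.
        text = llm_generate(prompt, temperature=0.7)  # hypothetical API
        match = re.search(r"The answer is (\S+)", text)
        if match:
            answers.append(match.group(1))
    # Majority vote across samples; agreement is the consistency signal.
    return Counter(answers).most_common(1)[0][0] if answers else None
```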
What does least-to-most prompting involve?
Decomposing a problem into sub-problems, solving them in order from easiest to hardest, and feeding earlier answers into later steps.
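A least-to-most sketch: first ask the model to decompose the problem, then solve the sub-problems in order, each one seeing the answers so far (llm_generate() hypothetical):

```python
def least_to_most(problem):
    # Step 1: decomposition, easiest sub-problem first.
    subqs = llm_generate(  # hypothetical stand-in for an LLM call
        f"Decompose into simpler subquestions, easiest first, one per line:\n{problem}"
    ).splitlines()

    # Step 2: solve sequentially, accumulating context.
    solved = ""
    for sq in subqs:
        answer = llm_generate(f"{solved}\nQ: {sq}\nA:")
        solved += f"\nQ: {sq}\nA: {answer}"
    return solved
```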