Generative AI Flashcards

This deck helps you learn the concepts behind ChatGPT, Bard, and other generative AI topics.

1
Q

What does RLHF stand for?

A

Reinforcement Learning from Human Feedback.

2
Q

What does LLM stand for?

A

Large Language Model

3
Q

What is the difference between Base LLM and Instruction Tuned LLM?

A

A base LLM is trained to predict the next word from large amounts of text data. An instruction-tuned LLM is further trained to follow instructions; it is typically refined with RLHF and is intended to be helpful, honest, and harmless.
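To make the distinction concrete, here is a minimal sketch using the Hugging Face transformers library. The model choices are illustrative assumptions, not part of the original card: gpt2 stands in for a base LLM and google/flan-t5-small for an instruction-tuned LLM.

  from transformers import pipeline

  prompt = "Explain what a large language model is."

  # Base LLM: simply continues the text, which may or may not address the request.
  base_lm = pipeline("text-generation", model="gpt2")
  print(base_lm(prompt, max_new_tokens=40)[0]["generated_text"])

  # Instruction-tuned LLM: treats the prompt as an instruction to follow.
  instruct_lm = pipeline("text2text-generation", model="google/flan-t5-small")
  print(instruct_lm(prompt, max_new_tokens=40)[0]["generated_text"])

In practice the base model tends to ramble on from the prompt, while the instruction-tuned model returns a direct answer.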

4
Q

What is Google AI Language Model?

A

Google AI Language Model refers to Google's large language models (LLMs), which are trained on massive datasets of text and code. Although still under development, these models can be used for a variety of tasks, including:

  • Generating text
  • Translating languages
  • Writing different kinds of creative content
  • Answering questions in an informative way

They are valuable tools for developers, writers, and anyone who wants to be more productive, for example by speeding up coding.
5
Q

Natural language processing (NLP)

A

NLP is a field of computer science that deals with the interaction between computers and human (natural) languages.

6
Q

Machine learning (ML)

A

ML is a field of computer science that gives computers the ability to learn without being explicitly programmed.

7
Q

Deep learning (DL)

A

DL is a subset of ML that uses artificial neural networks to learn from data.

8
Q

Transformer

A

The transformer is a neural network architecture that uses self-attention to learn long-range dependencies in sequence data such as text.

9
Q

BERT

A

BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model that can be used for a variety of NLP tasks, including text classification, question answering, and natural language inference.
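As an illustration, the short sketch below uses the Hugging Face transformers library and the public bert-base-uncased checkpoint (both assumptions, not part of the original card) to run BERT's masked-language-model head:

  from transformers import pipeline

  # BERT predicts the masked word using context from both the left and the right.
  fill_mask = pipeline("fill-mask", model="bert-base-uncased")
  for prediction in fill_mask("Paris is the [MASK] of France."):
      print(prediction["token_str"], round(prediction["score"], 3))

For downstream tasks such as text classification, the same pre-trained model is usually fine-tuned with a small task-specific head on labeled data.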

10
Q

GloVe

A

GloVe is a word embedding model that can be used to represent words as vectors.

GloVe stands for Global Vectors for Word Representation. It is a word embedding model trained on a large corpus of text; the resulting vectors capture word meaning, so words that appear in similar contexts end up with similar vectors. These embeddings can be used as input features for a variety of NLP tasks, such as:

  • Text classification: for example, training a model to classify text as spam or not spam.
  • Question answering: for example, training a model to answer questions about a given piece of text.
  • Natural language inference: for example, training a model to decide whether one sentence follows from, contradicts, or is unrelated to another.

Advantages of GloVe word embeddings:

  • Efficiency: pre-trained GloVe vectors are easy to load and use, so NLP models built on them can be trained quickly.
  • Accuracy: GloVe embeddings perform well on a variety of NLP tasks and are competitive with other word embedding models such as Word2Vec.
  • Scalability: GloVe can be trained on very large corpora, so models can learn from large amounts of data.

Disadvantages of GloVe word embeddings:

  • Training cost: training GloVe from scratch is computationally expensive and requires a large corpus.
  • Fixed vocabulary: words not seen during training have no vector, and every word gets a single vector regardless of context, which limits handling of rare or ambiguous words.
  • Parameter tuning: hyperparameters such as vector size and context window must be tuned, which can be time-consuming.

Overall, GloVe embeddings are efficient, accurate, and scalable, making them a good default when you need simple pre-trained word vectors. If you need vectors for rare or unseen words, or context-dependent representations, consider subword or contextual embedding models instead.
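As a quick illustration, the sketch below loads pre-trained GloVe vectors from the standard text format (one word followed by its vector components per line) and compares words by cosine similarity. The file name glove.6B.100d.txt matches the publicly released Stanford download and is an assumption here:

  import numpy as np

  # Each line of the GloVe text file is "<word> <v1> <v2> ... <vN>".
  embeddings = {}
  with open("glove.6B.100d.txt", encoding="utf-8") as f:
      for line in f:
          word, *values = line.split()
          embeddings[word] = np.asarray(values, dtype=np.float32)

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  # Words used in similar contexts get similar vectors.
  print(cosine(embeddings["king"], embeddings["queen"]))   # relatively high
  print(cosine(embeddings["king"], embeddings["banana"]))  # relatively low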

11
Q

What are long-range dependencies?

A

Long-range dependencies are relationships between words that are far apart in a sentence or document. In the context of transformers, handling long-range dependencies means the model can relate a word to relevant words no matter how distant they are.

12
Q

Why are long-range dependencies important?

A

Long-range dependencies are important for many natural language processing tasks, such as machine translation and text summarization; for example, resolving a pronoun may require looking many words back to find the noun it refers to.

13
Q

How do transformers learn long-range dependencies?

A

Transformers learn long-range dependencies by using self-attention. Self-attention is a mechanism that lets every position in a sequence attend to every other position, regardless of distance, so a word can draw on context from anywhere in the input.
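To show the mechanism concretely, here is a minimal NumPy sketch of single-head scaled dot-product self-attention. The token embeddings and projection matrices are random toy values (assumptions for illustration); in a real transformer they are learned:

  import numpy as np

  rng = np.random.default_rng(0)
  seq_len, d_model = 5, 8                       # 5 toy tokens, 8-dimensional embeddings

  x = rng.normal(size=(seq_len, d_model))       # token embeddings (toy values)
  w_q = rng.normal(size=(d_model, d_model))     # query/key/value projections
  w_k = rng.normal(size=(d_model, d_model))     # (learned in a real model)
  w_v = rng.normal(size=(d_model, d_model))

  q, k, v = x @ w_q, x @ w_k, x @ w_v

  # Every position scores every other position, so token 0 can attend to token 4
  # just as easily as to token 1; this is what captures long-range dependencies.
  scores = q @ k.T / np.sqrt(d_model)
  weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
  weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over positions
  output = weights @ v

  print(weights.round(2))   # attention weights: each row sums to 1
  print(output.shape)       # (5, 8): one context-aware vector per token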

14
Q

What are Variational autoencoders?

A

Variational autoencoders (VAEs) are a type of generative model that learns to represent data in a latent space: an encoder maps each input to a distribution over latent variables, and a decoder reconstructs data from samples of that distribution, so new data can be generated by sampling the latent space.
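Below is a compact PyTorch sketch of a VAE, assuming flat inputs such as flattened 28x28 images; the layer sizes and latent dimension are illustrative assumptions. It shows the three signature pieces: an encoder producing a mean and log-variance, the reparameterization trick, and the ELBO loss (reconstruction plus KL divergence):

  import torch
  from torch import nn

  class VAE(nn.Module):
      """Minimal VAE sketch for flat inputs (e.g., flattened 28x28 images)."""
      def __init__(self, in_dim=784, hidden=256, latent=2):
          super().__init__()
          self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
          self.to_mu = nn.Linear(hidden, latent)       # mean of q(z|x)
          self.to_logvar = nn.Linear(hidden, latent)   # log-variance of q(z|x)
          self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                       nn.Linear(hidden, in_dim), nn.Sigmoid())

      def forward(self, x):
          h = self.encoder(x)
          mu, logvar = self.to_mu(h), self.to_logvar(h)
          z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
          return self.decoder(z), mu, logvar

  def vae_loss(x, recon, mu, logvar):
      # ELBO: reconstruction term plus KL divergence between q(z|x) and N(0, I)
      recon_loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
      kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
      return recon_loss + kl

  model = VAE()
  x = torch.rand(16, 784)                  # a toy batch of "images" in [0, 1]
  recon, mu, logvar = model(x)
  print(vae_loss(x, recon, mu, logvar))    # minimize this with an optimizer to train

After training, sampling z from a standard normal distribution and passing it through the decoder produces new, synthetic examples.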

15
Q

What is the difference between Narrow AI and General AI?

A

An important distinction in the field of AI is between “narrow AI” and “general AI”. Narrow AI is defined as “a machine-based system designed to address a specific problem (such as playing Go or chess)” (Kiron 2017). By contrast, general AI refers to machines with the ability to solve many different types of problems on their own, as humans can. To date, all applications of AI are examples of narrow AI. Although general AI is currently a hot research topic, it is still likely decades away from true realization.
