IFN580 Week 10 Generative AI Flashcards
(16 cards)
What does the ‘generative’ in Generative Models refer to?
The ability to create new content
How does generative AI relate to deep learning?
Generative AI often uses deep learning models
What are Discriminative models, and how do they differ from Generative models?
Discriminative models learn decision boundaries
Generative models learn data distributions to generate new data
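A minimal sketch of the contrast, assuming scikit-learn ≥ 1.0: LogisticRegression learns a decision boundary (discriminative), while GaussianNB fits per-class feature distributions (generative) that can be sampled to produce new data. The toy blobs and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two Gaussian blobs as toy training data
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

disc = LogisticRegression().fit(X, y)  # learns p(y | x) directly
gen = GaussianNB().fit(X, y)           # learns p(x | y) and p(y)

# The generative model's fitted class distributions can be sampled
# to create new class-1 points; the discriminative model cannot.
new_samples = rng.normal(gen.theta_[1], np.sqrt(gen.var_[1]), (5, 2))
print(disc.predict(X[:3]), gen.predict(X[:3]), new_samples.shape)
```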
What are the two main components of a GAN (Generative Adversarial Network)?
Generator (G): Tries to create realistic fake data.
Discriminator (D): Tries to distinguish real from fake data.
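A minimal PyTorch sketch of the two components for flattened 28×28 images; the layer sizes and `latent_dim` are illustrative assumptions, not part of the flashcards.

```python
import torch.nn as nn

latent_dim = 64  # size of the noise vector fed to G (assumed)

generator = nn.Sequential(      # G: noise vector -> fake sample
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

discriminator = nn.Sequential(  # D: sample -> probability it is real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
```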
What is the Generator in a GAN?
It creates fake data
What is the Discriminator in a GAN?
It tries to distinguish real data from fake data
How does GAN training work?
The two models are trained adversarially: the Generator tries to fool the Discriminator with fake data, while the Discriminator learns to tell real from fake. Each one improving forces the other to improve.
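A sketch of one adversarial training step, assuming the `generator`, `discriminator`, and `latent_dim` from the sketch above and a batch of real data `real` of shape (batch, 28*28):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

def train_step(real):
    batch = real.size(0)
    noise = torch.randn(batch, latent_dim)  # noise input to G
    fake = generator(noise)

    # 1) Train D: label real samples 1 and fake samples 0
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train G: try to make D label the fakes as real
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```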
What is a Variational Autoencoder (VAE)?
It encodes input data into a probabilistic latent space, which allows it to generate new samples
How does the VAE process work?
Encoder: maps input to a probability distribution
Latent Space: introduces stochasticity for generation
Decoder: reconstructs input from latent space
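A minimal PyTorch sketch of the three stages; the layer sizes are assumptions. The stochasticity of the latent space comes from the reparameterization trick, and new samples are generated by decoding points drawn from the prior N(0, I).

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)         # Encoder body
        self.mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(                 # Decoder
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Latent space: sample z = mu + sigma * eps (reparameterization)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

# Generation: decode a point sampled from the prior N(0, I)
vae = TinyVAE()
sample = vae.dec(torch.randn(1, 16))
```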
How do generative models work?
They learn patterns from training data; when given a prompt, they predict what comes next.
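A toy illustration of the "predict what comes next" idea, using bigram counts as a stand-in for a trained neural network (real models learn far richer patterns, but the principle is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):  # "training": count patterns
    counts[prev][nxt] += 1

# Given the token "the", predict the most likely next token
print(counts["the"].most_common(1))  # [('cat', 2)]
```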
What are Tokens in transformer models?
Chunks of words or pixels that have been broken down from a larger input (e.g., a sentence).
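A toy illustration of tokenization: splitting a sentence into word-level tokens and mapping each to an integer ID. Real transformer tokenizers typically use subword schemes such as BPE, so this is only a sketch of the idea.

```python
sentence = "Generative models create new content"
tokens = sentence.lower().split()          # chunks of the input
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
print(tokens)
print([vocab[t] for t in tokens])          # integer IDs fed to the model
```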
What is Single-head vs Multi-head attention in Transformers?
Single-head: One attention mechanism.
Multi-head: Multiple attention mechanisms (heads) run in parallel, each focusing on different parts of the sequence.
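A minimal sketch of scaled dot-product attention, run once as a single head and then as several heads in parallel. Shapes are illustrative assumptions, and the learned per-head projection matrices of a real transformer are omitted for brevity.

```python
import torch

def attention(q, k, v):
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v

seq_len, d_model, n_heads = 5, 16, 4
x = torch.randn(seq_len, d_model)

single = attention(x, x, x)  # one attention mechanism over the sequence

# Multi-head: split the model dimension into n_heads smaller heads,
# attend in each independently, then concatenate the results.
heads = x.view(seq_len, n_heads, d_model // n_heads).transpose(0, 1)
multi = attention(heads, heads, heads)       # (n_heads, seq_len, d_head)
multi = multi.transpose(0, 1).reshape(seq_len, d_model)
print(single.shape, multi.shape)
```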
What are the 3 types of Transformer architectures and their usage?
Encoder-only (e.g., BERT): Best for understanding tasks like classification.
Decoder-only (e.g., GPT): Best for text generation.
Encoder–Decoder (e.g., T5, BART): Best for translation or text summarization.
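A sketch of trying one model from each family via Hugging Face `transformers` pipelines; the model names are assumed to be publicly available Hub checkpoints, and the weights download on first use.

```python
from transformers import pipeline

# Encoder-only (BERT family): understanding tasks like classification
classify = pipeline("sentiment-analysis")
print(classify("Generative AI is fascinating"))

# Decoder-only (GPT): text generation
generate = pipeline("text-generation", model="gpt2")
print(generate("Generative models", max_new_tokens=10))

# Encoder-decoder (BART): summarization
summarize = pipeline("summarization", model="facebook/bart-large-cnn")
text = ("Generative models learn the distribution of their training data "
        "and can then produce new samples such as text, images, or audio.")
print(summarize(text, max_length=20, min_length=5))
```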
Generative AI uses a statistical model to predict a ______________ for a prompt
response
GANs use samples of ___ as the input to the generator.
Noise
How do variational autoencoders (VAEs) differ from traditional autoencoders?
They encode inputs as probability distributions rather than fixed points, so they can sample from the latent space to generate new data.