Variational Autoencoders (VAEs) Flashcards

1
Q

Variational Autoencoders (VAEs)

A

Variational Autoencoders (VAEs) are a type of generative model that has been highly influential in machine learning. They are fundamentally autoencoders, but with added constraints on the learned encoded representations. In short, VAEs combine principles of traditional autoencoders with probabilistic modeling, offering a robust and flexible framework for learning representations of complex data distributions.

2
Q
  1. Definition
A

Variational Autoencoders (VAEs) are a type of autoencoder: a neural network trained to encode high-dimensional input data into a lower-dimensional representation. What makes VAEs unique is their use of probabilistic encoding and decoding, which provides a principled framework for learning data distributions.

3
Q
  2. Architecture
A

The architecture of a VAE, like a traditional autoencoder, is composed of two main parts: an encoder and a decoder. The encoder compresses the input into a latent space representation, and the decoder reconstructs the input from this representation. However, in a VAE, the encoder outputs parameters of a probability distribution instead of a fixed value.
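This two-part structure can be sketched in plain numpy. The layer sizes, random weights, and activations below are made-up illustrations, not a reference implementation; the point is only that the encoder ends in distribution parameters rather than a single code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
x_dim, h_dim, z_dim = 8, 16, 2

# Encoder weights: input -> hidden -> (mean, log-variance) of the latent distribution.
W_enc = rng.normal(scale=0.1, size=(x_dim, h_dim))
W_mu = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_logvar = rng.normal(scale=0.1, size=(h_dim, z_dim))

# Decoder weights: latent sample -> hidden -> reconstruction.
W_dec = rng.normal(scale=0.1, size=(z_dim, h_dim))
W_out = rng.normal(scale=0.1, size=(h_dim, x_dim))

def encode(x):
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar  # distribution parameters, not a point

def decode(z):
    h = np.tanh(z @ W_dec)
    return 1 / (1 + np.exp(-(h @ W_out)))  # sigmoid keeps outputs in (0, 1)

x = rng.random((1, x_dim))
mu, log_var = encode(x)
x_hat = decode(mu)            # decode the mean as a quick sanity check
print(mu.shape, x_hat.shape)  # (1, 2) (1, 8)
```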

4
Q
  3. Probabilistic Encoding
A

Instead of encoding an input as a single point, VAEs encode inputs as distributions over the latent space. The encoder outputs the parameters (mean and standard deviation) of this distribution.
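A tiny demonstration of the idea, using hypothetical encoder outputs (in practice a network produces these, and implementations typically emit log-variance rather than standard deviation for numerical stability): the same input maps to a distribution, so repeated draws give nearby but different latent points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoder outputs for one input: a 3-dimensional latent Gaussian.
mu = np.array([0.5, -1.0, 2.0])    # mean of the latent distribution
sigma = np.array([0.1, 0.2, 0.3])  # standard deviation (always positive)

# Two draws for the *same* input land at different latent points.
z1 = rng.normal(mu, sigma)
z2 = rng.normal(mu, sigma)
print(np.allclose(z1, z2))  # False: a distribution, not a fixed code
```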

5
Q
  4. Reparameterization Trick
A

To enable backpropagation for learning, VAEs use the “reparameterization trick”. This involves sampling an epsilon from a standard normal distribution, then scaling it by the standard deviation and shifting it by the mean obtained from the encoder. This scale-and-shift operation yields a sample from the latent distribution while keeping the sampling step differentiable with respect to the encoder’s outputs.
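The trick is one line of arithmetic. The batch of encoder outputs below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for a batch of 4 inputs, 2-D latent space.
mu = rng.normal(size=(4, 2))
log_var = rng.normal(size=(4, 2))

# Reparameterization: sample eps ~ N(0, I), then scale and shift.
# z = mu + sigma * eps is differentiable w.r.t. mu and log_var,
# because all the randomness lives in eps.
eps = rng.standard_normal(mu.shape)
sigma = np.exp(0.5 * log_var)  # log-variance -> standard deviation
z = mu + sigma * eps
```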

6
Q
  5. Decoder
A

The decoder of a VAE takes a sample from the output distribution of the encoder and decodes it back to a reconstruction of the original input.

7
Q
  6. Loss Function
A

The loss function of a VAE has two components: the reconstruction loss and the KL divergence loss. The reconstruction loss measures how well the VAE reconstructs the original input, and the KL divergence loss measures how much the learned latent distribution deviates from a prior (usually standard normal) distribution.
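Both terms can be computed directly. The inputs below are made-up toy values; mean squared error stands in for the reconstruction term (binary cross-entropy is also common for pixel data), and the KL divergence uses its closed form for a diagonal Gaussian against a standard normal prior:

```python
import numpy as np

# Hypothetical per-example quantities, 2-D latent space.
x = np.array([0.0, 1.0, 1.0, 0.0])        # original input
x_hat = np.array([0.1, 0.9, 0.8, 0.2])    # decoder reconstruction
mu = np.array([0.3, -0.2])                # encoder mean
log_var = np.array([-0.1, 0.2])           # encoder log-variance

# Reconstruction term: how far the reconstruction is from the input.
recon_loss = np.mean((x - x_hat) ** 2)

# KL divergence between N(mu, sigma^2) and the standard normal prior,
# in closed form: -0.5 * sum(1 + log_var - mu^2 - exp(log_var)).
kl_loss = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

total_loss = recon_loss + kl_loss
```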

8
Q
  7. Applications
A

VAEs are generative models, and thus can be used to generate new data that resembles the training data. This is useful in various applications, including image synthesis, anomaly detection, denoising, and more.

9
Q
  8. Strengths and Weaknesses
A

VAEs are powerful models for learning complex data distributions and generating new data. They also provide an interpretable latent space where directions encode meaningful variations of the data. However, they tend to generate blurrier images compared to other generative models like Generative Adversarial Networks (GANs).
