Generative Models Flashcards

Notes on Generative Models that might help with the exam.

1
Q

What are the two methods for implementing Generative Models?

A

Variational Autoencoders (VAEs)
Generative Adversarial Networks (GANs)

2
Q

What is the general structure of a VAE?

A

An encoder-decoder structure built from neural networks. The input passes through an Encoder, which maps it to a latent representation; that representation is then passed through a Decoder to produce the output.
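As a minimal sketch of this flow (toy linear maps stand in for trained networks, and the dimensions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear maps standing in for trained encoder/decoder networks
W_enc = rng.standard_normal((8, 2))   # input dim 8 -> latent dim 2
W_dec = rng.standard_normal((2, 8))   # latent dim 2 -> output dim 8

def encode(x):
    return x @ W_enc   # input -> latent representation

def decode(z):
    return z @ W_dec   # latent representation -> output

x = rng.standard_normal((1, 8))       # a single input example
x_hat = decode(encode(x))             # input -> Encoder -> latent -> Decoder -> output
```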

3
Q

Why is regularisation important in the context of VAEs?

A

Without it, the encoder-decoder model can overfit to the input data and reconstruct very accurate copies of the training examples, but it won’t be able to generate new examples.

4
Q

How is regularisation implemented in VAEs?

A

The encoder learns to map each input to the parameters of a Gaussian distribution - a mean vector and a covariance matrix - rather than to a single point in the latent space.
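A common way to implement this mapping is the reparameterisation trick - a sketch assuming a diagonal covariance, with the encoder outputting a mean vector `mu` and a log-variance vector `logvar` (both names are illustrative):

```python
import numpy as np

def reparameterize(mu, logvar, rng=np.random.default_rng(0)):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); writing the sample
    # this way keeps it differentiable with respect to mu and logvar
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps
```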

5
Q

What is the structure of the loss function used in VAEs?

A

Reconstruction term - Makes the reconstructed output as similar as possible to the input
Regularisation term - Makes the learned latent features from the encoder as close as possible to a Gaussian distribution.
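The two terms can be sketched as follows (assuming a diagonal Gaussian parameterised by `mu` and `logvar`, squared error as the reconstruction term, and a hypothetical weight `beta` on the regularisation term):

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Reconstruction term: squared error between input and output
    recon = np.sum((x - x_recon) ** 2)
    # Regularisation term: KL divergence from N(mu, sigma^2) to N(0, I)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + beta * kl
```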

6
Q

How does a VAE generate new objects/images?

A

They are generated by sampling a point from the latent Gaussian distribution (defined by the learned mean and covariance), and then passing that sample through the decoder.
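In code, generation reduces to drawing latent samples from the Gaussian and decoding them - a sketch with a stand-in `decoder` callable and made-up dimensions:

```python
import numpy as np

def generate(decoder, mu, cov, n_samples, rng=np.random.default_rng(0)):
    # Draw latent vectors from the Gaussian N(mu, cov)...
    z = rng.multivariate_normal(mu, cov, size=n_samples)
    # ...and decode them into new data instances
    return decoder(z)
```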

7
Q

What is the effect of larger regularisation in VAEs?

A

Larger regularisation increases the reconstruction error, so the reconstructed data is less realistic and more blurry.

8
Q

What are the advantages of VAEs?

A

Learned latent space is well constrained and easy to sample from

Easier to train compared to other generative models, and can work with a smaller training set

Broad application scenarios e.g. anomaly detection, data synthesis

9
Q

What is the disadvantage when using VAEs?

A

The generated images tend to be on the blurry side

10
Q

What are some primary applications for GANs?

A

Super-resolution
Image in-painting
Image synthesis
Text-to-image

11
Q

What is the general structure of GANs?

A

Generator - Learns to generate plausible instances that look realistic. These become negative training examples for the discriminator

Discriminator - Learns to distinguish the generator’s fake data from real data, which penalises the generator for producing unreal examples

Both parts are neural networks, which are trained iteratively.
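A minimal sketch of the two parts (toy one-layer networks with made-up dimensions; a real GAN would use deeper, trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

W_gen = rng.standard_normal((4, 8))   # noise dim 4 -> data dim 8
w_disc = rng.standard_normal(8)       # data dim 8 -> one score

def generator(z):
    # Map random noise to a fake data instance
    return np.tanh(z @ W_gen)

def discriminator(x):
    # Estimated probability that x is real, via a sigmoid
    return 1.0 / (1.0 + np.exp(-(x @ w_disc)))

z = rng.standard_normal((1, 4))       # random noise input
fake = generator(z)                   # fake instance fed to the Discriminator
p_real = discriminator(fake)          # a probability in (0, 1)
```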

12
Q

What is the Discriminator in regards to GANs?

A

The discriminator is a type of classifier, which could use any network architecture appropriate to the type of data it’s classifying

13
Q

What happens during the training stage for the Discriminator in regards to GANs?

A

The Discriminator classifies both real and fake data
The Discriminator loss penalises the Discriminator for misclassifying a real instance as fake or a fake instance as real
The Discriminator updates its weights through backpropagation from the Discriminator loss through the Discriminator network
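These steps can be sketched with the standard binary cross-entropy loss, assuming the Discriminator outputs probabilities, with real data labelled 1 and fake data labelled 0:

```python
import numpy as np

def discriminator_loss(p_real, p_fake, eps=1e-12):
    # Penalise scoring real data below 1...
    loss_real = -np.mean(np.log(p_real + eps))
    # ...and fake data above 0
    loss_fake = -np.mean(np.log(1.0 - p_fake + eps))
    return loss_real + loss_fake
```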

14
Q

What is the Generator, in regards to GANs?

A

The Generator part learns to create fake data by incorporating feedback from the Discriminator. It learns to make the Discriminator classify its outputs as real.

15
Q

What happens during the training stage for the Generator, in regards to GANs?

A

During training:
- Input random noise
- The noise goes through the Generator’s network to produce a data instance
- The Discriminator classifies the generated instance as real or fake
- Calculate the loss from the Discriminator’s classification, which penalises the Generator for failing to fool the Discriminator
- Backpropagate through both the Discriminator and the Generator to obtain gradients
- Use the gradients to update only the Generator’s weights, leaving the Discriminator untouched
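The Generator’s loss in these steps can be sketched in its common non-saturating form, which penalises fakes that the Discriminator scores close to 0:

```python
import numpy as np

def generator_loss(p_fake, eps=1e-12):
    # High loss when the Discriminator scores the fakes near 0
    # (the Generator failed to fool it); low loss when near 1
    return -np.mean(np.log(p_fake + eps))
```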

16
Q

What is the general training strategy for GANs?

A

- The Discriminator trains for one or more epochs
- The Generator trains for one or more epochs
- Alternate the two steps above to continue training both networks

As the Generator improves with training, the Discriminator’s performance gets worse; if the Generator succeeds perfectly, the Discriminator is left guessing at 50% accuracy. The Discriminator’s feedback therefore becomes less meaningful over time, which makes GAN training unstable.
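The alternating strategy can be sketched as a loop over two hypothetical per-network training-step functions:

```python
def train_gan(discriminator_step, generator_step, n_rounds,
              d_epochs=1, g_epochs=1):
    # Alternate: train the Discriminator, then the Generator
    history = []
    for _ in range(n_rounds):
        for _ in range(d_epochs):
            history.append(discriminator_step())
        for _ in range(g_epochs):
            history.append(generator_step())
    return history
```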

17
Q

What are some common problems with GANs?

A

They are tough to train:
- If the Discriminator behaves badly, the Generator does not receive accurate feedback
- If the Discriminator does too good a job, the gradient of the loss function drops close to zero, and learning becomes either slow or stalls completely

During training, the Generator may also collapse to a setting where it always produces a narrow range of similar outputs with low variety - known as Mode Collapse