Generative Adversarial Network Flashcards

Lecture 13

1
Q

What are the two components of an Autoencoder and their purpose?

A

The encoder and the decoder. The encoder takes the input and downsizes it to a compact latent representation. The decoder takes this latent representation and tries to reconstruct the original input as closely as possible.
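
A minimal sketch of this structure, assuming PyTorch and illustrative layer sizes (neither is specified in the lecture):

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            # Encoder: downsizes the input to a small latent code
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # Decoder: reconstructs the input from the latent code
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Training minimizes reconstruction error between output and input:
    # loss = nn.functional.mse_loss(model(x), x)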

2
Q

What is the difference between a variational autoencoder and a basic autoencoder

A

Instead of only learning to reconstruct images, a variational autoencoder learns the distribution of the training data. The encoder therefore outputs the parameters of a latent distribution (a mean and a variance) rather than a single code, and the decoder reconstructs from a sample z drawn from this distribution.
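
A sketch of what changes on the encoder side, again assuming PyTorch with illustrative layer sizes:

    import torch
    import torch.nn as nn

    class VAEEncoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
            self.logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)

        def forward(self, x):
            h = self.hidden(x)
            return self.mu(h), self.logvar(h)

    def reparameterize(mu, logvar):
        # z = mu + sigma * eps keeps the sampling step differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)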

3
Q

What are the two losses in a variational autoencoder?

A

The L2 reconstruction loss (how similar the output image is to the input image) and the KL divergence (how far the latent distribution of z is from the prior N(0, 1)).
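
A short sketch of both terms, assuming PyTorch and the closed-form KL between N(mu, sigma^2) and N(0, 1):

    import torch
    import torch.nn.functional as F

    def vae_loss(x_hat, x, mu, logvar):
        # Reconstruction term: L2 distance between output and input
        recon = F.mse_loss(x_hat, x, reduction="sum")
        # KL term: -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2),
        # the distance of q(z|x) from the prior N(0, 1)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl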

4
Q

What can you use a variational autoencoder for?

A

Compression (the latent code is a compact representation of the input) and generation (sampling z from the prior and decoding it yields new data).

5
Q

What are some of the problems of a variational autoencoder?

A

You cannot control which features of the training data are represented by the latent space z, nor where in z they are encoded.

6
Q

Explain the intuition behind a GAN

A

Two networks: a generator and a discriminator. The generator generates data (e.g. images), and its purpose is to “fool” the discriminator. The discriminator tries to distinguish the generator’s “fake” images from real images provided to it. Training the two against each other pushes the generator toward producing realistic samples.
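
A minimal sketch of one training step for this game, assuming PyTorch models G and D where D ends in a sigmoid; the BCE formulation below is the standard one, not necessarily the lecture’s exact variant:

    import torch
    import torch.nn.functional as F

    def gan_step(G, D, opt_G, opt_D, real, latent_dim=100):
        batch = real.size(0)
        ones = torch.ones(batch, 1)
        zeros = torch.zeros(batch, 1)

        # Discriminator: push real images toward 1, generated ones toward 0
        fake = G(torch.randn(batch, latent_dim))
        d_loss = (F.binary_cross_entropy(D(real), ones)
                  + F.binary_cross_entropy(D(fake.detach()), zeros))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Generator: try to "fool" D into outputting 1 for fakes
        g_loss = F.binary_cross_entropy(D(fake), ones)
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()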

7
Q

What is the problem of using KL loss function in GANs? What is the solution? Hint: WGAN - explain this

A

The KL loss function measures the “closeness” of two distributions, i.e. how similar they are, which depends directly on how much they overlap. However, if the two distributions do not overlap at all, the divergence saturates at a constant value, so its gradient is close to zero and all further training stalls.

The solution is the WGAN (Wasserstein GAN), which replaces this with a new loss function. It is also called the “earth mover’s distance”, since it measures the minimum cost of moving a pile of dirt to reshape it into another pile, where the piles of dirt are our probability distributions. Because this distance grows smoothly as the distributions move apart, it provides useful gradients even when they do not overlap.
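
A hedged sketch of the WGAN critic update, following the original paper’s weight clipping (the clip value 0.01 is the paper’s default, not from the lecture):

    import torch

    def critic_step(G, D, opt_D, real, latent_dim=100, clip=0.01):
        fake = G(torch.randn(real.size(0), latent_dim)).detach()
        # Estimate the Wasserstein distance as E[D(real)] - E[D(fake)]
        # and maximize it, i.e. minimize the negation
        loss = -(D(real).mean() - D(fake).mean())
        opt_D.zero_grad()
        loss.backward()
        opt_D.step()
        # Clip weights so the critic stays (approximately) Lipschitz
        for p in D.parameters():
            p.data.clamp_(-clip, clip)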

8
Q

Briefly explain the CycleGAN

A

Create two generators and two discriminators. One generator transforms from domain A to B while the other transforms from B to A (example: horse to zebra, zebra to horse), with one discriminator for each domain. A cycle-consistency loss is also added so that the original horse and the estimated horse obtained by mapping horse to zebra and back again are as close as possible.
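
A sketch of just the cycle-consistency term, assuming PyTorch generators G: A -> B and F: B -> A; the L1 distance and the weight 10.0 follow the CycleGAN paper, not necessarily the lecture:

    import torch
    import torch.nn.functional as F_nn

    def cycle_consistency_loss(G, F, real_a, real_b, weight=10.0):
        # horse -> zebra -> estimated horse should match the original horse
        rec_a = F(G(real_a))
        # zebra -> horse -> estimated zebra should match the original zebra
        rec_b = G(F(real_b))
        return weight * (F_nn.l1_loss(rec_a, real_a)
                         + F_nn.l1_loss(rec_b, real_b))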
