DL-09 - Generative models Flashcards

1
Q

DL-09 - Generative models

What are the types of generative models mentioned in the lecture slides? (2)

A
  • Variational autoencoders
  • Generative adversarial networks
2
Q

DL-09 - Generative models

What is VAE short for?

A

Variational autoencoder

3
Q

DL-09 - Generative models

What is GAN short for?

A

Generative Adversarial Network

4
Q

DL-09 - Generative models

What is a loss function you might use in an autoencoder?

A

MSE

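The MSE answer above can be sketched numerically (toy values, not from the lecture):

```python
import numpy as np

# MSE reconstruction loss between an input x and the
# autoencoder's reconstruction x_hat (illustrative toy values).
x = np.array([0.0, 1.0, 2.0])
x_hat = np.array([0.0, 1.0, 1.0])
mse = np.mean((x - x_hat) ** 2)  # mean of squared element-wise errors
```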
5
Q

DL-09 - Generative models

What is a key consideration when building an autoencoder related to the latent space?

A

The dimensionality of the latent space.

6
Q

DL-09 - Generative models

The dimensionality of the latent space impacts what in an autoencoder?

A

The quality of the results, with smaller latent spaces typically leading to poorer outcomes.

7
Q

DL-09 - Generative models

What are the two main approaches for implementing decoders in autoencoders? (2)

A
  • Traditional techniques like k-NN (Average value of nearest pixels) or bilinear interpolation
  • Transposed convolution
8
Q

DL-09 - Generative models

Autoencoders typically use _______ for implementing decoders. (1)

A

Transposed Convolution

9
Q

DL-09 - Generative models

What is a transposed convolution?

A

An operation that upsamples (upscales) an image or feature map; the reverse of a convolution's downsampling.

10
Q

DL-09 - Generative models

How do you perform a transposed convolution?

A

1) Multiply each input value by the kernel.
2) Position each scaled kernel inside a larger output tensor, offset by the stride.
3) Sum up all the overlapping values.

(See image)

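The three steps above can be sketched in NumPy (a minimal illustration, not the lecture's implementation; the input and kernel values are made up):

```python
import numpy as np

def transposed_conv2d(x, k, stride=1):
    """Transposed convolution via the three steps:
    1) multiply each input value by the kernel,
    2) place the scaled kernel in a larger output tensor
       at an offset given by the stride,
    3) sum overlapping contributions."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((stride * (H - 1) + kh, stride * (W - 1) + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * k
    return out

x = np.array([[0.0, 1.0], [2.0, 3.0]])
k = np.array([[0.0, 1.0], [2.0, 3.0]])
y = transposed_conv2d(x, k)
```

With stride 1 this maps a 2x2 input to a 3x3 output; where the scaled kernels overlap, their values are summed.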
11
Q

DL-09 - Generative models

What is the main difference between AEs and VAEs?

A
  • AEs are deterministic: the same input always produces the same output.
  • VAEs are probabilistic (stochastic): sampling lets them generate new outputs.
12
Q

DL-09 - Generative models

What is the architecture for a VAE?

A

(See image)

13
Q

DL-09 - Generative models

What model is in the image? (See image)

A

A variational autoencoder.

14
Q

DL-09 - Generative models

Which format does VAE use to describe each latent attribute?

A

VAE uses probability distributions.

15
Q

DL-09 - Generative models

How does VAE generate a vector for the decoder model?

A

VAE randomly samples from each latent state distribution.

16
Q

DL-09 - Generative models

What does an ideal autoencoder learn?

A

Descriptive attributes of input data in a compressed representation.

E.g. from a face:
- Smile
- Gender
- Beard

17
Q

DL-09 - Generative models

Describe visually how VAEs represent latent attributes.

A

(See image)

18
Q

DL-09 - Generative models

What is a latent attribute?

A

A latent attribute is a hidden descriptive feature of the data, such as gender, emotion, or skin tone in facial images.

19
Q

DL-09 - Generative models

Describe how we generate new data with a VAE.

A
  • Sample latent attributes.
  • Send to decoder.
  • Decoder generates output.

(See image)

20
Q

DL-09 - Generative models

What should happen for values that are close in latent space?

A

They should produce very similar reconstructions.

21
Q

DL-09 - Generative models

How can we achieve an interpretation of what a VAE network is learning?

A

By perturbing one latent variable while keeping all other variables fixed.

22
Q

DL-09 - Generative models

What is an approach to encourage independence of latent features in VAEs?

A

Applying independent component analysis (ICA) to the encoder output.

23
Q

DL-09 - Generative models

Why do we want features to be uncorrelated?

A

To learn the richest and most compact representation possible.

24
Q

DL-09 - Generative models

If your representations are rich and compact, what feature do we need in the latent space?

A

We want features to be uncorrelated.

25
Q

DL-09 - Generative models

What terms does the VAE loss consist of?

A

Reconstruction loss + KL divergence term
(See image)

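A minimal sketch of the two-term loss above, assuming an MSE reconstruction term and the usual closed-form KL divergence to a standard normal prior (mu and logvar stand in for encoder outputs; all values are illustrative):

```python
import numpy as np

# VAE loss for one example: reconstruction term plus KL term.
# mu/logvar are assumed to be the encoder's outputs for this input.
def vae_loss(x, x_hat, mu, logvar):
    recon = np.sum((x - x_hat) ** 2)                          # reconstruction loss
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))   # KL to N(0, 1)
    return recon + kl
```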
26
Q

DL-09 - Generative models

What is KL divergence short for?

A

Kullback-Leibler divergence

27
Q

DL-09 - Generative models

What is KL divergence?

A

KL divergence is a measure of how one probability distribution differs from another.

28
Q

DL-09 - Generative models

How do you compute KL divergence for VAE loss? (See image)

A

By fixing a prior distribution p(z) based on some initial hypothesis or guess; the KL term then measures how far the learned posterior q(z|x) is from this prior.

(See image)

29
Q

DL-09 - Generative models

What is the general formula for KL divergence?

A

(See image)

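For discrete distributions, the general formula is commonly written D_KL(p || q) = sum_x p(x) log(p(x) / q(x)); a small sketch (toy distributions, not from the lecture):

```python
import numpy as np

# Discrete KL divergence: D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)).
# Assumes p and q are valid distributions with no zero entries in q.
def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))
```

It is zero when the distributions are identical and positive otherwise.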
30
Q

DL-09 - Generative models

What formula is this?

A

(See image)

31
Q

DL-09 - Generative models

What prior distribution do we assume for p(z) in VAE?

A

A standard normal (Gaussian) distribution with mean 0 and standard deviation 1.

32
Q

DL-09 - Generative models

What is the actual formula we use for KL divergence in a VAE?

A

(See image)
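Assuming the standard setup (diagonal Gaussian posterior, standard normal prior), the closed-form expression is commonly written as -1/2 * sum(1 + log(sigma^2) - mu^2 - sigma^2); a sketch:

```python
import numpy as np

# Closed-form KL divergence between the encoder's diagonal Gaussian
# q(z|x) = N(mu, sigma^2) and the standard normal prior p(z) = N(0, 1).
# logvar = log(sigma^2), as typically output by the encoder.
def kl_to_standard_normal(mu, logvar):
    return float(-0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar)))
```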

33
Q

DL-09 - Generative models

Why would it be difficult to backprop in VAEs?

A

We cannot backpropagate gradients through a sampling layer, because sampling is stochastic and therefore not differentiable. (See image)

34
Q

DL-09 - Generative models

What trick do we use to make backprop work in VAEs?

A

Reparameterization.

(See image)

35
Q

DL-09 - Generative models

Describe how the reparameterization trick works for VAEs.

A

(See image)
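A minimal NumPy sketch of the trick (illustrative, not the lecture's code): sample eps from N(0, 1) and compute z = mu + sigma * eps, so gradients can flow through mu and sigma while all the randomness is isolated in eps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization: instead of sampling z ~ N(mu, sigma^2) directly
# (not differentiable), sample eps ~ N(0, 1) and shift/scale it.
# Gradients can then flow through mu and sigma; eps carries the noise.
def reparameterize(mu, logvar):
    sigma = np.exp(0.5 * logvar)           # logvar = log(sigma^2)
    eps = rng.standard_normal(mu.shape)    # fixed-distribution noise
    return mu + sigma * eps
```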

36
Q

DL-09 - Generative models

How can VAEs be used to uncover bias in a dataset?

A

(see image)

37
Q

DL-09 - Generative models

How can we use VAEs to automatically debias data?

A

(See image)

38
Q

DL-09 - Generative models

How can (V)AEs denoise images?

A

Train a VAE with noisy inputs and predict clean output.

39
Q

DL-09 - Generative models

How can (V)AEs be used for anomaly detection?

A

Measure reconstruction loss. If above some threshold based on training data, assume outlier.
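The thresholding idea above can be sketched as follows (the `reconstruct` function and the toy model are hypothetical stand-ins for a trained (V)AE; the threshold would come from reconstruction errors on training data):

```python
import numpy as np

# Flag an input as an outlier when its reconstruction error
# exceeds a threshold derived from the training data.
def is_anomaly(x, reconstruct, threshold):
    error = np.mean((x - reconstruct(x)) ** 2)
    return error > threshold

# Toy "model" that reconstructs values in [0, 1] well but fails outside.
model = lambda x: np.clip(x, 0.0, 1.0)
```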

40
Q

DL-09 - Generative models

What is the main idea behind GAN?

A

The back-and-forth competition between the discriminator and the generator (forger).

41
Q

DL-09 - Generative models

Which two components compete in a GAN?

A

The generator (forger) and the discriminator.

42
Q

DL-09 - Generative models

What is the generator’s role in a GAN?

A

The generator learns to create plausible data from noise/random input.

43
Q

DL-09 - Generative models

What is the discriminator’s role in a GAN?

A

The discriminator learns to distinguish between fake data produced by the generator and real data.

44
Q

DL-09 - Generative models

How do GANs improve their performance?

A

By training both models simultaneously through an adversarial process.

45
Q

DL-09 - Generative models

What is the goal of a generative network in a GAN?

A

To produce data that are indistinguishable from real data, e.g. images.

46
Q

DL-09 - Generative models

What input does a generator work on?

A

Randomly sampled noise.

47
Q

DL-09 - Generative models

Describe how backprop works in the discriminator.

A

(See image)

48
Q

DL-09 - Generative models

Describe how backprop works in the generator.

A

(See image)

49
Q

DL-09 - Generative models

How does the GAN training process go?

A

In alternating periods.
- The discriminator trains for 1+ epochs.
- Then the generator trains for 1+ epochs.
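The alternating schedule above can be sketched with dummy update phases (illustrative only; real training replaces the log entries with gradient steps on each network while the other is frozen):

```python
# Hypothetical sketch of the alternating GAN training schedule.
def train_gan(n_rounds, d_epochs=1, g_epochs=1):
    log = []
    for _ in range(n_rounds):
        for _ in range(d_epochs):   # discriminator phase: generator frozen
            log.append("D")
        for _ in range(g_epochs):   # generator phase: discriminator frozen
            log.append("G")
    return log
```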

50
Q

DL-09 - Generative models

What are some commonly used loss functions for GANs? (2)

A
  • Min-max loss
  • Wasserstein
51
Q

DL-09 - Generative models

What’s the formula for min-max loss?

A

(See image)

52
Q

DL-09 - Generative models

In min-max loss, what are the different terms?

A
53
Q

DL-09 - Generative models

What roles do the discriminator/generator have in min-max loss? I.e. how do they impact it? (2)

A
  • Generator: minimize
  • Discriminator: maximize
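A toy evaluation of the min-max value V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] on made-up discriminator scores (the generator minimizes this quantity, the discriminator maximizes it):

```python
import numpy as np

# d_real: discriminator scores on real data, d_fake: scores on generated
# data (both in (0, 1)). Averages stand in for the expectations.
def minimax_value(d_real, d_fake):
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1 - d_fake)))
```

A confident discriminator (high scores on real, low on fake) yields a larger value than one the generator has fooled.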
54
Q

DL-09 - Generative models

What are some common issues with min-max loss? (4) (VMUC)

A
  • vanishing gradients
  • mode collapse
  • unbalanced updates
  • convergence
55
Q

DL-09 - Generative models

In min-max loss, when does the problem of vanishing gradients occur?

A

If the discriminator is too good.

56
Q

DL-09 - Generative models

In min-max loss, what is mode collapse?

A

A situation where the discriminator is too effective, causing the generator to focus on producing only a few types of outputs rather than a diverse range.

57
Q

DL-09 - Generative models

In min-max loss, what is the problem of unbalanced updates?

A

Min-max loss requires the generator and discriminator to be trained alternately, which can lead to unbalanced updates where one dominates the other, causing instability.

58
Q

DL-09 - Generative models

What does the generator do when mode collapse occurs in min-max loss?

A

The generator produces only a few types of outputs instead of a diverse range.

59
Q

DL-09 - Generative models

In min-max loss, what can lead to unbalanced updates and instability?

A

Training the generator and discriminator alternately.

60
Q

DL-09 - Generative models

In min-max loss, What is the problem of convergence?

A

As the generator improves, the discriminator performs worse (approaching random guessing), so its feedback becomes effectively random; training on that feedback degrades the generator's quality.

61
Q

DL-09 - Generative models

What is WGAN short for?

A

Wasserstein GAN

62
Q

DL-09 - Generative models

What is the difference between Wasserstein GANs and normal GANs?

A

Wasserstein GANs (WGANs) use a critic network, which outputs real values instead of the discriminator's binary real/fake classification.

63
Q

DL-09 - Generative models

How are Wasserstein GANs different from a GAN with min-max loss?

A
  • Discriminator (min-max GAN): outputs a binary real/fake classification.
  • Critic network (WGAN): outputs a real-valued score of how realistic the input looks (not a probability).
64
Q

DL-09 - Generative models

What components does the Wasserstein loss consist of? (2)

A
  • Critic loss
  • Generator loss
65
Q

DL-09 - Generative models

What’s the formula for WGAN’s critic loss?

A

D(x) - D(G(z))
The critic tries to maximize this.

D(x): the critic's score for a real instance
D(G(z)): the critic's score for a fake (generated) instance

66
Q

DL-09 - Generative models

What’s the formula for WGAN’s generator loss?

A

D(G(z)) - the generator tries to maximize the critic's score for a fake instance.
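The two WGAN objectives from the cards above can be sketched with toy critic scores (illustrative values only; averages stand in for expectations):

```python
import numpy as np

# Critic maximizes D(x) - D(G(z)): score real data high, fake data low.
def critic_objective(d_real, d_fake):
    return float(np.mean(d_real) - np.mean(d_fake))

# Generator maximizes D(G(z)): the critic's score for its fakes.
def generator_objective(d_fake):
    return float(np.mean(d_fake))
```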

67
Q

DL-09 - Generative models

What are some benefits of using WGANs? (2)

A

Less vulnerable to:
- Vanishing gradient
- Mode collapse

68
Q

DL-09 - Generative models

What is a progressive GAN?

A

A GAN that starts with a small network producing low-resolution images, then progressively adds layers over time to increase the resolution.

69
Q

DL-09 - Generative models

What are some benefits of using a progressive GAN? (2)

A
  • Faster training
  • Higher resolution outputs
70
Q

DL-09 - Generative models

What is DCGAN short for?

A

Deep Convolutional GAN

71
Q

DL-09 - Generative models

Explain what an Image-to-image translation GAN is.

A

Takes an image as input and maps it to a generated output image with different properties. (See image)

72
Q

DL-09 - Generative models

Describe what a Cycle GAN is.

A

CycleGANs learn to transform images from one set (domain) into images that could plausibly belong to another. (See image)

73
Q

DL-09 - Generative models

What is a Super-resolution GAN?

A

A GAN that turns a blurry, low-resolution input image into a sharper, higher-resolution output. (See image)

74
Q

DL-09 - Generative models

What is Face inpainting?

A

Reconstructing missing or masked regions of a face image with plausible content. (See image)