12: Adversarial Examples Flashcards

1
Q

Adversarial Examples

A
  • input created by applying a small but intentional change to an existing image
  • -> wrong classification (the new input looks as close to the original as possible but gets misclassified)
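A compact way to state this (standard notation, not from the card itself): an adversarial example is x_adv = x + delta with a perturbation so small that ||delta|| <= eps, yet the classifier flips its decision, f(x_adv) != f(x), even though x_adv and x look identical to a human.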
2
Q

Adversarial Examples - Goal

A
  • Make the model robust: the model misclassifies fewer inputs (adversarial examples usually differ only slightly from correctly classified ones)
  • make applications safer by finding the “limits” of the classifier's inputs and the fake images that can influence it
3
Q

Poisoning

A
  • injection of adversarial data into the training data -> decreased performance
  • defense goal: the model performs its task (with similar accuracy) under attack
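A toy sketch of one simple poisoning scheme, label flipping on a small fraction of the training pairs (the function name and the 5% fraction are illustrative assumptions; real poisoning attacks are usually subtler):

    import random

    def poison_labels(dataset, num_classes, fraction=0.05, seed=0):
        """Flip the label of a random fraction of (x, y) training pairs."""
        rng = random.Random(seed)
        poisoned = []
        for x, y in dataset:
            if rng.random() < fraction:
                # replace the true label with a random wrong one
                y = rng.choice([c for c in range(num_classes) if c != y])
            poisoned.append((x, y))
        return poisoned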
4
Q

Adversarial Attack

A
  • deliberate attempt to fool a model by introducing carefully crafted modifications to the input data
  • goal: cause the model to make incorrect predictions or classifications
5
Q

Adversarial Attack - Form

A
  • Untargeted adversarial attack: the attacker cannot control the output label of the adversarial image
  • Targeted adversarial attack: the attacker can choose the output label
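The same distinction as optimization problems (a common formulation, added here for clarity): an untargeted attack searches for a perturbation delta with ||delta|| <= eps that maximizes loss(f(x + delta), y_true), while a targeted attack searches for one that minimizes loss(f(x + delta), y_target) for an attacker-chosen label y_target.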
6
Q

Adversarial Attack - Types

A
  • White box: attacker has access to the training method (data/algorithm/hyperparameters): small perturbations -> bad performance
  • Black box: attacker does not have complete access
7
Q

Generation Method of Adversarial Examples

A

Fast Gradient Sign Method
- Input image -> CNN -> prediction
- Compute the loss of the prediction based on the true label
- Calculate the gradient of the loss with respect to the input image
- Take the sign of the gradient, scale it by epsilon, and add it to the image to create the adversarial sample (output): new_pixel = pixel + eps * sign(gradient); see the sketch below
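A minimal PyTorch sketch of these four steps (assuming a pretrained classifier model and a correctly labeled input batch; all names are illustrative):

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, eps):
        """One-step Fast Gradient Sign Method attack."""
        image = image.clone().detach().requires_grad_(True)
        output = model(image)                  # input image -> model -> prediction
        loss = F.cross_entropy(output, label)  # loss w.r.t. the true label
        loss.backward()                        # gradient of loss w.r.t. the input
        # new_pixel = pixel + eps * sign(gradient)
        adv = image + eps * image.grad.sign()
        return adv.clamp(0, 1).detach()        # keep pixel values in a valid range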

8
Q

Adversarial Training as Defense Tactic

A
  • Reactive Strategy: two different models -> inefficient (2x the infrastructure needed)
  • Proactive Strategy: use adversarial examples in the training of the model -> it learns to classify those -> better performance overall (see the sketch below)
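A sketch of one proactive training step, mixing FGSM examples into the batch (reuses the fgsm_attack helper sketched under the FGSM card; model, optimizer and eps=0.03 are placeholder assumptions):

    import torch.nn.functional as F

    def adversarial_training_step(model, x, y, optimizer, eps=0.03):
        x_adv = fgsm_attack(model, x, y, eps)  # craft adversarial copies of the batch
        optimizer.zero_grad()                  # clear grads left over from the attack
        # train on clean and adversarial inputs so the model learns to classify both
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()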
9
Q

Generative Adversarial Network (GAN)

A
  • composed of 2 NN
    1. Generator: takes random noise as input & creates fake images
    2. Discriminator: takes a fake image from the generator or a real one from the training set & guesses whether it is fake or real
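A minimal PyTorch sketch of the two networks (the layer sizes and 28x28 grayscale images are illustrative assumptions):

    import torch.nn as nn

    # 1. Generator: random noise in -> fake image out
    generator = nn.Sequential(
        nn.Linear(100, 256), nn.ReLU(),
        nn.Linear(256, 28 * 28), nn.Tanh(),   # pixel values in [-1, 1]
    )

    # 2. Discriminator: image in -> probability that it is real
    discriminator = nn.Sequential(
        nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )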
10
Q

GAN - Training

A
  • Discriminator Training: batch with real & fake images in training (binary cross-entropy loss)
  • Generator Training: produces another batch of fake images, which the discriminator judges; the generator is updated to fool it (no real images included!); see the sketch below
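One combined training step as a sketch (assumes the generator/discriminator pair from the previous card, flattened real images in real, and binary cross-entropy as on the card):

    import torch
    import torch.nn as nn

    bce = nn.BCELoss()

    def gan_training_step(generator, discriminator, real, opt_d, opt_g):
        ones = torch.ones(real.size(0), 1)    # target label "real"
        zeros = torch.zeros(real.size(0), 1)  # target label "fake"
        # discriminator step: one batch of real and one batch of fake images
        fake = generator(torch.randn(real.size(0), 100)).detach()  # no grad into G
        d_loss = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        # generator step: fresh fakes only, no real images included
        fake = generator(torch.randn(real.size(0), 100))
        g_loss = bce(discriminator(fake), ones)  # G is rewarded for fooling D
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()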
11
Q

GAN - Difficulties in Training

A
  • Mode collapse: generator outputs become less diverse
  • if G produces perfectly realistic images, the discriminator can only guess -> its feedback to G stops being informative
  • G & D constantly try to outsmart each other (parameters might become unstable)
12
Q

Deep Convolutional GAN (DCGAN)

A
  • GANs based on deeper convolutional nets for larger images
  • pooling layers replaced by strided convolutions in the discriminator
  • and by fractional-strided (transposed) convolutions in the generator
  • remove fully connected hidden layers for deeper architectures
  • BatchNormalization in D & G (except: G output & D input layer)
  • D: leaky ReLU activation for all layers
  • G: ReLU activation for all layers (except: output uses tanh); see the sketch below
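A sketch of a generator following these guidelines (the 32x32 RGB output size and channel counts are illustrative assumptions; input is noise of shape (N, 100, 1, 1)):

    import torch.nn as nn

    # fractional-strided (transposed) convolutions instead of pooling,
    # BatchNorm in every layer except the output, ReLU inside, tanh at the end,
    # and no fully connected hidden layers
    dcgan_generator = nn.Sequential(
        nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
        nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),  # 8x8
        nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # 16x16
        nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                          # 32x32 RGB
    )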
13
Q

Virtual adversarial training

A
  • extends adversarial training to the semi-supervised regime: perturbations are computed for unlabeled examples as well (see the sketch below)
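The idea in one formula (common formulation, sketched here because unlabeled examples have no true label): find the perturbation r_adv with ||r|| <= eps that changes the model's output distribution the most, r_adv = argmax_r KL(p(y|x) || p(y|x + r)), and minimize L_vat = KL(p(y|x) || p(y|x + r_adv)); since no label appears in the loss, it also works on unlabeled data.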