ML Security Flashcards

1
Q

How is AI different from ML?

A

AI is the broader field concerned with decision-making and intelligent behavior; ML is the subfield that learns how to perform tasks from data.

2
Q

What are some threats to ML models?

A
  1. Evasion attacks
  2. Data poisoning
  3. Membership inference
  4. Model stealing
3
Q

How is ML applied in security?

A

Used for tasks like malware detection, spam detection, intrusion detection, fraud detection, and cyber defence.

4
Q

What is an adversarial example in ML?

A

Slightly perturbed inputs designed to fool ML models into making incorrect predictions with high confidence.
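
For instance, a minimal sketch of crafting one with the fast gradient sign method (FGSM) in PyTorch; the model, inputs x, labels y, and epsilon here are illustrative assumptions:

  import torch
  import torch.nn.functional as F

  def fgsm_example(model, x, y, epsilon=0.03):
      # Perturb x in the direction that most increases the loss.
      x = x.clone().detach().requires_grad_(True)
      loss = F.cross_entropy(model(x), y)
      loss.backward()
      x_adv = x + epsilon * x.grad.sign()
      # Clamp to the valid input range so the change stays small.
      return x_adv.clamp(0.0, 1.0).detach()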

5
Q

What is a data poisoning attack?

A

Injecting malicious data into the training set to alter the model’s behavior, such as shifting decision boundaries or facilitating adversarial attacks.
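
A toy sketch of one such strategy, label flipping, in scikit-learn; the flip fraction and the victim classifier are illustrative assumptions:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  def flip_labels(y, fraction=0.1, seed=0):
      # Flip a fraction of binary labels {0, 1} to shift the learned boundary.
      rng = np.random.default_rng(seed)
      y_poisoned = y.copy()
      idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
      y_poisoned[idx] = 1 - y_poisoned[idx]
      return y_poisoned

  # The victim unknowingly trains on the corrupted labels:
  # clf = LogisticRegression().fit(X_train, flip_labels(y_train))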

6
Q

What is a membership inference attack?

A

An attack where an adversary determines if a specific data point was part of the model’s training data, compromising privacy.
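
A common baseline is a loss-threshold attack: training points tend to have lower loss than unseen points. A hedged PyTorch sketch (in practice the threshold would be calibrated, e.g. with shadow models):

  import torch
  import torch.nn.functional as F

  def is_member(model, x, y, threshold=0.5):
      # Low loss on (x, y) suggests the point was seen during training.
      with torch.no_grad():
          loss = F.cross_entropy(model(x), y)
      return loss.item() < threshold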

7
Q

What is model stealing?

A

An attack where an adversary replicates a model’s functionality by querying it and learning its behavior.
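
A minimal extraction sketch in scikit-learn; the query_victim function and the surrogate architecture are placeholder assumptions:

  from sklearn.neural_network import MLPClassifier

  def steal_model(query_victim, X_unlabeled):
      # Label inputs purely by querying the black-box victim,
      # then fit a local surrogate that mimics its behavior.
      y_stolen = query_victim(X_unlabeled)
      surrogate = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
      return surrogate.fit(X_unlabeled, y_stolen)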

8
Q

Why do security challenges require moving beyond I.I.D. assumptions?

A

Attackers can craft inputs that are not independent of, or identically distributed with, the training data, exploiting vulnerabilities outside the training distribution.

9
Q

What is adversarial training?

A

A defense technique where adversarial examples are included in the training process to improve the model’s robustness against such attacks.
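
A hedged sketch of a single adversarial training step in PyTorch, generating FGSM-style perturbations on the fly; epsilon and the loss are illustrative choices:

  import torch
  import torch.nn.functional as F

  def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
      # Generate adversarial examples from the current batch...
      x_pert = x.clone().detach().requires_grad_(True)
      F.cross_entropy(model(x_pert), y).backward()
      x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()
      # ...then update the model on the perturbed batch.
      optimizer.zero_grad()
      loss = F.cross_entropy(model(x_adv), y)
      loss.backward()
      optimizer.step()
      return loss.item()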

10
Q

Why is deep learning considered vulnerable?

A

It relies heavily on large datasets, which may not always be trustworthy, and is resource-intensive.

11
Q

What is a key challenge related to fairness in ML?

A

Ensuring the model is not biased against protected classes or groups.

12
Q

What is certified robustness in ML security?

A

Formally verified guarantees that a model’s prediction cannot be changed by any adversarial perturbation within a specified bound.

13
Q

What are some safety concerns with ML systems?

A

Issues include susceptibility to fake media (e.g. deepfakes) and ensuring fair treatment across demographic groups.

14
Q

Who are the main entities in an ML system?

A

Data providers, model trainers, model evaluators, and model users.

15
Q

What is the difference between discriminative and generative models in ML?

A
  • Discriminative: Predicts the output y given the input x, i.e., models P(y | x) (e.g., classifiers)
  • Generative: Models the joint probability distribution P(x, y) or generates data similar to the input distribution (e.g., image generation), as sketched below.
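
A quick contrast in scikit-learn, on an illustrative synthetic dataset:

  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression  # discriminative: learns P(y | x)
  from sklearn.naive_bayes import GaussianNB           # generative: models P(x | y)P(y)

  X, y = make_classification(n_samples=200, random_state=0)
  disc = LogisticRegression(max_iter=1000).fit(X, y)
  gen = GaussianNB().fit(X, y)
  # Both predict labels, but only the generative model carries
  # class-conditional densities from which new x could be sampled.
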
16
Q

Define the following terms in ML:
1. Model
2. Input
3. Output
4. Training Algorithm
5. Parameters

A
  1. Function f_θ(x) that maps input to output
  2. Independent variable (e.g. x)
  3. Dependent variable (e.g. y)
  4. Process that adjusts parameters to minimize loss.
  5. Tunable components of the model learned during training
17
Q

What is adversarial noise in ML?

A

Small, carefully crafted perturbations added to inputs that cause a model to make incorrect predictions while appearing unchanged to humans.

18
Q

Why is trusting training data important in ML security?

A

Untrusted data can lead to vulnerabilities such as data poisoning attacks, which degrade model performance or enable adversarial examples.

19
Q

What is an example of an ML-generated safety concern?

A

Fake content, such as deepfake videos or neural fake news, which can spread misinformation or harm credibility.