Week 1 Flashcards

1
Q

What are the 5 components of a learning problem?

A

1) Input x in X.
2) Output y in Y.
3) Target function f: X -> Y.
4) The data set of examples (x1, y1), …, (xN, yN).
5) The learned hypothesis g: X -> Y.

2
Q

When is ML applicable to a problem? Give 3 conditions.

A

1) A pattern exists.
2) The pattern cannot be pinned down by analyzing the problem mathematically.
3) There is data we can learn from.

3
Q

What is H in ML?

A

The hypothesis set: the set of candidate formulas under consideration.

4
Q

What is the role of h(x)?

A

A functional form that assigns weights to the different components of the input vector.

5
Q

When is a dataset linearly separable?

A

There is a choice of parameters that classifies all the training examples correctly.

6
Q

PLA

A

Perceptron learning algorithm.

7
Q

What is the goal of the perceptron learning algorithm?

A

Finding a hypothesis that classifies all the data points in data set D correctly.

8
Q

Supervised learning setting

A

When the training data contains explicit examples of what the correct output should be for given inputs.

9
Q

What does the hypothesis space consist of for k-nearest neighbors?

A

Nearly all functions from inputs to outputs.

10
Q

Active learning

A

The learner acquires the data set by asking for the labels of specific inputs.

11
Q

What is the standard formula for h(x)?

A

h(x) = sign(w^T x)
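
A minimal sketch of this hypothesis in Python (my own illustration, not course code), assuming the convention that an artificial coordinate x0 = 1 is prepended to the input so that w[0] acts as the bias:

    import numpy as np

    def h(w, x):
        # Perceptron hypothesis: the sign of the weighted sum w^T x.
        # Assumes x has the artificial coordinate x0 = 1 prepended,
        # so that w[0] plays the role of the bias term.
        return np.sign(np.dot(w, x))

    w = np.array([-1.0, 2.0, 0.5])   # (bias, w1, w2) -- made-up values
    x = np.array([1.0, 3.0, -2.0])   # (x0 = 1, x1, x2) -- made-up values
    print(h(w, x))                   # 1.0, since -1 + 2*3 + 0.5*(-2) = 4 > 0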

12
Q

Online learning

A

The data set is given to the algorithm one example at a time. Learning takes place as data becomes available.

13
Q

Transfer learning

A

Training an algorithm on data for one problem produces a model, and that model is then used on a new problem or task. The information learned on the first problem is used to improve performance on the second.

14
Q

What is the update formula for w?

A

w(t+1) = w(t) + y(t) * x(t),

where (x(t), y(t)) is an example that w(t) currently misclassifies.
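
A sketch of the full PLA loop built around this update (function and variable names are my own, not from the course), assuming linearly separable data and the sign hypothesis from card 11:

    import numpy as np

    def pla(X, y, max_iter=1000):
        # Perceptron learning algorithm built around the update above.
        # X: N x d array of inputs with x0 = 1 in the first column.
        # y: N labels in {-1, +1}. Assumes the data is linearly separable.
        w = np.zeros(X.shape[1])
        for _ in range(max_iter):
            misclassified = np.where(np.sign(X @ w) != y)[0]
            if misclassified.size == 0:
                return w                # every point classified correctly
            t = misclassified[0]        # pick any misclassified point
            w = w + y[t] * X[t]         # w(t+1) = w(t) + y(t) * x(t)
        return w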

15
Q

Reinforcement learning

A

The training example does not contain the target output, but contains some possible output together with a measure of how good that output is.

16
Q

in-sample error

A

E.in(h): the error rate within the sample, i.e. the fraction of the data set on which h and f disagree.
Example: the mistakes on a practice test.

17
Q

What is an advantage of the PLA?

A

It searches an infinitely large set of hypotheses.

18
Q

What kind of data do you need to use the Perceptron Learning Algorithm?

A

Linearly separable data.

19
Q

Give the formula for E.in(h):

A

In-sample error:

E.in(h) = (1/N) * (the number of data points in the sample where h(x) and f(x) disagree)
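
In code, assuming labels in {-1, +1} and the linear hypothesis from card 11, E.in(h) is simply the fraction of training points h gets wrong (an illustrative sketch):

    import numpy as np

    def e_in(w, X, y):
        # In-sample error: the fraction of the N training points where
        # h(x) = sign(w^T x) disagrees with the given label y.
        return np.mean(np.sign(X @ w) != y)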

20
Q

What does the out-of-sample error denote?

A

How accurately the hypothesis function performs on data it hasn't seen before.
Example: performance on the exam.

21
Q

What is the deterministic answer to ‘Does the data set D tell us anything outside of D that we didn’t know before?’?

A

No.
D cannot tell us anything certain about f outside of D.

22
Q

What is the probabilistic answer to ‘Does the data set D tell us anything outside of D that we didn’t know before?’?

A

Yes.
D tells us something likely about f outside of D.

23
Q

What are the two questions that capture the feasibility of learning?

A

1) Can we make sure that E.out(g) is close enough to E.in(g)?
2) Can we make E.in(g) small enough?

24
Q

What effect does a more complex H have?

A

It gives more flexibility in finding some g that fits the data well, leading to a small E.in(g).

25
Q

What effect does a complex f have?

A

We get a worse (larger) value for E.in(g), because a complex f is harder to fit.

26
Q

Noisy function

A

A function where the output is not uniquely determined by the input.

27
Q

How does neighbors-based classification learn?

A

It does not attempt to construct a general internal model, but simply stores instances of the training data.

28
Q

What is mu in the marbles-vases model?

A

The proportion of red marbles in the vase.

29
Q

What is the formula for the line that classifies the data points?

A

w1x1 + w2x2 + b = 0
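
This is the hypothesis of card 11 with the bias written out: setting x0 = 1 and w0 = b turns w1*x1 + w2*x2 + b into w^T x. A tiny sketch of classifying a 2D point against the line (all values made up for illustration):

    w1, w2, b = 2.0, -1.0, 0.5      # illustrative line parameters
    x1, x2 = 1.0, 3.0               # illustrative point
    side = w1 * x1 + w2 * x2 + b    # 2*1 + (-1)*3 + 0.5 = -0.5
    label = 1 if side > 0 else -1   # the point lies on the negative side
    print(label)                    # -1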

30
Q

What is the unknown in the marbles-vases model?

A

Mu, the proportion of red marbles in the vase.

31
Q

Unsupervised learning

A

The training data does not contain any output information at all.

32
Q

Data mining

A

A practical field that focuses on finding patterns in data.

33
Q

What is nu in the marbles-vases model?

A

The fraction of red marbles within a random sample of N marbles that you pick from the vase.

34
Q

What does the Hoeffding Inequality denote?

A

An upper bound on the probability that a sum of bounded independent random variables deviates from its expected value by more than a certain amount.
Example: for a fixed N, the larger the deviation epsilon between E.in and E.out we ask about, the smaller the probability that it occurs.

35
Q

Why is the Hoeffding Inequality important for machine learning?

A

Through the Hoeffding Inequality, learning (generalizing to unknown data) becomes possible without knowing the target function.
This is because neither nu nor mu appears in the bound: the right-hand side depends only on epsilon and N.

36
Q

What is the Hoeffding Inequality used for?

A

It quantifies the relationship between nu and mu.

37
Q

Give the Hoeffding Inequality:

A

P[|nu - mu| > epsilon] <= 2e^(-2 * epsilon^2 * N)

The probability that the fraction of red marbles in the random sample (nu) differs from the actual fraction of red marbles in the vase (mu) by more than epsilon is smaller than or equal to the right-hand side.
Epsilon is a positive value we choose: how much nu and mu are allowed to differ.
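
A quick simulation of the marbles-vases setup makes the bound concrete (a sketch; the values of mu, N and epsilon are chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, N, epsilon, trials = 0.4, 100, 0.1, 100_000

    samples = rng.random((trials, N)) < mu    # True = red marble, P(red) = mu
    nu = samples.mean(axis=1)                 # fraction of red marbles per sample
    empirical = np.mean(np.abs(nu - mu) > epsilon)
    bound = 2 * np.exp(-2 * epsilon**2 * N)   # Hoeffding right-hand side

    print(empirical, bound)   # the observed frequency stays below the bound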

38
Q

What is always true about E.in(h), E.out(h) and epsilon?

A

For any epsilon > 0 and a fixed hypothesis h:

P[|E.in(h) - E.out(h)| > epsilon] <= 2e^(-2 * epsilon^2 * N)

39
Q

If event B1 implies event B2, thus
B1 -> B2,
then…

A

The probability of event B1 is smaller than or equal to the probability of event B2, thus

P[B1] <= P[B2]

40
Q

Describe the nearest-neighbors algorithm:

A

Given a new input, find the closest input we have seen and copy its output.
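
A minimal sketch of this rule as 1-nearest-neighbor classification (Euclidean distance is my assumption; the course may use a different metric):

    import numpy as np

    def nearest_neighbor(X_train, y_train, x):
        # Store the training data; given a new input x, find the closest
        # stored input and copy its output (no model is constructed).
        distances = np.linalg.norm(X_train - x, axis=1)
        return y_train[np.argmin(distances)]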