Things I should know Flashcards

1
Q

What is the curse of dimensionality with respect to supervised learning?

A

The number of training examples needed to learn a concept reliably grows exponentially with the dimensionality of the inputs.
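
As a rough illustration (my own Python sketch, not part of the original card): if each input dimension is discretised into k bins, then seeing at least one training example in every region of the input space requires on the order of k^d examples.

# Sketch: the number of grid cells to cover grows as k**d, so the number
# of examples needed to see every region does too.
# k = 10 is an assumption chosen purely for illustration.
k = 10
for d in (1, 2, 3, 5, 10):
    print(f"d = {d:>2}: roughly {k**d:,} cells (examples) to cover the input space")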

2
Q

Advantages of hierarchical clustering over k-means, and vice versa

A
Hierarchical over k-means:
  • no need to choose the number of clusters in advance
  • the dendrogram is easy to understand and interpret
k-means over hierarchical:
  • faster (scales better to large datasets)
  • easier to implement
(A usage sketch of both follows.)
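
A minimal usage sketch (assuming scikit-learn and toy data; my own illustration, not part of the original card). It shows the practical trade-off: KMeans needs the cluster count up front, while AgglomerativeClustering builds a hierarchy that can instead be cut by a distance threshold.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Toy data: two well-separated blobs (assumed example data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

# k-means: the number of clusters must be fixed in advance.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Hierarchical (agglomerative): build the tree, then cut it with a
# distance threshold rather than a fixed cluster count.
hc_labels = AgglomerativeClustering(n_clusters=None,
                                    distance_threshold=2.0).fit_predict(X)

print(km_labels)
print(hc_labels)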
3
Q

What are semantically different hypotheses?

A

The number of hypotheses that produce distinct classifications of the instances. Each attribute can take one of x + 1 values (its x possible instance values, plus ?), and we add 1 for the single maximally specific hypothesis: any hypothesis containing ø classifies every instance as negative, so all such hypotheses are semantically equivalent and are counted once. I.e. (x+1)(y+1)… + 1
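
Worked example (assuming Mitchell's EnjoySport task, where the six attributes take 3, 2, 2, 2, 2 and 2 values): (3+1)(2+1)(2+1)(2+1)(2+1)(2+1) + 1 = 4 · 3^5 + 1 = 973 semantically distinct hypotheses.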

4
Q

What are syntactically different hypotheses?

A

The total number of hypothesis representations, regardless of whether they classify instances differently. Each attribute can take one of x + 2 values: its x possible instance values, plus ? and ø. I.e. (x+2)(y+2)…
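
Worked example (same assumed EnjoySport attribute counts as above): (3+2)(2+2)(2+2)(2+2)(2+2)(2+2) = 5 · 4^5 = 5120 syntactically distinct hypotheses.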

5
Q

Is a rote learner biased?

A

No. Its hypothesis space is effectively the set of arbitrary disjunctions of instances, so it makes no assumptions beyond the training data and has no inductive bias. The trade-off is that it merely memorises the training data: it overfits and cannot generalise to unseen instances.
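
A minimal sketch of the idea (my own illustration with made-up instances, not from the original card): a rote learner is just a lookup table over the training data, so it has nothing to say about an unseen instance.

# Rote learner: memorise (instance, label) pairs; no inductive bias,
# but also no way to classify anything it has not already seen.
train = {("sunny", "warm"): True, ("rainy", "cold"): False}

def rote_predict(instance):
    # Return the memorised label, or None for unseen instances
    # (it cannot generalise).
    return train.get(instance)

print(rote_predict(("sunny", "warm")))   # True  (memorised)
print(rote_predict(("sunny", "cold")))   # None  (never seen, no prediction)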

6
Q

What fundamentally characterises a Markov decision process?

A

The Markov property: the transition function depends only on the current state and action, not on the history of past states and actions. The same (state, action) → next-state distribution therefore governs the whole process, no matter how the current state was reached.
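
A minimal sketch (my own illustration with made-up states and probabilities): the transition model is keyed only by the current state and action, never by the history, which is exactly the Markov property.

import random

# Transition model P(s' | s, a): keyed by (state, action) only.
# States and probabilities below are made up for illustration.
P = {
    ("s0", "go"):   [("s1", 0.8), ("s0", 0.2)],
    ("s0", "stay"): [("s0", 1.0)],
    ("s1", "go"):   [("s0", 1.0)],
    ("s1", "stay"): [("s1", 1.0)],
}

def step(state, action):
    # Sample the next state from P(. | state, action); past actions play no role.
    next_states, probs = zip(*P[(state, action)])
    return random.choices(next_states, weights=probs)[0]

print(step("s0", "go"))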
