Ensemble Learning Flashcards

1
Q

What is ensemble learning?

A

Select a collection (ensemble) of hypotheses (models) and combine their predictions.

2
Q

How can diversity be achieved for ensemble learning?

A
  • Using different learning algorithms
  • Using different hyper-parameters in the same algorithm
  • Using different input representations, e.g. different subsets of the input features
  • Using different subsets of the training data (bagging, boosting and cascading)
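Two of the diversity sources above can be sketched in a few lines. This is a hypothetical illustration on an assumed toy dataset of (feature list, label) rows; `bootstrap_sample` and `feature_subset` are made-up helper names.

```python
import random

# Assumed toy dataset: (feature list, label) rows.
data = [([1.0, 2.0, 3.0], 0), ([2.0, 1.0, 0.0], 1),
        ([0.5, 2.5, 1.5], 0), ([3.0, 0.5, 2.0], 1)]

def bootstrap_sample(rows, rng):
    # Different subsets of the training data: draw len(rows) rows with replacement.
    return [rng.choice(rows) for _ in rows]

def feature_subset(rows, idxs):
    # Different input representations: keep only the feature columns in idxs.
    return [([x[i] for i in idxs], y) for x, y in rows]

rng = random.Random(0)
print(len(bootstrap_sample(data, rng)))    # same size as the original data
print(feature_subset(data, [0, 2])[0][0])  # first row restricted to features 0 and 2
```

Each base learner would then be trained on its own bootstrap sample or feature subset, so the learners make different errors.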
3
Q

Explain how diversity from differences in input features is achieved

A

Each base learner is trained on a different random subset of the input features (as in the random subspace method), so the learners see different representations of the data and make different errors.
4
Q

Explain how diversity from subsets of the training data is achieved

A

Each base learner is trained on a different subset of the training data, e.g. bootstrap samples in bagging, or reweighted/filtered examples in boosting and cascading.
5
Q

How can the outputs of base learners be combined in ensemble learning?

A

By voting (e.g. a majority vote over the predicted class labels), by (weighted) averaging of the outputs, or by rank-level fusion of the classifiers' rankings.
6
Q

Why is the ensemble error lower than the individual error?

A

If the base learners are better than chance and their errors are at least partly independent, the errors cancel out when the predictions are combined, so the expected ensemble error is lower than the expected error of an individual learner.
7
Q

Why is the ensemble error lower when using voting?

A

With M independent classifiers that each err with probability p < 0.5, the majority vote is only wrong if more than half of them err at the same time; this binomial probability falls rapidly as M grows.
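The voting-error argument can be checked numerically. A small sketch, assuming independent classifiers with a common error probability p and an odd ensemble size m; `majority_vote_error` is a hypothetical helper name.

```python
from math import comb

def majority_vote_error(m, p):
    # Probability that more than half of m independent classifiers,
    # each wrong with probability p, err at the same time (m odd).
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(m // 2 + 1, m + 1))

# With p = 0.3, the voting error shrinks as the ensemble grows:
for m in (1, 5, 21):
    print(m, round(majority_vote_error(m, 0.3), 4))
```

The single classifier errs with probability 0.3; the majority vote of 5 or 21 such classifiers errs noticeably less often, provided the errors really are independent.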
8
Q

Explain the rank-level fusion method

A

Each classifier outputs a ranking of the classes instead of a single label; the ranks are combined (e.g. summed) across all classifiers, and the class with the best combined rank is chosen.
9
Q

What are the advantages of bagging?

A
  • For noisy data: not considerably worse, more robust
  • Improved prediction accuracy
  • Decreases the error by decreasing the variance caused by unstable learners
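A minimal bagging sketch on assumed 1-D toy data: the base learner is a decision stump, each stump is trained on a bootstrap sample, and the ensemble predicts by unweighted majority vote. All names here (`train_stump`, `bagged_predict`) are illustrative, not from the lecture.

```python
import random
import statistics

def train_stump(rows):
    # Exhaustively pick the 1-D threshold stump with the fewest errors on rows.
    best = None
    for t in {x for x, _ in rows}:
        for left in (0, 1):
            pred = lambda x, t=t, left=left: left if x <= t else 1 - left
            err = sum(pred(x) != y for x, y in rows)
            if best is None or err < best[0]:
                best = (err, pred)
    return best[1]

def bagged_predict(models, x):
    # Unweighted majority vote over the bootstrap-trained models.
    return statistics.mode(m(x) for m in models)

rows = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]
rng = random.Random(0)
models = [train_stump([rng.choice(rows) for _ in rows]) for _ in range(15)]
print([bagged_predict(models, x) for x in (0.0, 3.0)])
```

Individual stumps trained on unlucky bootstrap samples can be wrong, but the vote averages these unstable learners out, which is exactly the variance reduction named above.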
10
Q

Explain the random subspace method

A

Each base learner is trained on the full training set, but only on a randomly chosen subset of the feature dimensions.
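The random subspace method reduces to one sampling step. A sketch under assumed toy data; `random_subspace` is a hypothetical helper name.

```python
import random

def random_subspace(rows, n_features, k, rng):
    # Keep all rows, but project them onto k randomly chosen feature columns.
    idxs = sorted(rng.sample(range(n_features), k))
    return idxs, [([x[i] for i in idxs], y) for x, y in rows]

rows = [([1, 2, 3, 4], 0), ([4, 3, 2, 1], 1)]
rng = random.Random(0)
idxs, projected = random_subspace(rows, 4, 2, rng)
print(idxs, projected[0][0])  # two of the four feature columns, labels unchanged
```

Each base learner gets its own call to `random_subspace`, so different learners split on different features.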
11
Q

Why is the random subspace method useful for random forests?

A

This prevents different trees from choosing the same features for their splits, which would make the trees highly correlated.

12
Q

What is the dropout technique used for, and what is it?

A

It is used to prevent neural networks from overfitting: during training, each unit is randomly dropped (set to zero) with some probability, so the network cannot rely on individual units and effectively trains an ensemble of subnetworks.

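A minimal dropout sketch on an assumed layer of activations, using the common "inverted dropout" convention: surviving units are scaled by 1/(1 − rate) so the expected activation is unchanged, and the layer is left untouched at test time.

```python
import random

def dropout(activations, rate, rng, training=True):
    # Inverted dropout: zero each unit with probability `rate` during training,
    # scale survivors by 1/(1 - rate); do nothing at test time.
    if not training:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
out = dropout([1.0, 2.0, 3.0, 4.0], rate=0.5, rng=rng)
print(out)  # roughly half the units zeroed, the rest scaled by 2
```

The ensemble view: every training step samples a different mask, i.e. a different subnetwork, and the full network at test time approximates their average.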
13
Q

How does boosting work?

A

Base learners are trained sequentially: each new learner concentrates on the training examples that the previous learners got wrong (e.g. by increasing their weights), and the final prediction is a weighted vote over all learners.
14
Q

Boosting

What is the difference between strong and weak learners?

A

A weak learner only has to perform slightly better than random guessing; a strong learner achieves an arbitrarily low error (with high probability).
15
Q

How can weak learners be boosted so that they become a strong learner?

A

By training many weak learners sequentially on reweighted versions of the training data and combining them with a weighted majority vote; the weighted combination can reach an arbitrarily low training error even though each individual learner is only slightly better than chance.
16
Q

How does AdaBoost (adaptive boosting) work?

A

Start with uniform example weights. In each round, train a weak learner on the weighted data, compute its weighted error ε, give it the vote α = ½ ln((1 − ε)/ε), then increase the weights of misclassified examples and decrease the weights of correctly classified ones. The final classifier is the α-weighted vote of all weak learners.
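A minimal AdaBoost sketch on assumed 1-D toy data: the weak learners are decision stumps h(x) = sign(s·(x − t)), labels are in {−1, +1}, and all names (`stump`, `train_adaboost`) are illustrative.

```python
import math

def stump(x, t, s):
    # Weak learner: sign of s * (x - t), a 1-D threshold classifier.
    return 1 if s * (x - t) > 0 else -1

def train_adaboost(xs, ys, rounds):
    n = len(xs)
    w = [1.0 / n] * n                      # start with uniform example weights
    model = []                             # list of (alpha, t, s)
    for _ in range(rounds):
        # Pick the stump with the lowest weighted training error.
        eps, t, s = min(((sum(wi for wi, x, y in zip(w, xs, ys)
                              if stump(x, t, s) != y), t, s)
                         for t in xs for s in (-1, 1)), key=lambda c: c[0])
        eps = max(eps, 1e-12)              # guard against a perfect stump
        alpha = 0.5 * math.log((1 - eps) / eps)
        model.append((alpha, t, s))
        # Reweight: misclassified examples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * y * stump(x, t, s))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    # Weighted vote of all weak learners.
    return 1 if sum(a * stump(x, t, s) for a, t, s in model) > 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, 1, 1, -1, 1]                 # no single stump fits this labeling
model = train_adaboost(xs, ys, rounds=8)
print([predict(model, x) for x in xs])
```

No single stump can represent this labeling, but the weighted vote of a few stumps can, which is the weak-to-strong effect the previous cards describe.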
17
Q

What is the advantage of boosting over bagging?

A
  1. Needs fewer training samples than bagging
  2. Faster computation
  3. Higher accuracy
18
Q

What are the advantages of boosting?

A
19
Q

What is gradient boosting, and how does it differ from AdaBoost?

A

Gradient boosting fits each new base learner to the negative gradient of a differentiable loss function, evaluated at the current ensemble's predictions (for squared error, the residuals). AdaBoost instead reweights the training examples; it can be interpreted as gradient boosting with the exponential loss.
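A gradient-boosting sketch for the squared error on assumed toy data: each round fits a deliberately crude weak learner (a fixed-threshold stump) to the residuals, i.e. the negative gradient of ½(y − pred)², and adds it with a learning rate. All names here are hypothetical.

```python
def fit_stump(xs, res):
    # Crude weak learner: mean residual left/right of the midpoint threshold.
    # (Real implementations search over thresholds.)
    t = (min(xs) + max(xs)) / 2
    left = [r for x, r in zip(xs, res) if x <= t]
    right = [r for x, r in zip(xs, res) if x > t]
    lv = sum(left) / len(left)
    rv = sum(right) / len(right) if right else lv
    return lambda x, t=t, lv=lv, rv=rv: lv if x <= t else rv

def gradient_boost(xs, ys, rounds, lr=0.5):
    pred = [0.0] * len(xs)
    for _ in range(rounds):
        # Residuals = negative gradient of the squared loss at current pred.
        residuals = [y - p for y, p in zip(ys, pred)]
        h = fit_stump(xs, residuals)
        pred = [p + lr * h(x) for p, x in zip(pred, xs)]
    return pred

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 1.0, 3.0, 3.0]
pred = gradient_boost(xs, ys, rounds=20)
print([round(p, 3) for p in pred])  # → [1.0, 1.0, 3.0, 3.0]
```

Each round shrinks the remaining residual, so the ensemble prediction converges to the targets; swapping the loss (and thus the gradient) is what generalizes this beyond squared error.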
20
Q

What is the runtime of boosting for face detection?

A
21
Q

Describe attentional cascades

A

A chain of classifiers that are progressively more complex and have lower false-positive rates

22
Q

Compare AdaBoost with MLPs

A
23
Q

Compare bagging with boosting.
How is a decision made via voting?

A

Bagging trains the learners independently on bootstrap samples and combines them with an unweighted majority vote; boosting trains them sequentially on reweighted data and combines them with a vote weighted by each learner's accuracy.
24
Q

AdaBoost can be used both to select a subset of informative features and to act as classifier for face detection problems.
Is this true?

A

Yes

25
Q

The information stored in a replay buffer depends on the learned parameters of the agent.
Is this true?

A

No

26
Q

In a markov decision process, the probability of moving from state s to state s’ by action a can depend on the actions leading to state s.
Is this true?

A

No

27
Q

Which of the following statements on RL algorithms are correct?
1. SARSA updates are based on the chosen action.
2. In Q-Learning, an episode has to be played to the end before it is possible to update the Q-values.
3. Q-learning is an on-policy algorithm.
4. SARSA uses a monte-carlo estimate of the return to calculate the Q-values.

A

Only statement 1 is correct.
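The difference named in statement 1 can be made concrete by comparing the two bootstrap targets. A sketch with assumed toy Q-values; `sarsa_target` and `q_learning_target` are hypothetical helper names.

```python
def sarsa_target(q, r, s_next, a_next, gamma):
    # On-policy: bootstrap with the action actually chosen next.
    return r + gamma * q[s_next][a_next]

def q_learning_target(q, r, s_next, gamma):
    # Off-policy: bootstrap with the greedy (maximal) action.
    return r + gamma * max(q[s_next].values())

q = {"s1": {"left": 1.0, "right": 2.0}}
print(sarsa_target(q, 0.0, "s1", "left", 0.9))  # 0.9
print(q_learning_target(q, 0.0, "s1", 0.9))     # 1.8
```

When the behavior policy explores (here: it picked "left" although "right" looks better), the two targets differ, which is exactly the on-policy/off-policy distinction.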

28
Q

In Q-Learning, the TD error is the difference between the predicted quality of a state action pair 𝑄(𝑠,𝑎)
and the bootstrapped return which takes the actual reward for action a in state s into account.
Is this true?

A

Yes
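The TD error from the card can be written out directly. A minimal sketch with assumed toy values; `td_error` and `q_update` are hypothetical helper names.

```python
def td_error(q, s, a, r, s_next, gamma):
    # Bootstrapped return r + gamma * max_a' Q(s', a') minus the prediction Q(s, a).
    target = r + gamma * max(q[s_next].values())
    return target - q[s][a]

def q_update(q, s, a, r, s_next, gamma, alpha):
    # Move Q(s, a) a step of size alpha toward the bootstrapped return.
    q[s][a] += alpha * td_error(q, s, a, r, s_next, gamma)

q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 1.0, "right": 2.0}}
delta = td_error(q, "s0", "right", r=1.0, s_next="s1", gamma=0.9)
print(delta)  # 1.0 + 0.9 * 2.0 - 0.0 = 2.8
q_update(q, "s0", "right", 1.0, "s1", gamma=0.9, alpha=0.1)
print(round(q["s0"]["right"], 3))  # 0.28
```

The actual reward enters through r, and the rest of the target is bootstrapped from the current Q-estimates, matching the statement above.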

29
Q

Bagging, Boosting or both?

A