Reinforcement Learning Flashcards

1
Q

What are agents?

A
2
Q

When is an agent autonomous?

A
3
Q

What are rational agents?

A
4
Q

What are reflex agents?

A
5
Q

What are agents with internal state?

A
6
Q

What are goal-based agents?

A
7
Q

What are agents with a utility function?

A
8
Q

Describe the Markov Decision Process (MDP).

A
9
Q

What does the Markov property state?

A

The next state and reward depend only on the current state and action, not on the full history.
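
As a formal sketch (the notation here is assumed, not taken from the card), the Markov property can be written as

    P(s_{t+1}, r_{t+1} \mid s_t, a_t) = P(s_{t+1}, r_{t+1} \mid s_0, a_0, \ldots, s_t, a_t)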

10
Q

What is epsilon-greedy?

A
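
A minimal Python sketch of epsilon-greedy action selection (the Q-value representation is an illustrative assumption, not from the card): with probability epsilon a random action is explored, otherwise the currently best-valued action is exploited.

    import random

    def epsilon_greedy(q_values, epsilon):
        """Epsilon-greedy selection over a list of Q-values for the current state."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))                    # explore
        return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit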
11
Q

How do the choices of epsilon and beta affect epsilon-greedy?

A
12
Q

What is a Q-table?

A

A table spanned by one-hot state encodings × one-hot action encodings, holding one Q-value per (state, action) pair.
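
As an illustrative sketch (the sizes are assumed for the example), the table stores one Q-value per (state, action) pair and is read by simple indexing:

    import numpy as np

    n_states, n_actions = 16, 4                 # assumed environment sizes
    q_table = np.zeros((n_states, n_actions))   # one row per state, one column per action

    s, a = 3, 2
    print(q_table[s, a])                        # Q-value lookup for state s and action a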

13
Q

How does tabular RL work?

A
14
Q

How can tabular RL with Q-tables be realized as deep RL?

A
15
Q

Deep RL

What is the implicit model of action selection used for?

A
16
Q

How does Temporal Difference (TD) learning work?

A
17
Q

State the Bellman equation.

A
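
One common form (notation assumed; r is the reward, gamma the discount factor, s' the successor state) is the Bellman optimality equation for action values:

    Q^*(s, a) = \mathbb{E}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]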
18
Q

TD learning advises adapting the Q-value for the current (s, a). How?

A
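
A sketch of the standard update (learning rate alpha assumed; the bracketed term is the TD error):

    Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]

SARSA uses the same form but replaces the max over a' with the Q-value of the next action actually taken.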
19
Q

Describe the SARSA algorithm.

A
20
Q

Compare SARSA and Q-learning.

A
21
Q

Explain actor-critic learning.

A
22
Q

Explain deep RL for SARSA or Q-learning.
How do you backpropagate?

A
23
Q

What is value-based RL?

A
24
Q

What is policy-gradient-based RL?

A

Learn the policy directly, adjusting its parameters to maximize the expected future return R.
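
As a sketch of the objective (policy parameters theta assumed, as in REINFORCE):

    J(\theta) = \mathbb{E}_{\pi_\theta}[R], \qquad \nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[ \nabla_\theta \log \pi_\theta(a \mid s) \, R \right]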

25
Q

How do you ensure that all weights can be trained in policy-gradient-based RL?

A
26
Q

What is goal-conditioned RL?

A
27
Q

What is Experience Replay?

A

Performing many trials in the environment can be costly.
Solution: store experiences and learn from them multiple times.
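
A minimal Python sketch of a replay buffer (the capacity and method names are illustrative assumptions):

    import random
    from collections import deque

    class ReplayBuffer:
        """Stores past transitions so the agent can learn from each of them repeatedly."""

        def __init__(self, capacity=10_000):
            self.buffer = deque(maxlen=capacity)

        def add(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            # Draw a random mini-batch of stored transitions for one update step.
            return random.sample(self.buffer, batch_size)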

28
Q

What is Hindsight Experience Replay (HER)?

A

The agent learns how to reach any state it has experienced.
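
A minimal sketch of the core relabeling idea (goal-conditioned transitions and a sparse goal-reaching reward are assumed, not taken from the card): a finished episode is replayed with a goal it actually reached, so even a failed episode yields useful learning signal.

    def her_relabel(episode):
        """Relabel an episode with the goal it actually achieved (HER 'final' strategy).

        episode: list of (state, action, next_state) tuples from one rollout.
        """
        achieved_goal = episode[-1][2]              # final state reached in the episode
        relabeled = []
        for state, action, next_state in episode:
            reward = 1.0 if next_state == achieved_goal else 0.0
            relabeled.append((state, action, reward, next_state, achieved_goal))
        return relabeled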

29
Q

What is hierarchical RL?

A

The higher level tells the lower level which goal to pursue.

30
Q

How does model-based RL work? How is an action determined?

A

like a tree search
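
As a sketch of the simplest case, one-step lookahead with a known model P and a value estimate V (notation assumed):

    a^* = \arg\max_a \sum_{s'} P(s' \mid s, a) \left[ r(s, a, s') + \gamma V(s') \right]

Deeper searches expand this recursively over imagined successor states, which is where the tree structure comes from.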

31
Q

What are the limitations of model-based RL?

A

exponential number of states/actions; cycles

32
Q

Explain the advantages of model-based vs. model-free RL.

A
33
Q

Describe the architecture of MuZero.

A
34
Q

Describe MuZero: planning by Monte-Carlo Tree Search.

A
35
Q

How is MuZero trained?

A

via Experience Replay

36
Q

How can the MuZero architecture be extended? What is the benefit?

A
37
Q

How can MuZero be extended to handle continuous actions?

A
38
Q

Which of the following statements on Hierarchical Reinforcement Learning are correct?
1. If the subtasks don’t yield rewards individually, Hierarchical RL can’t be used.
2. Hierarchical Agents always use the same set of actions on each level of the hierarchy.
3. In Hierarchical RL high-level agents set the goals for lower-level agents.

A

3

39
Q

Assume we have the following information on 2-itemsets generated with the Apriori algorithm:
Frequent: {A,B}, {A,C}

Not frequent: {A,D}

Which 3-itemsets could potentially be tested in the following iteration of the Apriori algorithm, independent of any additional information on other itemsets?

  1. {A,B,D}
  2. {A,C,D}
  3. {B,C,D}
  4. {A,B,C}

A

4
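
A Python sketch of the Apriori prune step behind this answer (the frequent 2-itemsets beyond those given in the card are assumptions made only so the example runs): a k-itemset stays a candidate only if every one of its (k-1)-subsets is frequent.

    from itertools import combinations

    def prune_candidates(candidates, frequent_prev):
        """Keep a k-itemset only if all of its (k-1)-subsets are frequent."""
        frequent_prev = {frozenset(s) for s in frequent_prev}
        return [c for c in candidates
                if all(frozenset(sub) in frequent_prev
                       for sub in combinations(c, len(c) - 1))]

    frequent_2 = [{"A", "B"}, {"A", "C"}, {"B", "C"}]     # {A, D} is not frequent
    candidates_3 = [{"A", "B", "D"}, {"A", "C", "D"}, {"B", "C", "D"}, {"A", "B", "C"}]
    print(prune_candidates(candidates_3, frequent_2))     # only {A, B, C} survives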

40
Q
A

0.2 / 0.4 = 0.5

41
Q

Which of the following statements on Policy Gradient methods and the REINFORCE algorithm are correct?
1. The REINFORCE algorithm learns how to estimate the best Q-Values.
2. Policy Gradient-based RL learns to estimate the probabilities of the actions for a given state directly.
3. The REINFORCE algorithm uses the TD-error to update the network weights.

A

2

42
Q

Which of the statements about the k-Nearest Neighbors (k-NN) algorithm are correct?
1. k-NN can also be used for regression.
2. k-NN can be used for imputing missing values of both categorical and continuous variables.
3. k-NN performs much better if all of the data have the same scale.
4. k-NN is only defined for Euclidean distance metric.

A

1, 2, and 3

43
Q

The goal of model-based RL strategies is to reduce the complexity of searching for the best solution in an environment in which the dynamics are known.
Is that correct?

A

Yes

44
Q

Which of the following statements on the MuZero algorithm are correct?
1. In MuZero, a model of the world has to be provided.
2. MuZero uses planning in the latent space to find the best action.
3. For each episode, only one search tree has to be created.
4. The dynamics function maps a hidden state and an action to another hidden state.

A

2 and 4
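
A pseudocode-level sketch of the three learned functions MuZero plans with in latent space (the function names follow the MuZero paper; everything else here is an illustrative assumption):

    # h: representation  -- observation            -> hidden state
    # g: dynamics        -- (hidden state, action) -> (reward, next hidden state)
    # f: prediction      -- hidden state           -> (policy, value)

    def imagine_one_step(observation, action, h, g, f):
        s0 = h(observation)           # encode the real observation into latent space
        reward, s1 = g(s0, action)    # imagined transition, entirely in latent space
        policy, value = f(s1)         # evaluate the imagined state for the tree search
        return reward, policy, value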

45
Q

One of the major disadvantages of the Apriori algorithm is high computational complexity. Which of the following factors are predominantly responsible for this?
1. A high number of items in each itemset
2. A low minimum confidence
3. A high minimum support
4. A high number of itemsets in the database

A

1 and 4

46
Q
A

2/5 = 0.4

47
Q

In Hindsight Experience Replay the reward scheme is already known while the experiences are gathered.
Is that correct?

A

No