Final Review Flashcards
(36 cards)
True or False: A policy that is greedy – with respect to the optimal value function – is not necessarily an optimal policy. Why?
False. Behaving greedily with respect to an optimal value function is an optimal policy because it optimizes the long-term cumulative reward.
A greedy policy derived from an optimal value function is necessarily optimal. However, a greedy policy derived from just any value function is not necessarily optimal.
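As a short worked equation (standard notation: Q* and V* are the optimal value functions, T the transition function, gamma the discount factor):

pi*(s) = argmax_a Q*(s,a) = argmax_a [ R(s,a) + gamma * sum_{s'} T(s,a,s') * V*(s') ]

Greedy one-step lookahead on V* (equivalently, a max over Q*) gives an optimal policy; the same lookahead on an arbitrary value function carries no such guarantee.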
True or False: The optimal policy for any MDP can be found in polynomial time. Why?
True for any finite MDP. We can reduce the MDP to a linear program and solve it in time polynomial in the number of states and actions.
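A sketch of the reduction for a discounted MDP (rewards R(s,a), transitions T(s,a,s'), discount gamma < 1):

minimize    sum_s V(s)
subject to  V(s) >= R(s,a) + gamma * sum_{s'} T(s,a,s') * V(s')    for every state s and action a

The smallest feasible V is V*, and linear programs can be solved in polynomial time.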
True or False: The value of the returned policy is the only way to evaluate a learner. Why?
False. You might also care about computational efficiency, memory efficiency, or sample efficiency (how much experience the learner needs).
True or False: If we know the optimal Q values, we can get the optimal V values only if we know the environment’s transition function/matrix. Why?
False. The optimal V values follow directly from the optimal Q values: V*(s) is simply the best Q value available in state s, which requires only a max over actions and no knowledge of the transition function. (Going the other way, from V* to Q*, does require the transition function.)
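The relevant equations (standard notation):

V*(s)   = max_a Q*(s,a)                                       (no model needed)
Q*(s,a) = R(s,a) + gamma * sum_{s'} T(s,a,s') * V*(s')        (model needed)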
True or False: An MDP given a fixed policy is a Markov chain with rewards. Why?
True. A fixed policy means the same action selection for a given state, so the MDP collapses into a Markov chain with rewards.
An MDP describes (1) a set of states, (2) a set of actions (3) transition probabilities and (4) rewards for transitioning.
By applying a fixed policy (which defines what action to take in each state), we eliminate the choice of action, which means we’re left with:
[(1) a set of states (3) transition probabilities] (Markov Chain)
[(4) rewards for transitioning]. (w/ Rewards)
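Concretely, for a fixed (possibly stochastic) policy pi, the induced chain and its rewards are:

P_pi(s'|s) = sum_a pi(a|s) * T(s,a,s')
R_pi(s)    = sum_a pi(a|s) * R(s,a)

For a deterministic policy this reduces to P(s'|s) = T(s, pi(s), s') and R_pi(s) = R(s, pi(s)).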
True or False: In the gridworld MDP in “Smoov and Curly’s Bogus Journey”, if we add 10 to each state’s reward (terminal and non-terminal) the optimal policy will not change. Why?
**verify from review 1
True. Adding 10 to every reward does not change the differences in reward across states, so the relative preferences between actions are unchanged.
True or False: In RL, recent moves influence outcomes more than moves further in the past. Why?
False. It depends on the discount factor: the closer gamma is to 1, the more distant moves and rewards matter relative to recent ones.
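For reference, the discounted return being optimized (standard notation):

G_t = R_{t+1} + gamma * R_{t+2} + gamma^2 * R_{t+3} + ... = sum_{k=0 to infinity} gamma^k * R_{t+k+1}

With gamma near 0 the weighting is dominated by immediate rewards; with gamma near 1 it is nearly uniform over the future.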
True or False: The Markov property means RL agents are amnesiacs and forget everything up until the current state. Why?
True. The Markov property means the future depends only on the present state; anything relevant from the past must already be captured in that state.
Explain the two types of tasks modeled by MDPs; unify the notation used to describe these two tasks, and describe the type of state that allows you to unify them. (Hint: you may find section 3.4 of S&B helpful.)
These are episodic and continuing tasks. Their returns can be written in a single notation by treating the end of an episode as entering an absorbing state that transitions only to itself and generates zero reward.
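In S&B's unified notation (section 3.4, roughly):

G_t = sum_{k=t+1 to T} gamma^(k-t-1) * R_k,   allowing either T = infinity or gamma = 1, but not both

Because the absorbing terminal state yields zero reward forever, episodic and continuing returns come out the same under this formula.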
Does policy iteration or value iteration converge faster? If so, why?
Policy iteration converges faster in the sense that it typically needs fewer iterations than value iteration, although each of its iterations is more expensive because it includes a full policy-evaluation step.
Is it possible that value iteration just never converges on your formulation of MDP with a proper convergence criterion?
No. Value iteration is guaranteed to converge (for discount gamma < 1) because the Bellman backup is a contraction mapping.
What is the meaning of the bellman equations? What is it saying? What is your way of visualizing this equation in your head?
The long-term return for an action is the sum of the immediate reward and all possible discounted future rewards. One way to visualize it is as a backup diagram: the value at a state is the reward collected on the way out plus the discounted values of the successor states, averaged over the transition probabilities.
What are the bellman equations? (try to write it down without looking at notes since this is the single most important equation to know for this class)
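One standard form (using T for the transition function, R for the reward function, and gamma for the discount factor):

V*(s)   = max_a [ R(s,a) + gamma * sum_{s'} T(s,a,s') * V*(s') ]
Q*(s,a) = R(s,a) + gamma * sum_{s'} T(s,a,s') * max_{a'} Q*(s',a')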
What is a policy?
A policy is a mapping from states to actions: for each state it specifies which action to take (deterministic) or a probability distribution over actions (stochastic).
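In symbols (standard notation):

deterministic: pi(s) = a
stochastic:    pi(a|s) = Pr(A_t = a | S_t = s)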
What is Value Iteration?
Value iteration is an iterative dynamic-programming algorithm that repeatedly applies the Bellman backup to every state until the value function stops changing (within some tolerance); the optimal policy is then read off greedily from the resulting optimal value function.
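A minimal sketch in Python (assuming a small tabular MDP given as numpy arrays T[s, a, s'] for transition probabilities and R[s, a] for expected rewards; the names and shapes are illustrative assumptions, not from the course materials):

import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-6):
    # T: transition probabilities, shape (S, A, S)
    # R: expected immediate rewards, shape (S, A)
    n_states = T.shape[0]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (T @ V)        # Bellman backup for every (s, a) pair
        V_new = Q.max(axis=1)          # best achievable value in each state
        delta = np.max(np.abs(V_new - V))
        V = V_new
        if delta < tol:                # stop once the values have (numerically) converged
            break
    policy = Q.argmax(axis=1)          # greedy policy read off the converged values
    return V, policy

The backup is a contraction for gamma < 1, which is what guarantees the loop terminates.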
What is the Markov property?
The Markov property is a memorylessness property: the future depends only on the present state. Any information about the past that matters must already be captured in the current state.
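Formally (standard notation):

Pr(S_{t+1}, R_{t+1} | S_t, A_t) = Pr(S_{t+1}, R_{t+1} | S_t, A_t, S_{t-1}, A_{t-1}, ..., S_0, A_0)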
Is it possible to have multiple terminal states in an MDP?
Yes. There is no restriction preventing an MDP from having multiple terminal states.
True or False: Contraction mappings and non-expansions are concepts used to prove the convergence of RL algorithms, but are otherwise unrelated concepts. Why?
False, they are closely related: a non-expansion is the general case of a contraction mapping. A contraction mapping satisfies |F(a) - F(b)| <= gamma*|a-b| where gamma must be strictly smaller than 1, while a non-expansion allows gamma = 1, so distances may shrink or stay the same.
True or False: An update rule which is not a non-expansion will not converge without exception. Why?
Strictly speaking, False (though it often holds in practice).
Being a non-expansion is a sufficient condition for convergence, but the lack of it does not guarantee divergence: an update rule that is not a non-expansion may still converge.
True or False: In TD(λ), we should see the same general curve for best learning rate (lowest error), regardless of λ value. Why?
False. Lambda matters: the eligibility-trace weighting (a sum of powers of lambda multiplying the xt vectors) acts as a multiplier on (Pt+1 - Pt), so at higher lambdas you need a lower alpha to stay below the threshold needed for convergence. The error-vs-alpha curve therefore shifts with lambda.
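For reference, the TD(lambda) weight update in Sutton's 1988 notation (linear predictions, with x_k the feature vector at step k):

Delta w_t = alpha * (P_{t+1} - P_t) * sum_{k=1 to t} lambda^(t-k) * x_k

The inner sum grows as lambda increases, which is why a larger lambda generally requires a smaller alpha.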
True or False: TD(1) slowly propagates information, so it does better in the repeated presentations regime rather than with single presentations. Why?
False.
TD(1) is equivalent to Monte Carlo learning: it propagates information all the way down the “chain” on every single presentation. It is TD(0) that propagates information slowly (one step per presentation) and therefore benefits more from repeated presentations.
True or False: Given a model (T,R) we can also sample in, we should first try TD learning. Why?
False.
While TD learning was revolutionary and is used in many algorithms today, if we already have the model (T, R), value or policy iteration has better convergence properties and should be tried first.