Flashcards in Review Session #2 Deck (8)
True or False: A policy that is greedy -- with respect to the optimal value function -- is not necessarily an optimal policy.
False. The optimal value function already captures all (discounted) future rewards, so a policy that is greedy with respect to it is globally optimal.
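As a quick illustration (a minimal sketch; Q_star and its values are made up for the example), extracting the greedy policy from optimal action values is just an argmax per state:

```python
import numpy as np

# Illustrative optimal action values: Q_star[s, a], 3 states x 2 actions.
Q_star = np.array([[1.0, 2.0],
                   [0.5, 0.1],
                   [3.0, 3.0]])

# Acting greedily on Q_star yields an optimal policy.
greedy_policy = np.argmax(Q_star, axis=1)
print(greedy_policy)  # [1 0 0]
```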
True or False: In TD learning, the sum of the learning rates used must converge for the value function to converge.
False. The sum of the learning rates must diverge for the value function to converge.
Additionally, the sum of the squared learning rates must converge.
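Stated compactly (a standard pair of conditions on the learning-rate schedule; the example schedule is illustrative, not from the source):

```latex
\sum_{t=1}^{\infty} \alpha_t = \infty,
\qquad
\sum_{t=1}^{\infty} \alpha_t^2 < \infty
% e.g. \alpha_t = 1/t satisfies both: the harmonic series diverges,
% while \sum_t 1/t^2 converges.
```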
True or False: Monte Carlo is an unbiased estimator of the value function compared to TD methods. Therefore, it is the preferred algorithm when doing RL with episodic tasks.
True (Monte Carlo is an unbiased estimator of the value function compared to TD methods): TD methods start from an initial estimate of the Q-values and tend to be biased toward it.
False (therefore, it is the preferred algorithm when doing RL with episodic tasks): most episodic tasks are too long, and the computational advantages of TD updates favor TD methods over MC methods.
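To make the contrast concrete, here is a minimal sketch of the two tabular update rules (all names are illustrative, not from the source):

```python
def mc_update(V, state, G, alpha=0.1):
    # Monte Carlo: target is the actual observed return G from `state`.
    # Unbiased, but must wait until the episode ends.
    V[state] += alpha * (G - V[state])

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    # TD(0): target bootstraps on the current estimate V[next_state],
    # so it inherits bias from the initial values, but updates every step.
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])
```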
True or False: Backward and forward TD(lambda) can be applied to the same problems.
True, but backward TD(lambda) is usually easier to compute, since eligibility traces let it run online in a single pass over the experience.
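A minimal sketch of the backward view with accumulating eligibility traces, assuming a tabular value function stored as a NumPy array (names are illustrative):

```python
import numpy as np

def td_lambda_backward(V, episode, alpha=0.1, gamma=0.99, lam=0.9):
    # episode: list of (state, reward, next_state) transitions.
    e = np.zeros_like(V)                                   # eligibility traces
    for state, reward, next_state in episode:
        delta = reward + gamma * V[next_state] - V[state]  # TD error
        e[state] += 1.0                                    # mark visited state
        V = V + alpha * delta * e                          # credit all eligible states
        e *= gamma * lam                                   # decay traces
    return V
```

Setting lam=0 recovers TD(0) and lam=1 recovers TD(1).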
True or False: Offline algorithms are generally superior to online algorithms.
False. Online algorithms update values as soon as new information is available, at each step, and so make the most efficient use of experience and tend to learn faster.
True or False: Given a model (T, R) that we can also sample from, we should first try TD learning.
False... You have a model, so use it: plan directly (e.g., with value iteration) rather than estimating values from samples.
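For instance, a minimal value-iteration sketch under the assumption that T is an (S, A, S) array of transition probabilities and R an (S, A) array of expected rewards (names are illustrative):

```python
import numpy as np

def value_iteration(T, R, gamma=0.99, tol=1e-6):
    # Plan directly with the known model instead of sampling.
    V = np.zeros(T.shape[0])
    while True:
        Q = R + gamma * (T @ V)    # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)      # greedy backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```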
True or False: TD(1) slowly propagates information, so it does better in the repeated presentations regime rather than with single presentations.
False. TD(0) propagates information slowly, one step back per presentation, so it is the one that benefits from repeated presentations; TD(1) propagates information all the way back in a single presentation.
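A toy check, reusing the td_lambda_backward sketch from the TD(lambda) card above (the chain setup is made up for illustration):

```python
import numpy as np

# 5-state chain: s0 -> s1 -> s2 -> s3 -> terminal (s4), reward 1 at the end.
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 0.0, 3), (3, 1.0, 4)]

V_td0 = td_lambda_backward(np.zeros(5), episode, lam=0.0)  # TD(0)
V_td1 = td_lambda_backward(np.zeros(5), episode, lam=1.0)  # TD(1)
print(V_td0)  # only V[3] has moved: information crawls back one state per episode
print(V_td1)  # V[0..3] have all moved: information propagated in one presentation
```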