Final Review pt. 4 Flashcards

1
Q

True or False: The only algorithms that work in POMDPs are planning algorithms. Why?

A

False. RL algorithms also work in POMDPs, e.g., by maintaining a belief state or keeping memory of the observation history; planning is not the only option.

2
Q

True or False: Problems that can be represented as POMDPs cannot be represented as MDPs. Why?

A

False. An MDP is a special case of a POMDP (one whose observations fully reveal the state), and any POMDP can in turn be recast as an MDP over belief states.

3
Q

True or False: Applying generalization with an “averager” on an MDP results in another MDP. Why?

A

True. An averager estimates a state's value as a fixed convex combination of values at anchor points, and applying such an approximator to an MDP yields a derived MDP over those anchors, so the result is still an MDP.
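
A minimal sketch of what an averager looks like (the anchor states, values, and kernel here are made up for illustration):

```python
import numpy as np

# Hypothetical anchor states on a 1-D state space and their current value estimates.
anchor_states = np.array([0.0, 0.5, 1.0])
anchor_values = np.array([2.0, 1.0, 3.0])

def averager_value(s: float) -> float:
    """Estimate V(s) as a fixed convex combination of anchor values.

    The weights depend only on s, are non-negative, and sum to 1, which is
    what makes this an averager and lets the approximate problem be viewed
    as an MDP over the anchors.
    """
    weights = np.exp(-5.0 * np.abs(anchor_states - s))  # closer anchors weigh more
    weights /= weights.sum()                            # normalize: convex combination
    return float(weights @ anchor_values)

print(averager_value(0.3))  # interpolates between anchor values
```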

4
Q

True or False: With a classic update using linear function approximation, we will always converge to some values, but they may not be optimal. Why?

A

False. It may not even converge; Baird's counterexample shows that off-policy bootstrapping with linear function approximation can diverge.
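
A minimal sketch of the classic semi-gradient TD(0) update with a linear approximator, using made-up features and step size:

```python
import numpy as np

def td0_update(w, phi_s, phi_s_next, reward, alpha=0.1, gamma=0.99):
    """One semi-gradient TD(0) step for a linear value estimate V(s) = w . phi(s).

    On-policy this tends to settle near the best linear fit, but off-policy
    the same update can diverge (Baird's counterexample).
    """
    td_error = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    return w + alpha * td_error * phi_s

# One hypothetical transition with 3-dimensional features.
w = np.zeros(3)
w = td0_update(w, np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.5]), reward=1.0)
print(w)
```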

5
Q

True or False: RL with linear function approximation will not work on environments having a continuous state space. Why?

A

False. Linear function approximation handles continuous state spaces by first mapping the state to features (e.g., tile coding or radial basis functions) and learning a linear estimate over those features. It may miss non-linearities the features do not capture, but it can still work.
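
A minimal sketch of linear action values over a continuous state, using a made-up radial-basis feature map (centers and widths chosen arbitrarily):

```python
import numpy as np

# Hypothetical RBF centers tiling a 2-D continuous state space.
centers = np.array([[x, y] for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)])

def features(state):
    """Map a real-valued state to a fixed-length feature vector."""
    return np.exp(-np.sum((centers - state) ** 2, axis=1) / 0.5)

n_actions = 4
weights = np.zeros((n_actions, len(centers)))  # one weight vector per action

def q_values(state):
    """Linear action values: Q(s, a) = w_a . phi(s)."""
    return weights @ features(state)

print(q_values(np.array([0.2, -0.3])))  # defined for any real-valued state
```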

6
Q

Let’s say you want to use Q-learning with some function approximator. Recall that we learned a convergence theorem and we used that to conclude that Q-learning converges. Can we apply that theorem to prove that your Q-learning with some function approximator converges? Why or why not?

A

No. The convergence theorem assumes the exact (tabular) Q-learning update; adding a function approximator violates those assumptions, and the combination can diverge, as we saw with DQN in Project 2.

7
Q

Let’s say you want to use a function approximator like we learned in class. What function(s) are you approximating? What’s the input of that function and what’s the output of that function?

A

We approximate the action-value function Q. The input is a state-action pair (or, in a DQN-style architecture, just the state, with one output per action), and the output is the estimated value of that pair, i.e., the expected discounted return.
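
A minimal DQN-style sketch (assuming PyTorch; the layer sizes, 8-dimensional state, and 4 actions are illustrative values matching Lunar Lander): the input is a state vector and the output is one estimated value per action.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Approximates Q: state vector in, one estimated return per action out."""

    def __init__(self, state_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # Q(s, a) for every action a
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q = QNetwork()
state = torch.zeros(8)   # placeholder state
print(q(state))          # tensor of 4 action values
```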

8
Q

We learned about reward shaping in class. Could it be useful for solving Lunar Lander? If so, why and how?

A

Yes. Lunar Lander has a continuous state space and most of the reward arrives at the end of an episode, so shaping rewards for progress toward the landing pad (and for staying upright and slowing down) gives denser feedback and can accelerate learning. Potential-based shaping is the safe way to do this, since it leaves the optimal policy unchanged.
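
A minimal sketch of potential-based shaping, assuming a made-up potential over the first four Lunar Lander state components (x, y, vx, vy):

```python
def potential(state) -> float:
    """Hypothetical potential: larger when near the pad at (0, 0) and moving slowly."""
    x, y, vx, vy = state[:4]
    return -(abs(x) + abs(y)) - 0.5 * (abs(vx) + abs(vy))

def shaped_reward(reward, state, next_state, gamma=0.99):
    """Add F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward.

    Potential-based shaping gives denser feedback about progress toward the
    pad without changing which policy is optimal.
    """
    return reward + gamma * potential(next_state) - potential(state)

print(shaped_reward(0.0, [1.0, 1.0, 0.5, -0.5], [0.8, 0.9, 0.4, -0.4]))
```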

9
Q

Observe that the biggest difference between P2’s Lunar Lander problem and HW4’s Taxi problem is that there are infinitely many states in Lunar Lander. What are some good methods to handle this case? What are their pros and cons?

A

Function approximation is the main way to handle infinitely many states. DQN works well on large or continuous state spaces such as Lunar Lander's. Other options include model-based methods such as DreamerV2, imitation learning, and policy-gradient algorithms such as REINFORCE, PPO, A2C, and SAC [7]. Their pro is that they generalize across states the agent has never visited and can reach strong performance; their con is that training is a non-convex optimization problem, so it can be unstable and offers no convergence guarantees.
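
A minimal REINFORCE-style sketch (assuming PyTorch; the network sizes and the fake batch are made up) showing how a policy-gradient update operates directly on real-valued state vectors:

```python
import torch
import torch.nn as nn

# Hypothetical policy network: state vector in, action probabilities out.
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    """One REINFORCE step: raise the log-probability of each taken action in
    proportion to the return that followed it. The objective is non-convex,
    which is the training difficulty noted above."""
    probs = policy(states)                                        # (T, n_actions)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(log_probs * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake one-trajectory batch, just to show the shapes.
states = torch.zeros(3, 8)
actions = torch.tensor([0, 2, 1])
returns = torch.tensor([1.0, 0.5, 0.2])
print(reinforce_update(states, actions, returns))
```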
