Question

The article linked here says that a tabular Q function is less scalable than a deep Q network. I take this to mean that the Q-table approach works for some environments, but becomes inefficient once they grow more complex.

For example, the article notes that on the Frozen Lake environment, the deep Q network is actually slower than the Q table. Frozen Lake is a relatively simple environment with 16 states and 4 actions per state. In an environment such as a game of Snake, however, there are many more states, making the Q table far larger. How should I decide between a Q table and a Deep Q Network?
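For a rough sense of the scale involved, here is a back-of-the-envelope comparison of table sizes. The Frozen Lake numbers come from the question; the Snake figures are illustrative assumptions (a 10x10 board, tracking only head position, travel direction, and food position) and deliberately under-count the real state space:

```python
# Tabular Q-learning stores one value per (state, action) pair.
frozen_lake_states = 16
frozen_lake_actions = 4
print(frozen_lake_states * frozen_lake_actions)  # 64 table entries

# Hypothetical small Snake board: 100 head positions, 4 travel directions,
# 100 possible food cells -- and this ignores the body layout entirely.
snake_states = 100 * 4 * 100
snake_actions = 4
print(snake_states * snake_actions)  # 160000 table entries
```

Even this crude under-count is three orders of magnitude larger than Frozen Lake; including the snake's body configuration makes the state space combinatorially larger still, which is where a function approximator like a DQN starts to pay off.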


Solution

I haven't worked with reinforcement learning specifically, but in general the decision should be driven by the data (or, in this case, the state space). The shift from classical algorithms to deep ones happens when more representational power is required, so it comes down to what your problem needs.

If you can get results with a simple algorithm, always prefer it; only when the simple approach doesn't cut it should you reach for the deeper one.
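To make "the simple approach" concrete, here is a minimal tabular Q-learning sketch. The hyperparameters (`alpha`, `gamma`, `epsilon`) are illustrative choices, not values from the article; the `defaultdict` means the table grows one row per visited state, which is exactly what stops scaling when the state space explodes:

```python
import random
from collections import defaultdict

# Illustrative hyperparameters: learning rate, discount, exploration rate.
alpha, gamma, epsilon = 0.1, 0.99, 0.1
n_actions = 4
Q = defaultdict(lambda: [0.0] * n_actions)  # one row per visited state

def choose_action(state):
    # Epsilon-greedy: explore with probability epsilon, else act greedily.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    values = Q[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    # Standard Q-learning target: r + gamma * max_a' Q(s', a')
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
```

A DQN replaces the `Q` dictionary with a neural network that maps states to action values, so memory no longer grows with the number of distinct states, at the cost of slower, noisier training.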

Leaving aside the precise computational meaning, deeper models are generally more complex, and complex models should be avoided wherever possible.

You also mention "slower". Speed depends heavily on context (hardware, implementation, training versus inference), so it shouldn't be the basis for a final judgement between the models.

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange