Question

I'm trying to understand the difference between target-values and action-values in Deep Q Networks.

From what I understand, the action-value tries to approximate the reward of taking a given action in some state. The target-value also seems to be an approximation of the reward. How are they different?
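To make the question concrete, here is a minimal sketch of my current reading of the paper (plain NumPy, with Q-tables standing in for the online and target networks; all names and shapes are my own, not from the paper):

```python
import numpy as np

n_states, n_actions = 4, 2
gamma = 0.99

rng = np.random.default_rng(0)
q_online = rng.normal(size=(n_states, n_actions))   # "action-values" Q(s, a) from the online network
q_target = q_online.copy()                          # periodically-copied target network

# One transition (s, a, r, s')
s, a, r, s_next = 0, 1, 1.0, 2

# Action-value: the online estimate for the action actually taken
action_value = q_online[s, a]

# Target-value: reward plus discounted max over the *target* network at s'
target_value = r + gamma * np.max(q_target[s_next])

# The online network would then be trained toward the target, e.g. with a squared error
loss = (target_value - action_value) ** 2
print(action_value, target_value, loss)
```

If both quantities estimate the same thing, why are they computed from two different networks, and why is only one of them trained?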

Reference
- Mnih et al., "Human-level control through deep reinforcement learning" (Nature, 2015): https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf

