Neural Network Learning Rate vs Q-Learning Learning Rate
30-10-2019
Question
I'm just getting into machine learning, mostly reinforcement learning, using a neural network trained on Q-values. Looking at the hyper-parameters, two of them seem redundant: the learning rate for the neural network, $\eta$, and the learning rate for Q-learning, $\alpha$. Both appear to control how quickly the agent favors new information over old.
So are these two parameters redundant? If I'm already tuning $\eta$, do I need to set $\alpha$ to anything other than 1, or do the two ultimately have different effects?
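To make the question concrete, here is a minimal sketch (my own illustration, not from the original post) of where each parameter appears. In tabular Q-learning, $\alpha$ blends the TD target into the stored value directly; with a function approximator, the "move toward the target" becomes a gradient step whose size is set by $\eta$, which is why $\alpha = 1$ (i.e., regressing on the raw TD target) is a common choice in deep Q-learning. The linear model and variable names below are assumptions for illustration.

```python
import numpy as np

def tabular_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Tabular Q-learning: alpha directly blends the TD target into Q[s, a]."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])  # alpha controls the blend
    return Q

def approx_update(w, feats, r, next_feats, eta=0.1, gamma=0.99, alpha=1.0):
    """Linear function approximator: a gradient step of size eta toward the
    (alpha-blended) TD target. With alpha=1 the regression target is the raw
    TD target, and eta alone plays the role alpha played in the tabular case."""
    q_pred = feats @ w
    td_target = r + gamma * (next_feats @ w)
    target = q_pred + alpha * (td_target - q_pred)  # alpha=1 -> plain TD target
    grad = -2.0 * (target - q_pred) * feats         # gradient of squared error
    return w - eta * grad

# Tabular: one update moves Q[0, 0] a fraction alpha of the way to the target.
Q = np.zeros((2, 2))
tabular_update(Q, s=0, a=0, r=1.0, s_next=1, alpha=0.5, gamma=0.9)

# Approximator: the step toward the same kind of target is scaled by eta.
w = approx_update(np.zeros(2), np.array([1.0, 0.0]), 1.0, np.array([0.0, 1.0]))
```

Note that in `approx_update` the effective step size is the product `eta * alpha`, which is one way of seeing why the two parameters look redundant in this setting.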
No correct solution
Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange