Question

I'm working my way through the book Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto, and I am stuck on the following exercise.

The value of a state depends on the values of the actions possible in that state and on how likely each action is to be taken under the current policy. We can think of this in terms of a small backup diagram rooted at the state and considering each possible action:

[Backup diagram: the state $s$ at the root, with a branch down to each possible action $a$.]

Give the equation corresponding to this intuition and diagram for the value at the root node, $v_\pi(s)$, in terms of the value at the expected leaf node, $q_\pi(s,a)$, given $S_t = s$. This expectation depends on the policy, $\pi$. Then give a second equation in which the expected value is written out explicitly in terms of $\pi(a \mid s)$, such that no expected value notation appears in the equation.

I should mention that I already have the Bellman equation for $v_\pi$:

$$v_\pi(s) = \sum_a \pi(a \mid s) \sum_{s'} p(s' \mid s, a) \left[ r(s, a, s') + \gamma v_\pi(s') \right]$$

where:

$\pi(a \mid s)$ = probability of taking action $a$ from state $s$

$p(s' \mid s, a)$ = probability of each next state $s'$, given any state $s$ and action $a$

$r(s, a, s')$ = expected reward, given any state $s$, action $a$, and next state $s'$
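
To make these pieces concrete, here is a minimal Python sketch (the toy MDP sizes, the random tables, and all variable names are my own assumptions for illustration, not anything from the book) that applies the Bellman equation above at a single state:

```python
import numpy as np

# Toy MDP sizes and discount factor (assumed for illustration only).
n_states, n_actions = 3, 2
gamma = 0.9

rng = np.random.default_rng(0)

# pi[s, a]     = probability of taking action a in state s
# p[s, a, s2]  = probability of next state s2, given state s and action a
# r[s, a, s2]  = expected reward for the transition (s, a) -> s2
pi = rng.random((n_states, n_actions))
pi /= pi.sum(axis=1, keepdims=True)      # each row is a distribution over actions
p = rng.random((n_states, n_actions, n_states))
p /= p.sum(axis=2, keepdims=True)        # each (s, a) slice is a distribution over s'
r = rng.random((n_states, n_actions, n_states))

v = np.zeros(n_states)  # current estimate of v_pi, initialized to zero


def bellman_backup(s, v):
    """v_pi(s) = sum_a pi(a|s) * sum_s' p(s'|s,a) * [r(s,a,s') + gamma * v(s')]."""
    total = 0.0
    for a in range(n_actions):
        for s2 in range(n_states):
            total += pi[s, a] * p[s, a, s2] * (r[s, a, s2] + gamma * v[s2])
    return total


print(bellman_backup(0, v))
```

Sweeping this backup repeatedly over all states is exactly iterative policy evaluation.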

How can I rewrite this value function in the form the exercise asks for?
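
For what it's worth, the pattern the exercise seems to be after is the same expectation-unrolling already visible in the Bellman equation: first express $v_\pi(s)$ as an expectation of the action value over actions drawn from $\pi$, then write that expectation out as an explicit sum weighted by $\pi(a \mid s)$. My reading of it (a sketch, not an official solution) is

$$v_\pi(s) = \mathbb{E}_\pi\left[ q_\pi(s, A_t) \mid S_t = s \right] = \sum_a \pi(a \mid s)\, q_\pi(s, a)$$

The second form contains no expectation notation: the root value is just the probability-weighted average of the leaf action values in the backup diagram.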
