Tensorflow - How to set gradient of an external process (py_func)?
16-10-2019
Question
I'm optimizing some Variables that are used by an external process, but I get the error "No Gradient".
Here is a heavily simplified (not tested) version of the code, but you can get the idea:
import json
import subprocess

import numpy as np
import tensorflow as tf

def external_process(myvar):
    # Run the external process with the current value of the variable
    subprocess.call(["process.sh", str(myvar)])
    with open('result.json', 'r') as f:
        result = json.load(f)
    return np.array(result["result"], dtype=np.float32)

myvar = tf.Variable(1.0, dtype='float32', trainable=True)
loss = tf.reduce_sum(tf.py_func(external_process, [myvar], [tf.float32])[0])
optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)
sess.run(train_step)
I saw this discussion but I don't fully understand it: https://github.com/tensorflow/tensorflow/issues/1095
Thanks!
Solution 2
The correct answer is: "An external process is not differentiable (unless you know every detail of it, which is impossible in this case), so this problem should be approached as a Reinforcement Learning problem."
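For illustration only, here is a minimal NumPy-only sketch of that idea: treat the external process as a black box and estimate a descent direction from sampled evaluations (a simple evolution-strategies / score-function estimator, closely related to policy-gradient methods). The run_external wrapper, the result.json parsing, and all hyperparameters are assumptions, not part of the original question:

import json
import subprocess

import numpy as np

def run_external(x):
    # Hypothetical wrapper around the external process: process.sh writes
    # result.json, from which we read back a scalar loss.
    subprocess.call(["process.sh", str(x)])
    with open('result.json', 'r') as f:
        return float(np.sum(json.load(f)["result"]))

x = 1.0          # value to optimize
sigma = 0.1      # perturbation scale
lr = 0.05        # step size
n_samples = 8    # black-box evaluations per iteration

for step in range(100):
    eps = np.random.randn(n_samples)
    losses = np.array([run_external(x + sigma * e) for e in eps])
    # Score-function / ES estimate of d(loss)/dx, with a mean baseline
    # to reduce variance.
    grad_est = np.mean((losses - losses.mean()) * eps) / sigma
    x -= lr * grad_est

No gradients from TensorFlow are needed here; the variable is updated purely from sampled evaluations of the external process.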
OTHER TIPS
Here https://www.tensorflow.org/versions/r0.9/api_docs/python/framework.html (search for gradient_override_map) is an example of gradient_override_map:
@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
# ...
with tf.Graph().as_default() as g:
c = tf.constant(5.0)
s_1 = tf.square(c) # Uses the default gradient for tf.square.
with g.gradient_override_map({"Square": "CustomSquare"}):
s_2 = tf.square(s_2) # Uses _custom_square_grad to compute the
# gradient of s_2.
So, a possible solution could be:
@tf.RegisterGradient("ExternalGradient")
def _custom_external_grad(unused_op, grad):
# I don't know yet how to compute a gradient
# From Tensorflow documentation:
return grad, tf.neg(grad)
def external_process (myvar):
subprocess.call("process.sh", myvar)
with open('result.json', 'r') as f:
result = json.load(data, f)
return np.array(result["result"])
myvar = tf.Variable(1.0, dtype = 'float32', trainable = True)
g = tf.get_default_graph()
with g.gradient_override_map({"PyFunc": "ExternalGradient"}):
external_data = tf.py_func(external_process, [myvar], [tf.float32])[0]
loss = tf.reduce_sum(external_data)
optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)
sess.run(train_step)
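One hedged way to fill in the "I don't know yet how to compute a gradient" part, assuming the external process is deterministic, reasonably smooth, and cheap enough to run a couple of extra times per step, is to approximate the derivative numerically with central finite differences inside another py_func. The names _numeric_jacobian, eps, and "ExternalFiniteDiff" below are mine, not from the original post:

def _numeric_jacobian(x, eps=1e-2):
    # Central finite difference of the external output w.r.t. its scalar input.
    f_plus = external_process(x + eps)
    f_minus = external_process(x - eps)
    return ((f_plus - f_minus) / (2.0 * eps)).astype(np.float32)

@tf.RegisterGradient("ExternalFiniteDiff")
def _external_finite_diff_grad(op, grad):
    jac = tf.py_func(_numeric_jacobian, [op.inputs[0]], tf.float32)
    # Chain rule: dLoss/dx = sum_i dLoss/dy_i * dy_i/dx
    return tf.reduce_sum(grad * jac)

g = tf.get_default_graph()
with g.gradient_override_map({"PyFunc": "ExternalFiniteDiff"}):
    external_data = tf.py_func(external_process, [myvar], [tf.float32])[0]
loss = tf.reduce_sum(external_data)

This costs two extra calls to the external process per optimization step and will be noisy if the process is stochastic, which is why the accepted answer still recommends a Reinforcement-Learning-style treatment.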
Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange