I am computing and optimizing some variables that are used by an external process, but I run into a "No gradients" error.

Here is a heavily simplified (untested) version of the code, but it should get the idea across:

import json
import subprocess

import numpy as np
import tensorflow as tf

def external_process(myvar):
    # Run the external script, then read the JSON result it writes.
    subprocess.call(["process.sh", str(myvar)])
    with open('result.json', 'r') as f:
        result = json.load(f)
    return np.array(result["result"], dtype=np.float32)

myvar = tf.Variable(1.0, dtype='float32', trainable=True)

loss = tf.reduce_sum(tf.py_func(external_process, [myvar], [tf.float32])[0])

optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)  # <- this is where "No gradients" is raised
sess = tf.Session()
sess.run(train_step)

I saw this discussion, but I don't fully understand it: https://github.com/tensorflow/tensorflow/issues/1095

Thanks!


Solution 2

The correct answer is: an external process is not differentiable (unless you know its every internal detail, which is impossible here), so the problem should instead be approached as a reinforcement learning problem.
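To illustrate that idea, here is a minimal sketch (my own, not from the question) of optimizing a non-differentiable black box with a score-function (REINFORCE / evolution-strategies style) gradient estimate; black_box is a hypothetical stand-in for the external process:

import numpy as np

def black_box(x):
    # Hypothetical stand-in for external_process: any scalar score
    # the external process returns (here a toy quadratic to minimize).
    return (x - 3.0) ** 2

# Score-function gradient estimate:
#   d/dx E[f(x + sigma*eps)] ~= mean(f(x + sigma*eps) * eps) / sigma
x, sigma, lr, n_samples = 1.0, 0.1, 0.05, 32
for step in range(200):
    eps = np.random.randn(n_samples)
    scores = np.array([black_box(x + sigma * e) for e in eps])
    x -= lr * np.mean(scores * eps) / sigma  # descend the estimated gradient

print(x)  # converges towards 3.0 without ever differentiating black_box

No gradient of the external process is ever needed; only its scalar scores at sampled inputs.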

Other hints

Here, https://www.tensorflow.org/versions/r0.9/api_docs/python/framework.html (search for gradient_override_map) gives an example of gradient_override_map:

@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
  # ...

with tf.Graph().as_default() as g:
  c = tf.constant(5.0)
  s_1 = tf.square(c)  # Uses the default gradient for tf.square.
  with g.gradient_override_map({"Square": "CustomSquare"}):
    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
                        # gradient of s_2.
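To make that snippet concrete, here is a filled-in, runnable version (my own sketch, not from the docs), using the analytic gradient of squaring as the custom gradient; the key point is that the registered function must return one gradient per op input:

import tensorflow as tf

@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
    # Square has exactly one input, so return exactly one gradient.
    # Here we simply spell out the analytic gradient: 2 * x * grad.
    x = op.inputs[0]
    return grad * 2.0 * x

with tf.Graph().as_default() as g:
    c = tf.constant(5.0)
    s_1 = tf.square(c)  # default gradient for Square
    with g.gradient_override_map({"Square": "CustomSquare"}):
        s_2 = tf.square(c)  # gradient computed by _custom_square_grad
    g_1 = tf.gradients(s_1, c)[0]
    g_2 = tf.gradients(s_2, c)[0]
    with tf.Session() as sess:
        print(sess.run([g_1, g_2]))  # both 10.0; s_2's comes via the override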

So a possible solution might be:

@tf.RegisterGradient("ExternalGradient")
def _custom_external_grad(unused_op, grad):
    # I don't know yet how to compute a gradient here.
    # This line is copied from the TensorFlow documentation; note that
    # py_func below has a single input, so a single gradient is expected.
    return grad, tf.neg(grad)

def external_process(myvar):
    subprocess.call(["process.sh", str(myvar)])
    with open('result.json', 'r') as f:
        result = json.load(f)
    return np.array(result["result"], dtype=np.float32)

myvar = tf.Variable(1.0, dtype='float32', trainable=True)

g = tf.get_default_graph()
with g.gradient_override_map({"PyFunc": "ExternalGradient"}):
    external_data = tf.py_func(external_process, [myvar], [tf.float32])[0]

loss = tf.reduce_sum(external_data)

optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)
sess.run(train_step)
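A caveat on the snippet above: the gradient function must return exactly one gradient per py_func input (a single myvar here), and the override has to be active when the op is created. A commonly seen wrapper pattern for attaching a Python gradient to tf.py_func looks roughly like this (a sketch, assuming TF 1.x; the random name avoids clashing registrations):

import numpy as np
import tensorflow as tf

def py_func_with_grad(func, inp, Tout, grad, stateful=True, name=None):
    # Register the Python gradient under a unique name, then remap the
    # "PyFunc" op's (normally missing) gradient to it for this one call.
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 10 ** 8))
    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)

def _external_grad(op, grad):
    # One input (myvar), so return one tensor. What to return here is
    # exactly the open problem: the external process gives us no
    # analytic gradient, so this identity pass-through is only a guess.
    return grad

external_data = py_func_with_grad(external_process, [myvar], [tf.float32],
                                  grad=_external_grad)[0]
loss = tf.reduce_sum(external_data)

Even with this plumbing in place, the open question remains what _external_grad should actually return, which is why the reinforcement-learning framing above is the real answer.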