Monday, 1 March 2021

Conversion from tf.gradients() to tf.GradientTape() returns None

I'm migrating some TF1 code to TF2. For the full code, you can check here, lines [155-176]. The TF1 code below computes gradients given a loss (a scalar float) and f, an (m, n) tensor:

Note: the TF2 code should remain compatible and should work inside a tf.function.

g = tf.gradients(-loss, f)  # loss is a scalar float, f an (m, n) tensor
k = -f_pol / (f + eps)  # f_pol is another (m, n) tensor, eps a float
k_dot_g = tf.reduce_sum(k * g, axis=-1)
adj = tf.maximum(
    0.0,
    (k_dot_g - delta)
    / (tf.reduce_sum(tf.square(k), axis=-1) + eps),
)
g = g - tf.reshape(adj, [nenvs * nsteps, 1]) * k  # subtract the adjustment along k
grads_f = -g / (nenvs * nsteps)  # negate and average over the batch
grads_policy = tf.gradients(f, params, grads_f)  # params being the model parameters
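
For what it's worth, my understanding of that last line is that tf.gradients(f, params, grads_f) is a vector-Jacobian product: it backpropagates grads_f through f into params. The TF2 counterpart seems to be the output_gradients argument of tape.gradient, roughly like this sketch (assuming calculate_f() and calculate_loss() actually touch the model parameters inside the tape):

import tensorflow as tf

with tf.GradientTape(persistent=True) as tape:  # persistent: gradient() is called twice
    f = calculate_f()
    loss = calculate_loss()
g = -tape.gradient(loss, f)  # gradient of -loss, with the negation applied after the tape
# ... compute k, adj and grads_f from g as above ...
grads_policy = tape.gradient(f, params, output_gradients=grads_f)
del tape  # a persistent tape should be released explicitly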

In TF2 I'm trying:

with tf.GradientTape() as tape:
    f = calculate_f()
    f_pol = calculate_f_pol()
    others = do_further_calculations()
    loss = calculate_loss()
g = tape.gradient(-loss, f)

However, I keep getting g = [None], whether I use tape.watch(f), create a tf.Variable holding the value of f, or even fall back to tf.gradients() inside a tf.function (it only runs there; outside one, TF2 complains).
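
A self-contained toy version reproduces it; computing -loss outside the with block seems to be enough to get None back (x and the ops here are stand-ins for my real computation):

import tensorflow as tf

x = tf.Variable([[1.0, 2.0], [3.0, 4.0]])

with tf.GradientTape(persistent=True) as tape:
    f = x * 2.0              # stand-in for calculate_f()
    loss = tf.reduce_sum(f)  # stand-in for calculate_loss()

print(tape.gradient(loss, f))   # tensor of ones, shape (2, 2)
print(tape.gradient(-loss, f))  # None: the negation was never recorded on the tape
del tape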


