I am interested in computing the gradient of a Keras model's output with respect to its inputs in TensorFlow. I know this was previously possible by building a graph and using tf.gradients, for example here. However, I would like to achieve this while experimenting in eager mode (possibly using GradientTape). Specifically, if my network has two inputs (x, y) and predicts (u, v, p), I want to compute, e.g., du/dx for use in the loss.
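
For reference, the graph-mode pattern I mean looks roughly like this (just a sketch, assuming TF 1.x graph mode and the Sequential model defined below):

import tensorflow as tf

# TF 1.x graph mode, for reference; `model` is the Sequential network below
inp = tf.keras.layers.Input(shape=(2,))  # symbolic (x, y) input
u = model(inp)[:, 0]                     # u component of (u, v, p)
du_dinp = tf.gradients(u, inp)[0]        # tf.gradients returns a list; [0] has shape (batch_size, 2)
du_dx = du_dinp[:, 0]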

A snippet is below; the full code is in this gist:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation=tf.nn.relu, input_shape=(2,)),  # input shape required
    tf.keras.layers.Dense(20, activation=tf.nn.relu),
    tf.keras.layers.Dense(20, activation=tf.nn.relu),
    tf.keras.layers.Dense(20, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])

def loss(model: tf.keras.Model, inputs, outputs):

    u_true, v_true = outputs[:, 0], outputs[:, 1]

    prediction = model(inputs)
    u_pred, v_pred = prediction[:, 0], prediction[:, 1]

    loss_value = tf.reduce_mean(tf.square(u_true - u_pred)) + \
                 tf.reduce_mean(tf.square(v_true - v_pred))

    return loss_value, u_pred, v_pred

def grad(model: tf.keras.Model, inputs, outputs):
    """
    :param inputs:  (batch_size, 2) -> x, y
    :param outputs: (batch_size, 3) -> vx, vy, p
    :return:
    """
    with tf.GradientTape() as tape:

        loss_value, u_pred, v_pred = loss(model, inputs, outputs)
        # AttributeError: 'DeferredTensor' object has no attribute '_id'
        print(tape.gradient(u_pred, model.input))

    grads = tape.gradient(loss_value, model.trainable_variables)

    return loss_value, grads

I tried a few things, e.g. tape.gradient(u_pred, model.input) or tape.gradient(model.output, model.input), but these throw:

AttributeError: 'DeferredTensor' object has no attribute '_id'

Is there a way to achieve this in eager mode, and if so, how?

1 Answer

Here is an example of retrieving the gradients of the predictions with respect to the inputs using eager execution.

Basically, you need to use tape.watch(inputs) [I use features in my example - call it whatever you want to call your x ...] so that TensorFlow records the changes in the model output (you can do the same with the loss) with respect to the inputs ... (and make sure to call tape.gradient outside of the with tf.GradientTape() context).
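
For the loss variant just mentioned, a minimal sketch (here labels and loss_object are placeholders for whatever you use):

# sketch: gradient of the loss w.r.t. the inputs (labels/loss_object are placeholders)
with tf.GradientTape() as tape:
  tape.watch(features)                     # features must be a tf.Tensor
  predictions = model(features)
  loss = loss_object(labels, predictions)
# gradient taken outside the `with` block, as noted above
dloss_dfeatures = tape.gradient(loss, features)  # same shape as features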

Take a look at the get_gradients function below ...

Hope this helps!

model = tf.keras.Sequential([
  # len(numeric_headers) is the number of input features (defined elsewhere in my setup)
  tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(len(numeric_headers),)),  # input shape required
  tf.keras.layers.Dense(10, activation=tf.nn.relu),
  tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
])


# model = MyModel()
loss_object = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.BinaryAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.BinaryAccuracy(name='test_accuracy')

def get_gradients(model, features):
  with tf.GradientTape() as tape:
      tape.watch(features)  # features must be a tf.Tensor; watch() records ops on it
      predictions = model(features)
  # note: tape.gradient is called outside the `with` block
  gradients = tape.gradient(predictions, features)  # same shape as features
  return gradients

def train_step(features, label):

  with tf.GradientTape() as tape:
    predictions = model(features)
    loss = loss_object(label, predictions)

  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_loss(loss)
  train_accuracy(label, predictions)

def test_step(features, label):
  predictions = model(features)
  t_loss = loss_object(label, predictions)

  test_loss(t_loss)
  test_accuracy(label, predictions)

EPOCHS = 5
for epoch in range(EPOCHS):
  for features, labels in train_ds:
    train_step(features, labels)

  for features, labels in test_ds:
      test_step(features, labels)

  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print (template.format(epoch+1,
                           train_loss.result(), 
                           train_accuracy.result()*100,
                           test_loss.result(), 
                           test_accuracy.result()*100))

  if epoch == EPOCHS - 1:
    for features, labels in train_ds:
      print ('-')
      print (get_gradients(model, features))
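
To map this back onto the question's (x, y) -> (u, v, p) setup: slice the output component you care about inside the tape, then differentiate it with respect to the watched input. A minimal sketch, assuming model is the three-output network from the question and inputs holds a (batch_size, 2) batch of (x, y) points:

xy = tf.convert_to_tensor(inputs, dtype=tf.float32)  # columns are x, y
with tf.GradientTape() as tape:
  tape.watch(xy)
  u_pred = model(xy)[:, 0]            # u component of (u, v, p)
du_dxy = tape.gradient(u_pred, xy)    # shape (batch_size, 2)
du_dx = du_dxy[:, 0]                  # du/dx per example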