
I have a large custom model built with the new TensorFlow 2.0, mixing Keras and TensorFlow. I want to save it (architecture and weights). Exact code to reproduce:

import tensorflow as tf


OUTPUT_CHANNELS = 3

def downsample(filters, size, apply_batchnorm=True):
  initializer = tf.random_normal_initializer(0., 0.02)

  result = tf.keras.Sequential()
  result.add(
      tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
                             kernel_initializer=initializer, use_bias=False))

  if apply_batchnorm:
    result.add(tf.keras.layers.BatchNormalization())

  result.add(tf.keras.layers.LeakyReLU())

  return result

def upsample(filters, size, apply_dropout=False):
  initializer = tf.random_normal_initializer(0., 0.02)

  result = tf.keras.Sequential()
  result.add(
    tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                    padding='same',
                                    kernel_initializer=initializer,
                                    use_bias=False))

  result.add(tf.keras.layers.BatchNormalization())

  if apply_dropout:
      result.add(tf.keras.layers.Dropout(0.5))

  result.add(tf.keras.layers.ReLU())

  return result


def Generator():
  down_stack = [
    downsample(64, 4, apply_batchnorm=False), # (bs, 128, 128, 64)
    downsample(128, 4), # (bs, 64, 64, 128)
    downsample(256, 4), # (bs, 32, 32, 256)
    downsample(512, 4), # (bs, 16, 16, 512)
    downsample(512, 4), # (bs, 8, 8, 512)
    downsample(512, 4), # (bs, 4, 4, 512)
    downsample(512, 4), # (bs, 2, 2, 512)
    downsample(512, 4), # (bs, 1, 1, 512)
  ]

  up_stack = [
    upsample(512, 4, apply_dropout=True), # (bs, 2, 2, 1024)
    upsample(512, 4, apply_dropout=True), # (bs, 4, 4, 1024)
    upsample(512, 4, apply_dropout=True), # (bs, 8, 8, 1024)
    upsample(512, 4), # (bs, 16, 16, 1024)
    upsample(256, 4), # (bs, 32, 32, 512)
    upsample(128, 4), # (bs, 64, 64, 256)
    upsample(64, 4), # (bs, 128, 128, 128)
  ]

  initializer = tf.random_normal_initializer(0., 0.02)
  last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
                                         strides=2,
                                         padding='same',
                                         kernel_initializer=initializer,
                                         activation='tanh') # (bs, 256, 256, 3)

  concat = tf.keras.layers.Concatenate()

  inputs = tf.keras.layers.Input(shape=[None,None,3])
  x = inputs

  # Downsampling through the model
  skips = []
  for down in down_stack:
    x = down(x)
    skips.append(x)

  skips = reversed(skips[:-1])

  # Upsampling and establishing the skip connections
  for up, skip in zip(up_stack, skips):
    x = up(x)
    x = concat([x, skip])

  x = last(x)

  return tf.keras.Model(inputs=inputs, outputs=x)

generator = Generator()
generator.summary()

generator.save('generator.h5')
generator_loaded = tf.keras.models.load_model('generator.h5')

I manage to save the model:

generator.save('generator.h5')

But when I try to load it:

generator_loaded = tf.keras.models.load_model('generator.h5')

it never finishes (and there is no error message). Maybe the model is too big? I also tried saving the architecture as JSON with model.to_json() and using the full API tf.keras.models.save_model(), but I hit the same problem: it is impossible to load the model back (or at least it takes far too long).

The same problem occurs on Windows and Linux, with and without a GPU.

Saving and restoring works fine with plain Keras and with simple models.

5 Answers


Since TensorFlow version 2.0.0, there is now a Keras-/TF-agnostic way to save a model, using tf.saved_model:

....

model.fit(images, labels, epochs=30, validation_data=(images_val, labels_val), verbose=1)

tf.saved_model.save(model, "path/to/model_dir")

Then you can load it with:

loaded_model = tf.saved_model.load("path/to/model_dir")

Answered on 2019-10-13.

I managed to save and load custom models by implementing functions similar to those of the Sequential model in Keras.

The key functions are CustomModel.get_config() and CustomModel.from_config(), which should also exist on any of your custom layers (similar to the functions below, but look at the Keras layers source if you want to understand them better):

# Module-level imports needed by the snippets below
import copy
from tqdm import tqdm

# In the CustomModel class
def get_config(self):
    layer_configs = []
    for layer in self.layers:
        layer_configs.append({
            'class_name': layer.__class__.__name__,
            'config': layer.get_config()
        })
    config = {
        'name': self.name,
        'layers': copy.deepcopy(layer_configs),
        "arg1": self.arg1,
        ...
    }
    if self._build_input_shape:
        config['build_input_shape'] = self._build_input_shape
    return config

@classmethod
def from_config(cls, config, custom_objects=None):
    from tensorflow.python.keras import layers as layer_module
    if custom_objects is None:
        custom_objects = {'CustomLayer1Class': CustomLayer1Class, ...}
    else:
        custom_objects = dict(custom_objects, **{'CustomLayer1Class': CustomLayer1Class, ...})

    if 'name' in config:
        name = config['name']
        build_input_shape = config.get('build_input_shape')
        layer_configs = config['layers']
    else:
        name = None
        build_input_shape = None
        layer_configs = config
    model = cls(name=name,
                arg1=config['arg1'],
                should_build_graph=False,
                ...)
    for layer_config in tqdm(layer_configs, 'Loading Layers'):
        layer = layer_module.deserialize(layer_config,
                                         custom_objects=custom_objects)
        model.add(layer) # This function looks at the name of the layers to place them in the right order
    if not model.inputs and build_input_shape:
        model.build(build_input_shape)
    if not model._is_graph_network:
        # Still needs to be built when passed input data.
        model.built = False
    return model
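The class_name/config serialization that get_config() and from_config() implement can be sketched framework-free. The toy version below (with a hypothetical Dense stand-in, not the real Keras class) shows how a custom_objects mapping resolves class names back to constructors, which is exactly why load_model needs that dictionary for custom classes:

```python
import json

class Dense:
    """Toy stand-in for a layer class with a config round trip."""
    def __init__(self, units):
        self.units = units

    def get_config(self):
        return {"units": self.units}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Maps serialized class names back to constructors, like custom_objects
CUSTOM_OBJECTS = {"Dense": Dense}

def serialize(layers):
    # Same {'class_name', 'config'} shape as in get_config() above
    return [{"class_name": l.__class__.__name__, "config": l.get_config()}
            for l in layers]

def deserialize(layer_configs, custom_objects):
    rebuilt = []
    for lc in layer_configs:
        layer_cls = custom_objects[lc["class_name"]]
        rebuilt.append(layer_cls.from_config(lc["config"]))
    return rebuilt

layers = [Dense(64), Dense(3)]
blob = json.dumps(serialize(layers))   # the config survives a JSON round trip
rebuilt = deserialize(json.loads(blob), CUSTOM_OBJECTS)
```

The real layer_module.deserialize does the same lookup, falling back to built-in Keras layers when the class name is not in custom_objects.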

I also added a CustomModel.add() function that adds layers one by one from their configs, as well as a should_build_graph=False argument that makes sure the graph is not built in __init__() when calling cls().

The CustomModel.save() function then looks like this:

def save(self, filepath, overwrite=True, include_optimizer=True, **kwargs):
    from tensorflow.python.keras.models import save_model
    save_model(self, filepath, overwrite, include_optimizer)

After that, you can save and load the model with:

model.save("model.h5")
new_model = keras.models.load_model('model.h5',
                                    custom_objects={
                                        'CustomModel': CustomModel,
                                        'CustomLayer1Class': CustomLayer1Class,
                                        ...
                                    })

But somehow this approach turns out to be quite slow... The approach below, on the other hand, is almost 30 times faster; I am not sure why:

model.save_weights("weights.h5")
config = model.get_config()
reinitialized_model = CustomModel.from_config(config)
reinitialized_model.load_weights("weights.h5")

It works, but it seems quite hacky. Maybe future TF2 versions will make the process clearer.
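The weights-plus-config round trip can be pictured with a framework-free toy (TinyModel is hypothetical, with JSON standing in for HDF5 weight files): the architecture travels as a small config dict, while the state travels separately as raw arrays, so neither side needs the slow layer-graph deserialization:

```python
import json
import os
import tempfile

class TinyModel:
    """Toy model that separates architecture (config) from state (weights)."""
    def __init__(self, units):
        self.units = units
        self.weights = [0.0] * units   # stand-in for real weight tensors

    def get_config(self):
        return {"units": self.units}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

    def save_weights(self, path):
        with open(path, "w") as f:
            json.dump(self.weights, f)

    def load_weights(self, path):
        with open(path) as f:
            self.weights = json.load(f)

model = TinyModel(3)
model.weights = [1.0, 2.0, 3.0]        # pretend training happened

path = os.path.join(tempfile.mkdtemp(), "weights.json")
model.save_weights(path)

# Rebuild the architecture from config, then restore the state
reinitialized = TinyModel.from_config(model.get_config())
reinitialized.load_weights(path)
```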

Answered on 2019-05-17.

Try saving the model as:

model.save('model_name.model')

Then load it with:

model = tf.keras.models.load_model('model_name.model')

Answered on 2019-04-30.

I found a temporary solution. The problem seems to occur with the sequential API, tf.keras.Sequential; when I use the functional API instead, tf.keras.models.load_model manages to load the saved model. I hope they will fix this in the final release; see the issue I opened on GitHub: https://github.com/tensorflow/tensorflow/issues/28281.

Cheers,

Answered on 2019-05-07.

Another way of saving a trained model is to use the pickle module in Python:

import pickle
pickle.dump(model, open(filename, 'wb'))

To load the pickled model:

loaded_model = pickle.load(open(filename, 'rb'))

The usual extension for a pickle file is .sav.
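For completeness, here is the full round trip with a plain Python object and a temporary file. Note that pickling real TF/Keras models often fails or produces fragile files, because they hold unpicklable internals; treat this as a sketch of the pattern, not a recommendation for large models:

```python
import os
import pickle
import tempfile

# A plain dict standing in for a trained model's state
obj = {"weights": [0.1, 0.2], "config": {"units": 3}}

filename = os.path.join(tempfile.mkdtemp(), "model.sav")

# Save: serialize the object to disk in binary mode
with open(filename, "wb") as f:
    pickle.dump(obj, f)

# Load: read the bytes back into an equivalent object
with open(filename, "rb") as f:
    loaded_model = pickle.load(f)
```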

Answered on 2019-04-30.