
I can't import the elmo module from TensorFlow Hub. I'm able to import other modules and use them successfully. I'm running TF 2.0 on a GCP JupyterLab instance with a GPU. When I try this:

import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)

I get:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-caced7ee1735> in <module>
----> 1 elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)

/usr/local/lib/python3.5/dist-packages/tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags)
    174           name=self._name,
    175           trainable=self._trainable,
--> 176           tags=self._tags)
    177       # pylint: enable=protected-access
    178 

/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags)
    384         trainable=trainable,
    385         checkpoint_path=self._checkpoint_variables_path,
--> 386         name=name)
    387 
    388   def _export(self, path, variables_saver):

/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name)
    443     # TPU training code.
    444     with scope_func():
--> 445       self._init_state(name)
    446 
    447   def _init_state(self, name):

/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _init_state(self, name)
    446 
    447   def _init_state(self, name):
--> 448     variable_tensor_map, self._state_map = self._create_state_graph(name)
    449     self._variable_map = recover_partitioned_variable_map(
    450         get_node_map_from_tensor_map(variable_tensor_map))

/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _create_state_graph(self, name)
    503         meta_graph,
    504         input_map={},
--> 505         import_scope=relative_scope_name)
    506 
    507     # Build a list from the variable name in the module definition to the actual

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs)
   1451   return _import_meta_graph_with_return_elements(meta_graph_or_file,
   1452                                                  clear_devices, import_scope,
-> 1453                                                  **kwargs)[0]
   1454 
   1455 

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs)
   1461   """Import MetaGraph, and return both a saver and returned elements."""
   1462   if context.executing_eagerly():
-> 1463     raise RuntimeError("Exporting/importing meta graphs is not supported when "
   1464                        "eager execution is enabled. No graph exists when eager "
   1465                        "execution is enabled.")

RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.

2 Answers


The hub.Module API does not work in Eager mode. Please see https://www.tensorflow.org/hub/migration_tf2, and note the change in file format from the TF1 hub.Module to the TF2 SavedModel.
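For example, a minimal sketch of what the migration guide points to, using the same handle as in the question (hub.load() works in eager mode):

import tensorflow as tf
import tensorflow_hub as hub

print(tf.executing_eagerly())  # True by default in TF2, which is why hub.Module() raises

# TF2-native loading suggested by the migration guide; works in eager mode.
elmo = hub.load("https://tfhub.dev/google/elmo/3")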

Answered 2020-03-11T12:33:37.623
handle = "https://tfhub.dev/google/elmo/3"

If you want to load and run inference with TF2, the following two approaches are recommended. hub.load() is the new low-level function for loading a SavedModel from TensorFlow Hub (or a compatible service). It wraps TF2's tf.saved_model.load().

model = hub.load(handle)
outputs = model(inputs)
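Since elmo/3 is still in the TF1 Hub format, the loaded object is used through its signatures rather than called directly. A minimal sketch, assuming the module's "default" signature and "elmo" output key (check the module page to confirm):

import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.load("https://tfhub.dev/google/elmo/3")

# TF1-format models expose their functionality as signatures after hub.load().
sentences = tf.constant(["the cat is on the mat", "dogs are in the fog"])
outputs = elmo.signatures["default"](sentences)

# "elmo" should hold the weighted sum of the layers, shape [batch, max_tokens, 1024].
embeddings = outputs["elmo"]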

or

model = hub.KerasLayer(handle, signature="sig")
outputs = model(inputs)

The hub.KerasLayer class calls hub.load() and adapts the result for use in Keras alongside other Keras layers. (It can even serve as a convenient wrapper around a loaded SavedModel that is also used in other ways.)

model = tf.keras.Sequential([
    hub.KerasLayer(handle),
    ...])
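As a sketch of a small Keras text model built this way. The handle here is a TF2 SavedModel text embedding chosen only for illustration (nnlm-en-dim50/2), because hub.KerasLayer supports TF1-format models such as elmo/3 for inference only, not fine-tuning:

import tensorflow as tf
import tensorflow_hub as hub

# Illustrative TF2 SavedModel text embedding; maps a batch of strings to 50-dim vectors.
embedding = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                           input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embedding,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))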

But if you want to fine-tune, load the legacy model in the TF1 Hub format, as in TF1:

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub    
tf.disable_v2_behavior()

elmo = hub.Module(handle, trainable=True)
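A sketch of running the module this way in graph mode; the "default" signature, the "elmo" output key, and the initializer calls follow the usual TF1 ELMo usage, so treat them as assumptions to verify against the module's documentation:

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_v2_behavior()

elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)

sentences = ["the cat is on the mat", "dogs are in the fog"]
# "default" signature on raw strings; as_dict=True exposes the named outputs.
embeddings = elmo(sentences, signature="default", as_dict=True)["elmo"]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (2, max_tokens, 1024)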

Source: https://www.tensorflow.org/hub/model_compatibility

Answered 2021-11-05T08:54:57.473