I have trained a StyleGAN2-ADA ffhq1024 model on a custom mammogram dataset in Google Colab (link to their repo). The trained model's .pkl file is ready in a Drive folder, and I want to use that .pkl file to generate images. I tried:
!python generate.py --outdir='/content/drive/MyDrive/TFM/Generated' --trunc=1 --seeds=85,265,297,849 \
    --network='/content/drive/MyDrive/TFM/colab-sg2-ada/stylegan2-ada/results/00025-ddsm-auto1-bg-resumecustom/network-snapshot-000096.pkl'
as suggested on GitHub, but I get this error:
usage: generate.py [-h] {generate-images,truncation-traversal,generate-latent-walk,generate-neighbors,lerp-video} ...
generate.py: error: unrecognized arguments: --outdir='/content/drive/MyDrive/TFM/Generated' --trunc=1 --seeds=85,265,297,849 --network='/content/drive/MyDrive/TFM/colab-sg2-ada/stylegan2-ada/results/00025-ddsm-auto1-bg-resumecustom/network-snapshot-000096.pkl'
I really don't know why generate.py doesn't recognize the arguments... I had to run !pip install opensimplex before generate.py would run at all; I don't know whether that is related to the problem.
In the StyleGAN2-ADA repository there are examples of generating images with a trained model:
# Generate curated MetFaces images without truncation (Fig.10 left)
python generate.py --outdir=out --trunc=1 --seeds=85,265,297,849 \
--network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl
This is the output of running !python generate.py -h:
usage: generate.py [-h]
                   {generate-images,truncation-traversal,generate-latent-walk,generate-neighbors,lerp-video}
                   ...

Generate images using pretrained network pickle.

positional arguments:
  {generate-images,truncation-traversal,generate-latent-walk,generate-neighbors,lerp-video}
                        Sub-commands
    generate-images     Generate images
    truncation-traversal
                        Generate truncation walk
    generate-latent-walk
                        Generate latent walk
    generate-neighbors  Generate random neighbors of a seed
    lerp-video          Generate interpolation video (lerp) between random
                        vectors

optional arguments:
  -h, --help            show this help message and exit

examples:

  # Generate curated MetFaces images without truncation (Fig.10 left)
  python generate.py --outdir=out --trunc=1 --seeds=85,265,297,849 \
      --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl

  # Generate uncurated MetFaces images with truncation (Fig.12 upper left)
  python generate.py --outdir=out --trunc=0.7 --seeds=600-605 \
      --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl

  # Generate class conditional CIFAR-10 images (Fig.17 left, Car)
  python generate.py --outdir=out --trunc=1 --seeds=0-35 --class=1 \
      --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/cifar10.pkl

  # Render image from projected latent vector
  python generate.py --outdir=out --dlatents=out/dlatents.npz \
      --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl
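Judging from that usage line, the generate.py I am running seems to expect one of the listed sub-commands (generate-images, lerp-video, etc.) before the options, which the README examples above do not use. My guess is that the call should look something like the cell below, but I have not confirmed that the generate-images sub-command takes exactly these flags:

# guess: prepend the generate-images sub-command shown in the -h output
!python generate.py generate-images \
    --outdir='/content/drive/MyDrive/TFM/Generated' --trunc=1 --seeds=85,265,297,849 \
    --network='/content/drive/MyDrive/TFM/colab-sg2-ada/stylegan2-ada/results/00025-ddsm-auto1-bg-resumecustom/network-snapshot-000096.pkl'

Is that the correct way to call it, or is something else wrong with my setup?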