
[AI Art] fal/AuraFlow-v0.2 fails with a "delete the irrelevant ones" error

Since the AuraFlow model is fairly large, I downloaded it locally under /hf_hub and then ran the code from the model's README.md on Hugging Face:

from diffusers import AuraFlowPipeline
import torch

pipeline = AuraFlowPipeline.from_pretrained(
    "/hf_hub/fal/AuraFlow-v0.2",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = pipeline(
    prompt="close-up portrait of a majestic iguana with vibrant blue-green scales, piercing amber eyes, and orange spiky crest. Intricate textures and details visible on scaly skin. Wrapped in dark hood, giving regal appearance. Dramatic lighting against black background. Hyper-realistic, high-resolution image showcasing the reptile's expressive features and coloration.",
    height=1024,
    width=1024,
    num_inference_steps=50, 
    generator=torch.Generator().manual_seed(666),
    guidance_scale=3.5,
).images[0]

image  # shown inline when run in a notebook

ValueError: /hf_hub/fal/AuraFlow-v0.2/transformer/ containing more than one .index.json file, delete the irrelevant ones.

I could not find anything about this online, so I experimented a bit and found that the transformer folder contains more than one .index.json file (the full-precision index and the fp16 index), which is exactly what the loader complains about. I then tested a fix using symbolic links.
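
To confirm what is colliding, a quick look at the transformer subfolder helps. A minimal sketch (the path is simply where I downloaded the model):

from pathlib import Path

# List every sharded-checkpoint index file the loader can see.
# With the full-precision and fp16 checkpoints sitting in the same folder,
# two files match, which is exactly what the ValueError reports.
transformer_dir = Path("/hf_hub/fal/AuraFlow-v0.2/transformer")
for index_file in sorted(transformer_dir.glob("*.index.json")):
    print(index_file.name)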

Solution

Delete one of the .index.json files so that each transformer folder exposes only one. I kept both variants by building two symlinked directories instead of deleting anything; my layout is shown below for reference (a scripted sketch follows the tree):

fal/
├── AuraFlow-v0.2
│   ├── aura_flow_0.2.safetensors -> /hf_hub/fal/AuraFlow-v0.2/aura_flow_0.2.safetensors
│   ├── model_index.json -> /hf_hub/fal/AuraFlow-v0.2/model_index.json
│   ├── scheduler -> /hf_hub/fal/AuraFlow-v0.2/scheduler
│   ├── text_encoder -> /hf_hub/fal/AuraFlow-v0.2/text_encoder/
│   ├── tokenizer -> /hf_hub/fal/AuraFlow-v0.2/tokenizer/
│   ├── transformer
│   │   ├── config.json -> /hf_hub/fal/AuraFlow-v0.2/transformer/config.json
│   │   ├── diffusion_pytorch_model-00001-of-00003.safetensors -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model-00001-of-00003.safetensors
│   │   ├── diffusion_pytorch_model-00002-of-00003.safetensors -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model-00002-of-00003.safetensors
│   │   ├── diffusion_pytorch_model-00003-of-00003.safetensors -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model-00003-of-00003.safetensors
│   │   └── diffusion_pytorch_model.safetensors.index.json -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model.safetensors.index.json
│   └── vae -> /hf_hub/fal/AuraFlow-v0.2/vae/
└── AuraFlow-v0.2-fp16
    ├── aura_flow_0.2.safetensors -> /hf_hub/fal/AuraFlow-v0.2/aura_flow_0.2.safetensors
    ├── model_index.json -> /hf_hub/fal/AuraFlow-v0.2/model_index.json
    ├── scheduler -> /hf_hub/fal/AuraFlow-v0.2/scheduler
    ├── text_encoder -> /hf_hub/fal/AuraFlow-v0.2/text_encoder/
    ├── tokenizer -> /hf_hub/fal/AuraFlow-v0.2/tokenizer/
    ├── transformer
    │   ├── config.json -> /hf_hub/fal/AuraFlow-v0.2/transformer/config.json
    │   ├── diffusion_pytorch_model-00001-of-00002.fp16.safetensors -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model-00001-of-00002.fp16.safetensors
    │   ├── diffusion_pytorch_model-00002-of-00002.fp16.safetensors -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model-00002-of-00002.fp16.safetensors
    │   └── diffusion_pytorch_model.safetensors.fp16.index.json -> /hf_hub/fal/AuraFlow-v0.2/transformer/diffusion_pytorch_model.safetensors.fp16.index.json
    └── vae -> /hf_hub/fal/AuraFlow-v0.2/vae/
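
A minimal sketch of how the fp16-only view of this layout can be built with the standard library. Assumptions: the full download sits under /hf_hub/fal/AuraFlow-v0.2 and the fp16-only copy goes to fal/AuraFlow-v0.2-fp16; the full-precision copy is built the same way, just keeping the non-fp16 shards and index instead.

from pathlib import Path

src = Path("/hf_hub/fal/AuraFlow-v0.2")   # original download (holds both index files)
dst = Path("fal/AuraFlow-v0.2-fp16")      # fp16-only view of the same files
(dst / "transformer").mkdir(parents=True, exist_ok=True)

# Top-level components are shared unchanged between the two views.
for name in ["model_index.json", "scheduler", "text_encoder", "tokenizer", "vae"]:
    link = dst / name
    if not link.exists():
        link.symlink_to(src / name)

# Inside transformer/, expose config.json plus only the fp16 shards and the
# fp16 index, so exactly one .index.json is visible to the loader.
for f in (src / "transformer").iterdir():
    if f.name == "config.json" or ".fp16." in f.name:
        link = dst / "transformer" / f.name
        if not link.exists():
            link.symlink_to(f)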

With fp16

Fri Aug  2 19:48:23 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.58.02              Driver Version: 555.58.02      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
| 34%   58C    P0            407W /  515W |   18542MiB /  24564MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1124      G   /usr/lib/Xorg                                 167MiB |
|    0   N/A  N/A      1189      G   /usr/bin/sddm-greeter-qt6                     146MiB |
|    0   N/A  N/A      3878      C   ...conda3/envs/ai-train/bin/python3.10      18196MiB |
+-----------------------------------------------------------------------------------------+

Without fp16

Fri Aug  2 19:50:18 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.58.02              Driver Version: 555.58.02      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
| 36%   44C    P0             71W /  515W |   20834MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1124      G   /usr/lib/Xorg                                 167MiB |
|    0   N/A  N/A      1189      G   /usr/bin/sddm-greeter-qt6                     146MiB |
|    0   N/A  N/A      4031      C   ...conda3/envs/ai-train/bin/python3.10      20488MiB |
+-----------------------------------------------------------------------------------------+
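
For reference, the two nvidia-smi readings above correspond to loading from the two directories. The exact arguments of the second run are my assumption (I only drop variant="fp16" and point at the full-precision directory); a sketch:

from diffusers import AuraFlowPipeline
import torch

# fp16 run: this directory only exposes the .fp16 shards and index.
pipe_fp16 = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.2-fp16",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# "Without fp16" run: the full-precision shards are read and cast to float16
# at load time; no variant flag is needed here.
# (Run one at a time; both pipelines will not fit in 24 GB together.)
pipe_fp32 = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.2",
    torch_dtype=torch.float16,
).to("cuda")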