
Deep Learning Notes -- Deploying MiniGPT-4 Locally


Contents

1--Introduction

2--Environment Setup

3--Downloading the Weights

4--Generating the Vicuna Weights

5--Testing

6--Possible Issues


1--Introduction

Local environment:

        System: Ubuntu 18.04

        GPU: Tesla V100 (32 GB)

        CUDA: 10.0 (11.3 also works)

Project repository: https://github.com/Vision-CAIR/MiniGPT-4

2--Environment Setup

git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4

        The default environment name is minigpt4; it can be changed by editing the name field in environment.yml. Here the author renamed the environment to ljf_minigpt4.

3--Downloading the Weights

        The author uses the llama-7b-hf LLaMA weights and the vicuna-7b-delta-v1.1 Vicuna delta weights; the corresponding download links are below:

LLaMA weights download link

Vicuna delta weights download link

        Two download methods are provided. The first uses the huggingface_hub library:

pip install huggingface_hub

from huggingface_hub import snapshot_download
snapshot_download(repo_id='decapoda-research/llama-7b-hf')
# Stored at: ~/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/(hash)/
snapshot_download(repo_id='lmsys/vicuna-7b-delta-v1.1')
# Stored at: ~/.cache/huggingface/hub/models--lmsys--vicuna-7b-delta-v1.1/snapshots/(hash)/
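        One detail worth knowing: snapshot_download returns the local snapshot directory, which saves hunting for the hash-named folder under ~/.cache by hand. A minimal sketch:

from huggingface_hub import snapshot_download

# returns the cached snapshot path without re-downloading
# if the files are already present locally
llama_path = snapshot_download(repo_id='decapoda-research/llama-7b-hf')
print(llama_path)  # use this path in the later steps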

        The first method is prone to connection timeouts, so here is a second approach based on wget:

# Record each file's download URL and fetch it with wget; download.txt contains:
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/.gitattributes
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/LICENSE
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/README.md
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/config.json
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/generation_config.json
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00001-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00002-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00003-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00004-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00005-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00006-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00007-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00008-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00009-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00010-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00011-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00012-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00013-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00014-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00015-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00016-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00017-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00018-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00019-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00020-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00021-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00022-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00023-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00024-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00025-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00026-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00027-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00028-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00029-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00030-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00031-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00032-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model-00033-of-00033.bin
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/pytorch_model.bin.index.json
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/special_tokens_map.json
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer.model
https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json

        Write a download.sh that fetches every file listed in download.txt:

#!/bin/bash
while read file; do
    wget "${file}"
done < download.txt

        Similarly, record the URLs of all the delta files and download them with wget:

https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/.gitattributes
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/README.md
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/config.json
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/generation_config.json
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00001-of-00002.bin
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model-00002-of-00002.bin
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/pytorch_model.bin.index.json
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/special_tokens_map.json
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer.model
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1/resolve/main/tokenizer_config.json
        The same download.sh can then be reused to fetch these files.
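        With dozens of large files involved, it is worth verifying that nothing is missing before moving on. The Python sketch below compares the shard list in each model's pytorch_model.bin.index.json against the files on disk; the two directory names are assumptions, so adjust them to wherever you saved each download:

import json
import os

def check_shards(model_dir):
    # the index file maps every tensor name to the shard file that stores it
    with open(os.path.join(model_dir, 'pytorch_model.bin.index.json')) as f:
        index = json.load(f)
    expected = set(index['weight_map'].values())
    missing = [s for s in sorted(expected)
               if not os.path.exists(os.path.join(model_dir, s))]
    print(model_dir, '->', 'complete' if not missing else f'missing: {missing}')

check_shards('llama-7b-hf')           # 33 shards expected
check_shards('vicuna-7b-delta-v1.1')  # 2 shards expected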

4--Generating the Vicuna Weights

        Install the FastChat Python package:

pip install git+https://github.com/lm-sys/FastChat.git@v0.1.10
# or
pip install fschat

        Run the following command in a terminal to generate the final weights:

python3 -m fastchat.model.apply_delta \
    --base-model-path llama-7b-hf_path \
    --target-model-path vicuna-7b_path \
    --delta-path vicuna-7b-delta-v1.1_path

--base-model-path is the path to the llama-7b-hf weights downloaded in step 3;

--target-model-path is the path where the merged Vicuna weights will be written;

--delta-path is the path to the vicuna-7b-delta-v1.1 delta weights downloaded in step 3.
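        As a quick sanity check after the merge (a minimal sketch, assuming the merged weights were written to ./vicuna-7b), the target directory should contain a config.json describing a LLaMA-architecture model plus its own sharded weights:

import json
import os

target = 'vicuna-7b'  # adjust to your --target-model-path
with open(os.path.join(target, 'config.json')) as f:
    cfg = json.load(f)
# a 7B LLaMA-family model should report hidden_size 4096 and 32 layers
print(cfg.get('model_type'), cfg.get('hidden_size'), cfg.get('num_hidden_layers'))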

5--Testing

        First download the pretrained MiniGPT-4 checkpoint; the author uses the Checkpoint Aligned with Vicuna 7B:

Checkpoint Aligned with Vicuna 13B download link

Checkpoint Aligned with Vicuna 7B download link

        Next, edit the evaluation configs:

        In MiniGPT-4/eval_configs/minigpt4_eval.yaml, set the ckpt field to the path of the Checkpoint Aligned with Vicuna 7B.

        In MiniGPT-4/minigpt4/configs/models/minigpt4.yaml, set the llama_model field to the path of the Vicuna weights generated in step 4.
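        Path typos in these two YAML files are a common source of startup errors, so a small check script can save a restart. The sketch below (assuming PyYAML is available; pip install pyyaml) searches each config for the relevant key and reports whether the path it points to exists:

import os
import yaml

def find_key(node, key):
    # recursively search the parsed YAML tree, so the sketch does not
    # depend on the exact nesting used by a particular MiniGPT-4 version
    if isinstance(node, dict):
        if key in node:
            return node[key]
        for value in node.values():
            found = find_key(value, key)
            if found is not None:
                return found
    elif isinstance(node, list):
        for value in node:
            found = find_key(value, key)
            if found is not None:
                return found
    return None

for cfg_path, key in [('eval_configs/minigpt4_eval.yaml', 'ckpt'),
                      ('minigpt4/configs/models/minigpt4.yaml', 'llama_model')]:
    with open(cfg_path) as f:
        value = find_key(yaml.safe_load(f), key)
    print(f'{key} = {value} (exists: {os.path.exists(str(value))})')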

        Launch the web demo of MiniGPT-4 with:

python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0

        Open the local URL on the same machine, or open the public URL from another computer. You can also modify the provided demo to run inference directly in the terminal without the web UI, as sketched below.
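        Here is a minimal command-line sketch modeled on demo.py. The Chat/CONV_VISION names follow the repo's demo.py, but exact signatures may differ between MiniGPT-4 versions, so treat this as a starting point rather than a drop-in script:

import argparse
from minigpt4.common.config import Config
from minigpt4.common.registry import registry
from minigpt4.conversation.conversation import Chat, CONV_VISION

parser = argparse.ArgumentParser()
parser.add_argument('--cfg-path', default='eval_configs/minigpt4_eval.yaml')
parser.add_argument('--gpu-id', type=int, default=0)
parser.add_argument('--image', required=True, help='path to the input image')
parser.add_argument('--options', nargs='+', default=[])
args = parser.parse_args()

# build the model and visual processor the same way demo.py does
cfg = Config(args)
model_config = cfg.model_cfg
model_config.device_8bit = args.gpu_id  # demo.py pins 8-bit loading to this GPU
model_cls = registry.get_model_class(model_config.arch)
model = model_cls.from_config(model_config).to(f'cuda:{args.gpu_id}')
vis_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train
vis_processor = registry.get_processor_class(vis_cfg.name).from_config(vis_cfg)
chat = Chat(model, vis_processor, device=f'cuda:{args.gpu_id}')

# one image, multi-turn chat loop in the terminal
chat_state = CONV_VISION.copy()
img_list = []
chat.upload_img(args.image, chat_state, img_list)
while True:
    question = input('You: ').strip()
    if not question:  # empty input ends the session
        break
    chat.ask(question, chat_state)
    answer = chat.answer(conv=chat_state, img_list=img_list,
                         max_new_tokens=300)[0]
    print('MiniGPT-4:', answer)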

6--Possible Issues

① name 'cuda_setup' is not defined:

Solution:

        Upgrade the bitsandbytes library; the author resolved the issue with bitsandbytes 0.38.1:

pip install bitsandbytes==0.38.1
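        After upgrading, importing the library is a quick way to confirm the error is gone:

# the error above is raised at import time, so a clean import confirms the fix
import bitsandbytes
print('bitsandbytes imported OK')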
