Hardware environment
Operating system
Build environment (optional)
cd llama.cpp
make
(base) llama@llama-3090:~/AI/llama.cpp$ make
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:
I CC:       cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
I CXX:      g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -c ggml.c -o ggml.o
ggml.c: In function ‘bytes_from_nibbles_32’:
ggml.c:534:27: warning: implicit declaration of function ‘_mm256_set_m128i’; did you mean ‘_mm256_set_epi8’? [-Wimplicit-function-declaration]
     const __m256i bytes = _mm256_set_m128i(_mm_srli_epi16(tmp, 4), tmp);
                           ^~~~~~~~~~~~~~~~
                           _mm256_set_epi8
ggml.c:534:27: error: incompatible types when initializing type ‘__m256i {aka const __vector(4) long long int}’ using type ‘int’
Makefile:186: recipe for target 'ggml.o' failed
make: *** [ggml.o] Error 1
The build fails because GCC 7.5 does not provide the _mm256_set_m128i intrinsic (it was only added in GCC 8). After updating Ubuntu's gcc and g++ to a newer version, run make again and the build succeeds.
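One way to get a newer toolchain on Ubuntu 18.04 is the ubuntu-toolchain-r PPA. This is only a sketch: GCC 9 is an assumption here, and any GCC >= 8 should work.

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-9 g++-9
# make gcc-9/g++-9 the default compilers
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9
gcc --version   # should now report 9.x
make clean && make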
torch==1.13.1
peft==0.3.0dev
transformers==4.28.1
sentencepiece==0.1.97
Install CUDA (CUDA == 11.7). Note that the example commands below install 11.4.0; if you need a different version such as 11.7, substitute the matching installer from NVIDIA's download page.
1. Uninstall any existing CUDA
sudo apt-get remove cuda
sudo apt autoremove
sudo apt-get remove cuda*
cd /usr/local/
sudo rm -r cuda-11.x   # "x" stands for the installed version number
sudo dpkg -l | grep cuda   # list the CUDA packages still registered with dpkg
At this point nvcc -V may still show a CUDA version. Next run:
sudo apt-get --purge remove "*cublas*" "*cufft*" "*curand*" "*cusolver*" "*cusparse*" "*npp*" "*nvjpeg*" "cuda*" "nsight*"
Now the uninstall is complete, and nvcc -V no longer reports a version.
2. Install CUDA
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda-repo-ubuntu2004-11-4-local_11.4.0-470.42.01-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-4-local_11.4.0-470.42.01-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-4-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
3. Verify the installation
nvcc -V   # check that the expected version is printed
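If nvcc is not found after installation, the toolkit's directories are probably missing from your environment. A minimal sketch, assuming the default install prefix /usr/local/cuda-11.4 (adjust for your version): append the following to ~/.bashrc, then run source ~/.bashrc.

export PATH=/usr/local/cuda-11.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH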
Link: https://pan.baidu.com/s/1Q6Lbw6kDR3dig73Q5uTnNg?pwd=vhdg  Extraction code: vhdg
Link: https://pan.baidu.com/s/17QGtgKYrd5dF1XGBm9xmDg?pwd=sbqi  Extraction code: sbqi
Link: https://pan.baidu.com/s/1WD2vU8P-CLq_0UjVuNnLoA?pwd=v37s  Extraction code: v37s
1. Clone ymcui's repository:
git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca/
2. Make sure the machine has enough RAM to load the full model for the merge operation (the 13B model needs more than 32 GB); a quick check is shown below.
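You can confirm available memory with the standard free utility:

free -h   # the "available" column should exceed the model's requirement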
3. Verify the integrity of the original LLaMA model and the downloaded LoRA model: the checksums must match the values listed in SHA256.md, otherwise the merge cannot proceed. The original LLaMA release consists of tokenizer.model, tokenizer_checklist.chk, consolidated.*.pth, and params.json; a sample check follows.
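A minimal way to compare checksums on Linux (the file names follow the LLaMA layout above; run inside the directory where you stored the weights):

sha256sum tokenizer.model consolidated.00.pth params.json
# compare each printed digest against the corresponding entry in SHA256.md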
4. The main dependencies are listed below (if you run into problems, install these exact versions):
torch (1.10, 1.12, and 1.13 tested OK)
transformers (4.28.1 tested OK)
sentencepiece (0.1.97 tested OK)
peft (0.3.0dev tested OK) --> if the dev build cannot be found, version 0.3.0 also works
Python 3.9 or newer is recommended.
pip install torch==1.13.1
pip install transformers
pip install sentencepiece
pip install git+https://github.com/huggingface/peft
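After installing, a quick sanity check that the imported versions match the tested ones (a sketch; it assumes all four packages expose __version__, which these releases do):

python -c "import torch, transformers, sentencepiece, peft; print(torch.__version__, transformers.__version__, sentencepiece.__version__, peft.__version__)"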
5. Convert the original LLaMA model to Hugging Face format, for example with the script sketched below.
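transformers ships a conversion script for this step. A sketch of the invocation (the paths are placeholders; in transformers 4.28 the script lives under the source tree, so run it from a transformers checkout or your site-packages copy):

python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/original-llama \
    --model_size 7B \
    --output_dir /path/to/llama-hf
# --input_dir is the directory that contains tokenizer.model and the 7B/13B subfolders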