Today I got Qwen fine-tuning running end to end on ModelScope. Training is not yet supported on a Mac; it needs an NVIDIA GPU, so the options are a machine with an NVIDIA card or a cloud server. ModelScope offers a few dozen free GPU hours, but the environment is not entirely stable and sometimes shuts down on its own.
Registering with a phone number is all it takes.
This step may not be strictly necessary: any model will do as long as it gets the GPU environment started, and if you already have your own code you can launch the environment directly. A model's description page usually includes sample test code; paste it into the notebook and run it to see the result. I used the smallest Qwen model, the 0.5B one; the code and its output are as follows:
from modelscope import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-0.5B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

prompt = "你好,什么是 Java?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Apply Qwen's chat template to build the final prompt string
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated reply remains
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
Fine-tuning takes a few steps. First, prepare the training data; since this is just a test, I used the example dataset that ships with LLaMA-Factory (a sketch of what a custom dataset would look like follows the install commands below). Then configure the command-line arguments and launch training.
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -r requirements.txt
pip install modelscope -U
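The training command below points at the example dataset bundled with the repo (--dataset example). To train on your own data instead, you would write an alpaca-style JSON file and register it in data/dataset_info.json. A minimal sketch, assuming the alpaca format; my_demo and both records are made-up placeholders, not files in the repo:

import json

# Two made-up records in the alpaca format LLaMA-Factory expects:
# each entry carries instruction / input / output fields.
records = [
    {
        "instruction": "What is Java?",
        "input": "",
        "output": "Java is a cross-platform, object-oriented programming language."
    },
    {
        "instruction": "What is the weather like in New York?",
        "input": "",
        "output": "I do not have access to live weather data."
    }
]
with open("data/my_demo.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)

# Register the file so that --dataset my_demo resolves to it.
with open("data/dataset_info.json", "r+", encoding="utf-8") as f:
    info = json.load(f)
    info["my_demo"] = {"file_name": "my_demo.json"}
    f.seek(0)
    json.dump(info, f, ensure_ascii=False, indent=2)
    f.truncate()

With a custom dataset you would swap --dataset example for --dataset my_demo in the command below.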
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path Qwen/Qwen1.5-0.5B-Chat \
    --dataset example \
    --template qwen \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir output \
    --overwrite_cache \
    --overwrite_output_dir true \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 32 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
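Training writes only the LoRA adapter weights to the output directory, not a full model. Before merging, it can be worth checking that the adapter loads on top of the base model. A minimal sketch using peft, which LLaMA-Factory itself depends on; the adapter path matches --output_dir above:

from modelscope import AutoModelForCausalLM
from peft import PeftModel  # LLaMA-Factory trains LoRA adapters via peft

# Load the original base model, then attach the freshly trained adapter
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-0.5B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
model = PeftModel.from_pretrained(base, "output")  # path from --output_dir
model.eval()

If the adapter loads cleanly, merge it into the base weights and export a standalone checkpoint: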
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path Qwen/Qwen1.5-0.5B-Chat \
    --adapter_name_or_path output \
    --template qwen \
    --finetuning_type lora \
    --export_dir Qwen1.5-0.5B-Chat_fine \
    --export_size 2 \
    --export_legacy_format False
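With the merged checkpoint exported, load it from the export directory and repeat the chat test against the fine-tuned weights: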
from modelscope import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the merged, fine-tuned checkpoint exported in the previous step
model = AutoModelForCausalLM.from_pretrained(
    "/mnt/workspace/LLaMA-Factory/Qwen1.5-0.5B-Chat_fine",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("/mnt/workspace/LLaMA-Factory/Qwen1.5-0.5B-Chat_fine")

prompt = "你好,纽约天气怎么样?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Keep only the newly generated tokens
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
There are plenty of open-source frameworks now, so getting a training run going is not complicated; producing a model that is actually usable in production still takes real time. Compare the model's answers about New York's weather before and after training: in all likelihood it hallucinates either way.