
LLAMA3 == shenzhi-wang/Llama3-8B-Chinese-Chat: installing on Windows without Ollama


Create the environment:

conda create -n llama3_env python=3.10
conda activate llama3_env
conda install pytorch torchvision torchaudio cudatoolkit=11.7 -c pytorch
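
If the cudatoolkit-style install fails to resolve (recent PyTorch builds ship CUDA through the pytorch-cuda metapackage instead of cudatoolkit), the current equivalent looks roughly like this; the CUDA version here is an assumption, match it to your driver:

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia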

Install Hugging Face's Transformers library:

pip install transformers sentencepiece
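
An optional sanity check before downloading an 8B model: confirm that Transformers imports and that PyTorch actually sees the GPU.

python -c "import torch, transformers; print(transformers.__version__, torch.cuda.is_available())"
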
Download the model

https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/main
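
Instead of clicking through the web page, the whole repo can also be pulled with huggingface_hub; a minimal sketch, where the local path simply mirrors the one used in the script below:

from huggingface_hub import snapshot_download

# Download all model files (weights, tokenizer, config) into a local folder
snapshot_download(
    repo_id="shenzhi-wang/Llama3-8B-Chinese-Chat",
    local_dir="F:\\ollama_models\\Llama3-8B-Chinese-Chat",
)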

Write code to call the model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Check whether CUDA is available and pick the device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(torch.cuda.is_available())
print(device)

# Load the model and tokenizer from the local download path
model_name = "F:\\ollama_models\\Llama3-8B-Chinese-Chat"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A plain completion-style helper, kept commented out here
# def generate_text(prompt):
#     inputs = tokenizer(prompt, return_tensors="pt").to(device)
#     outputs = model.generate(inputs['input_ids'], max_length=100)
#     return tokenizer.decode(outputs[0], skip_special_tokens=True)
#
# # Example usage
# prompt = "写一首诗吧,以春天为主题"
# print(generate_text(prompt))

# Chat-style generation through the model's chat template
messages = [
    {"role": "user", "content": "写一首诗吧"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt tokens and decode only the newly generated part
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))

It is very slow: answering a single question took roughly one to two minutes.
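
Part of the slowness is likely that from_pretrained loads the weights in full float32 by default, which for an 8B model may not fit in VRAM and then runs on the CPU. A minimal sketch of loading in half precision instead, assuming the GPU has enough memory and accelerate is installed for device_map:

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision: roughly halves memory use and speeds up inference
    device_map="auto",          # needs `pip install accelerate`; places the weights on the GPU automatically
)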

I think I will just go back to running Qwen with Ollama.
