
Running inference with a fine-tuned ChatGLM model first raised:

    …tr(self, key, value)
    AttributeError: property 'eos_token' of 'ChatGLMTokenizer' object has no …

Fix: copy the original model's tokenizer_config.json into the new model's directory, overwriting the file there.
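The copy step can be sketched as follows. The directory names `base` and `merged` are placeholders standing in for the original checkpoint and the fine-tuned one; here they are created as a throwaway temp layout purely so the snippet is self-contained — substitute your own paths:

```python
import json
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout for illustration: "base" stands in for the original
# ChatGLM checkpoint, "merged" for the fine-tuned model whose
# tokenizer_config.json triggers the AttributeError.
root = Path(tempfile.mkdtemp())
base, merged = root / "base", root / "merged"
base.mkdir()
merged.mkdir()

(base / "tokenizer_config.json").write_text(
    json.dumps({"tokenizer_class": "ChatGLMTokenizer"})
)
(merged / "tokenizer_config.json").write_text("{}")  # the problematic copy

# The fix from the text: overwrite the new model's file with the original.
shutil.copy(base / "tokenizer_config.json", merged / "tokenizer_config.json")
print((merged / "tokenizer_config.json").read_text())
```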

This then raised:

    def load_model_and_tokenizer(model_dir: Union[str, Path]) -> tuple[Mode…
      28 │     model_dir = _resolve_path(model_dir)
      29 │     if (model_dir / 'adapter_config.json').exists():
    ❱ 30 │         model = AutoPeftModelForCausalLM.from_pretrained(
      31 │             model_dir, trust_remote_code=True, device_map='auto'
      32 │         )
      33 │         tokenizer_dir = model.peft_config['default'].base_model_name_or…

    /home/dell/anaconda3/envs/llm3/lib/python3.11/site-packages/peft/auto.py:126
    in from_pretrained

Fix:

Downgrade peft to 0.7.1 (`pip install peft==0.7.1`).

This was followed by another error:

      33 │         tokenizer_dir = model.peft_config['default'].base_model_name_or…

    /home/dell/anaconda3/envs/llm3/lib/python3.11/site-packages/peft/auto.py:69
    in from_pretrained

...

     139 │
     140 │     @classmethod

    TypeError: LoraConfig.__init__() got an unexpected keyword argument 'use_dora'


Analysis:

peft package: the version used at training time differs from the one installed at inference time. The adapter config saved by a newer peft carries fields (here `use_dora`) that the older LoraConfig at inference time does not accept.

Pin peft to 0.7.1, retrain, and then try inference again.
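The mismatch can be reproduced in miniature without peft installed. `OldLoraConfig` below is a hypothetical stand-in for an older LoraConfig that predates `use_dora`; the real class lives in peft and has many more options. The filtering step at the end is an additional workaround idea (dropping unknown keys), not something the original post does — matching versions, as above, is the straightforward fix:

```python
import dataclasses as dc

# Stand-in for an older LoraConfig that predates the `use_dora` field.
@dc.dataclass
class OldLoraConfig:
    r: int = 8
    lora_alpha: int = 32

# An adapter_config.json written by a newer peft can carry extra keys.
saved = {"r": 16, "lora_alpha": 32, "use_dora": False}

# Feeding it straight into the old config reproduces the TypeError:
try:
    OldLoraConfig(**saved)
except TypeError as exc:
    print("reproduced:", exc)

# Besides pinning versions, one can also keep only the known fields:
known = {f.name for f in dc.fields(OldLoraConfig)}
cfg = OldLoraConfig(**{k: v for k, v in saved.items() if k in known})
print(cfg.r)  # → 16
```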

Additionally:

    # training_args: Seq2SeqTrainingArguments = dc.field(
    #     default=Seq2SeqTrainingArguments(output_dir='./output')
    # )

Change this code to:

    training_args: Seq2SeqTrainingArguments = dc.field(
        default_factory=Seq2SeqTrainingArguments
    )

Result: pass, OK.
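Why the `default_factory` form works can be seen with a minimal sketch. `Args` below is a simplified stand-in for `Seq2SeqTrainingArguments` (which is itself a dataclass with many more parameters): a shared instance as a field default is treated as a mutable default, which Python 3.11+ dataclasses reject outright, while `default_factory` builds a fresh instance per object:

```python
import dataclasses as dc

# Simplified stand-in for transformers' Seq2SeqTrainingArguments.
@dc.dataclass
class Args:
    output_dir: str = "./output"

# A shared instance as the default is what the commented-out code did.
# Python 3.11+ rejects any unhashable default (which includes dataclass
# instances) with "mutable default ... use default_factory"; older
# versions may silently share one instance across all objects.
try:
    @dc.dataclass
    class Bad:
        training_args: Args = dc.field(default=Args())
    print("accepted (this Python version does not flag the default)")
except ValueError as exc:
    print("rejected:", exc)

# default_factory constructs a fresh Args for every Config instance.
@dc.dataclass
class Config:
    training_args: Args = dc.field(default_factory=Args)

print(Config().training_args.output_dir)  # → ./output
```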
