Fine-tuning and inference with PyTorch both involve loading models, which often requires converting checkpoints between different platforms and setups (multi-GPU, single-GPU, CPU):
Case 1: the checkpoint contains the whole pickled model (architecture + weights), saved from a multi-GPU (`DataParallel`) run.

```python
pretrained_model = torch.load('muti_gpus_model.pth')  # architecture + weights

# Extract as a single-GPU model
gpu_model = pretrained_model.module  # GPU version

# Load into a CPU model
model = ModelArch()
pretrained_dict = pretrained_model.module.state_dict()
model.load_state_dict(pretrained_dict)  # CPU version
```
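A related, very common situation is loading a `DataParallel` state_dict directly into a plain model: every key carries a `module.` prefix, which must be stripped first. A minimal runnable sketch, using a hypothetical one-layer network as a stand-in for `ModelArch`:

```python
import torch
import torch.nn as nn

# Hypothetical tiny network standing in for ModelArch
class ModelArch(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Simulate a checkpoint saved from a DataParallel-wrapped model:
# every key in its state_dict is prefixed with "module."
wrapped = torch.nn.DataParallel(ModelArch())
multi_gpu_state = wrapped.state_dict()  # keys like "module.fc.weight"

# Strip the prefix so the weights fit a plain (single-device) model
cpu_state = {k.replace("module.", "", 1): v for k, v in multi_gpu_state.items()}

model = ModelArch()
model.load_state_dict(cpu_state)  # loads without key-mismatch errors
```

This avoids rebuilding a `DataParallel` wrapper just to reach `.module` when all you need is the weights.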
Case 2: the checkpoint contains only the weights (state_dict), saved from a multi-GPU model.

```python
model = ModelArch(para).cuda(0)  # build the network
model = torch.nn.DataParallel(model, device_ids=[0])  # wrap as a multi-GPU model
checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)  # load the weights
model.load_state_dict(checkpoint)  # initialize the network with the weights

# Extract as a single-GPU model
gpu_model = model.module  # GPU version

# Load into a CPU model
model = ModelArch(para)
model.load_state_dict(gpu_model.state_dict())
```
```python
torch.save(model.state_dict(), 'cpu_model.pth')  # save the CPU model
```

Note that the `()` after `state_dict` is required; saving the bound method itself leads to a `'function' object has no attribute 'copy'` error on loading.
Case 3: loading a saved state_dict as a CPU, single-GPU, or multi-GPU model.

```python
# Load as a CPU model
model = ModelArch(para)
checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)  # load the weights
model.load_state_dict(checkpoint)  # initialize the network with the weights

# Load as a GPU model
model = ModelArch(para).cuda()  # build the network
checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage.cuda(0))  # load the weights
model.load_state_dict(checkpoint)  # initialize the network with the weights

# Load as a multi-GPU model
model = ModelArch(para).cuda()  # build the network
model = torch.nn.DataParallel(model, device_ids=[0, 1])  # change device_ids to fit your setup
checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage.cuda(0))  # load the weights
model.module.load_state_dict(checkpoint)  # initialize the network with the weights
```
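The save/load round trip above can be demonstrated end to end on CPU. A runnable sketch, again with a hypothetical one-layer network standing in for `ModelArch` and a temporary file in place of `model_path`:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for ModelArch
class ModelArch(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

model = ModelArch()
path = os.path.join(tempfile.mkdtemp(), "cpu_model.pth")

# Save only the weights (state_dict), not the whole pickled module
torch.save(model.state_dict(), path)

# map_location="cpu" (a string works like the lambda above) makes the
# load succeed even if the checkpoint was written on a GPU machine
checkpoint = torch.load(path, map_location="cpu")
restored = ModelArch()
restored.load_state_dict(checkpoint)
```

Saving the state_dict rather than the full model is also the more portable choice, since it does not pin the checkpoint to the original class definition or device layout.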