
YOLOv7: auto-stop training with a patience setting (early stopping) and output the PR curves of the best model (test best.pt)

Step 1: Add the following class to torch_utils.py in the utils folder (the `logger` it uses is the module-level logger already defined at the top of torch_utils.py):

    class EarlyStopping:
        # YOLOv5 simple early stopper
        def __init__(self, patience=30):
            self.best_fitness = 0.0  # i.e. mAP
            self.best_epoch = 0
            self.patience = patience or float('inf')  # epochs to wait after fitness stops improving before stopping
            self.possible_stop = False  # possible stop may occur next epoch

        def __call__(self, epoch, fitness):
            if fitness >= self.best_fitness:  # >= 0 to allow for early zero-fitness stage of training
                self.best_epoch = epoch
                self.best_fitness = fitness
            delta = epoch - self.best_epoch  # epochs without improvement
            self.possible_stop = delta >= (self.patience - 1)  # possible stop may occur next epoch
            stop = delta >= self.patience  # stop training if patience exceeded
            if stop:
                logger.info(f'Stopping training early as no improvement observed in last {self.patience} epochs. '
                            f'Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n'
                            f'To update EarlyStopping(patience={self.patience}) pass a new patience value, '
                            f'i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.')
            return stop
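To see how the stopper behaves before wiring it into train.py, here is a minimal standalone simulation. The class is copied from Step 1 (with a plain stdlib logger); the fitness curve is fake: it improves for 10 epochs and then plateaus below its best, so with patience=5 the stop should fire at epoch 15.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class EarlyStopping:
    # Same YOLOv5-style stopper as in Step 1
    def __init__(self, patience=30):
        self.best_fitness = 0.0
        self.best_epoch = 0
        self.patience = patience or float('inf')
        self.possible_stop = False

    def __call__(self, epoch, fitness):
        if fitness >= self.best_fitness:
            self.best_epoch = epoch
            self.best_fitness = fitness
        delta = epoch - self.best_epoch          # epochs without improvement
        self.possible_stop = delta >= (self.patience - 1)
        stop = delta >= self.patience
        if stop:
            logger.info(f'Stopping early: no improvement in last {self.patience} epochs '
                        f'(best epoch {self.best_epoch}).')
        return stop

# Simulated fitness: rises to 1.0 at epoch 10, then plateaus at 0.9
stopper = EarlyStopping(patience=5)
stopped_at = None
for epoch in range(100):
    fitness = epoch / 10.0 if epoch <= 10 else 0.9
    if stopper(epoch=epoch, fitness=fitness):
        stopped_at = epoch
        break

print(stopped_at)  # 15: five epochs after the best epoch (10)
```

Note the `>=` comparison in `__call__`: if fitness merely plateaus at its best value, `best_epoch` keeps advancing and training will not stop, which is the intended tolerance for flat stretches.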

Step 2: Add the import near the top of train.py:

from utils.torch_utils import EarlyStopping

Step 3: In train.py, around line 300 in the "Start training" section, add the following two lines below the scaler variable:

    stopper: EarlyStopping
    stopper, stop = EarlyStopping(patience=opt.patience), False

Step 4: In train.py, around line 450, insert the following above # Save model (and delete the original # Update best mAP block):

    # Update best mAP
    fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5:.95]
    stop = stopper(epoch=epoch, fitness=fi)  # early stop check
    if fi > best_fitness:
        best_fitness = fi
    wandb_logger.end_epoch(best_result=best_fitness == fi)
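For reference, here is a sketch of what the `fitness` helper used above computes. The weights `[0.0, 0.0, 0.1, 0.9]` match the upstream YOLOv5/YOLOv7 repos (check utils/metrics.py or utils/general.py in your copy): precision and recall are ignored, and mAP@.5:.95 dominates the score the stopper tracks.

```python
import numpy as np

def fitness(x):
    # Model fitness as a weighted combination of metrics
    # columns of x: [P, R, mAP@.5, mAP@.5:.95]
    w = [0.0, 0.0, 0.1, 0.9]  # weights as in the upstream YOLO repos
    return (x[:, :4] * w).sum(1)

results = (0.8, 0.7, 0.65, 0.45)  # example (P, R, mAP@.5, mAP@.5:.95)
fi = fitness(np.array(results).reshape(1, -1))
print(fi[0])  # 0.1*0.65 + 0.9*0.45 = 0.47
```

This is why a run whose precision/recall still wobbles can stop "early": only the mAP terms feed the patience counter.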

Step 5: In train.py, around line 490, insert the following above # end epoch:

    # EarlyStopping
    if rank != -1:  # if DDP training
        broadcast_list = [stop if rank == 0 else None]
        dist.broadcast_object_list(broadcast_list, 0)  # broadcast 'stop' to all ranks
        if rank != 0:
            stop = broadcast_list[0]
    if stop:
        break  # must break all DDP ranks
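The broadcast matters because only rank 0 runs the mAP evaluation, so only rank 0 knows the stop decision; every rank must then break its epoch loop together or DDP deadlocks. As a minimal sanity check of the API involved (a single-process process group on the gloo backend, assuming PyTorch is installed), `dist.broadcast_object_list` works like this:

```python
import torch.distributed as dist

# One-rank "group" just to exercise the API; in real DDP training,
# rank 0 fills the list and the other ranks receive its value.
dist.init_process_group('gloo', init_method='tcp://127.0.0.1:29500',
                        rank=0, world_size=1)

rank = dist.get_rank()
broadcast_list = [True if rank == 0 else None]  # rank 0's stop decision
dist.broadcast_object_list(broadcast_list, 0)   # now identical on all ranks
stop = broadcast_list[0]
print(stop)  # True

dist.destroy_process_group()
```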

Step 6: In train.py, around line 580 (the argparse section), add one line:

    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')

Step 7: In train.py, around line 425, replace the original # Calculate mAP block with the following (note plots=False: the per-epoch tests skip plotting, so the PR curves come only from the final best.pt test in Step 8):

    if not opt.notest or final_epoch:  # Calculate mAP
        wandb_logger.current_epoch = epoch + 1
        results, maps, times = test.test(data_dict,
                                         batch_size=batch_size * 2,
                                         imgsz=imgsz_test,
                                         model=ema.ema,
                                         single_cls=opt.single_cls,
                                         dataloader=testloader,
                                         save_dir=save_dir,
                                         verbose=nc < 50,
                                         plots=False,
                                         wandb_logger=wandb_logger,
                                         compute_loss=compute_loss,
                                         is_coco=is_coco)

Step 8: In train.py, around line 500, replace the original # Test best.pt block with:

    # Test best.pt
    logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
    results, _, _ = test.test(opt.data,
                              batch_size=batch_size * 2,
                              imgsz=imgsz_test,
                              conf_thres=0.001,
                              iou_thres=0.7,
                              model=attempt_load(best, device).half(),
                              single_cls=opt.single_cls,
                              dataloader=testloader,
                              verbose=nc < 50,
                              save_dir=save_dir,
                              save_json=True,
                              plots=plots,
                              wandb_logger=wandb_logger,
                              compute_loss=compute_loss,
                              is_coco=is_coco)

Step 9: In test.py (note: a different file now!), around line 293, change the default value of --iou-thres to 0.7:

    parser.add_argument('--iou-thres', type=float, default=0.7, help='IOU threshold for NMS')

Also, around line 26 (in the test() function signature), change the iou_thres default to 0.7:

         iou_thres=0.7,  # for NMS
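For intuition on what this threshold means: during NMS, a lower-scoring box is suppressed only when its IoU with a kept box exceeds iou_thres, so raising it from the usual 0.6 to 0.7 discards fewer overlapping detections (the setting commonly used for COCO-style evaluation). A quick standalone sketch of the IoU computation itself:

```python
def box_iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))
print(iou)  # 50/150 ≈ 0.333 → well below 0.7, so both boxes survive NMS
```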

~~~~~ That's it! Just run training as usual and early stopping will take effect. ☆( ̄▽ ̄)/ ~~~~~
