
Supporting Quality Inspection and Maintenance: Building a Cement Infrastructure Crack Segmentation and Recognition System on the Ultra-Lightweight Segmentation Model EGE-UNet

In an earlier post:

"EGE-UNet, an ultra-lightweight UNet variant with only 50 KB of parameters (parameter count and computation reduced by 494x and 160x): a medical image segmentation practice"

we took a first look at how this latest ultra-lightweight UNet variant performs on medical images, and mentioned that we would later consider applying the model to real production scenarios to support actual business needs.

The main goal of this post is to build, on top of the EGE-UNet model, the kind of crack segmentation and recognition system commonly used in quality-inspection and maintenance work on cement infrastructure. Let's first look at the actual results:

To plug our own dataset into the original project structure, only minor changes are needed. First, let's look at the dataset:

The annotated data looks like this:
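Since the network is trained as single-class binary segmentation (num_classes = 1), it is worth confirming before training that every crack image has a matching mask and that each mask is a single-channel binary image. Below is a minimal sanity-check sketch; the crack_images/ and crack_masks/ folder names are assumptions for wherever your raw data lives, and the mask-matching rule (same file name) is also an assumption.

import numpy as np
from pathlib import Path
from PIL import Image

# Hypothetical folders for the raw crack data; adjust to your own layout.
IMG_DIR = Path('./crack_images')
MSK_DIR = Path('./crack_masks')

def check_pairs():
    """Verify image/mask pairing and that masks only contain background/crack values."""
    for img_path in sorted(IMG_DIR.glob('*.*')):
        mask_path = MSK_DIR / img_path.name  # assumes the mask shares the image file name
        if not mask_path.exists():
            print(f'missing mask for {img_path.name}')
            continue
        mask = np.array(Image.open(mask_path).convert('L'))
        values = np.unique(mask)
        if not set(values.tolist()).issubset({0, 255}):
            print(f'{mask_path.name}: non-binary mask values {values}')

if __name__ == '__main__':
    check_pairs()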

Next, modify the config_setting.py module under the configs directory, as shown below:

from torchvision import transforms
from utils import *
from datetime import datetime


class setting_config:
    """
    the config of training setting.
    """
    network = 'egeunet'
    model_config = {
        'num_classes': 1,
        'input_channels': 3,
        'c_list': [8, 16, 24, 32, 48, 64],
        'bridge': True,
        'gt_ds': True,
    }

    datasets = 'isic18'
    if datasets == 'isic18':
        data_path = './data/isic2018/'
    elif datasets == 'isic17':
        data_path = './data/isic2017/'
    else:
        raise Exception('datasets is not right!')

    criterion = GT_BceDiceLoss(wb=1, wd=1)

    pretrained_path = './pre_trained/'
    num_classes = 1
    input_size_h = 256
    input_size_w = 256
    input_channels = 3
    distributed = False
    local_rank = -1
    num_workers = 0
    seed = 42
    world_size = None
    rank = None
    amp = False
    gpu_id = '0'
    batch_size = 192
    epochs = 100
    work_dir = 'results/' + network + '_' + datasets + '_' + datetime.now().strftime('%A_%d_%B_%Y_%Hh_%Mm_%Ss') + '/'
    print_interval = 20
    val_interval = 30
    save_interval = 10
    threshold = 0.5

    train_transformer = transforms.Compose([
        myNormalize(datasets, train=True),
        myToTensor(),
        myRandomHorizontalFlip(p=0.5),
        myRandomVerticalFlip(p=0.5),
        myRandomRotation(p=0.5, degree=[0, 360]),
        myResize(input_size_h, input_size_w)
    ])
    test_transformer = transforms.Compose([
        myNormalize(datasets, train=False),
        myToTensor(),
        myResize(input_size_h, input_size_w)
    ])

    opt = 'AdamW'
    assert opt in ['Adadelta', 'Adagrad', 'Adam', 'AdamW', 'Adamax', 'ASGD', 'RMSprop', 'Rprop', 'SGD'], 'Unsupported optimizer!'
    if opt == 'Adadelta':
        lr = 0.01  # default: 1.0 - coefficient that scales delta before it is applied to the parameters
        rho = 0.9  # default: 0.9 - coefficient used for computing a running average of squared gradients
        eps = 1e-6  # default: 1e-6 - term added to the denominator to improve numerical stability
        weight_decay = 0.05  # default: 0 - weight decay (L2 penalty)
    elif opt == 'Adagrad':
        lr = 0.01  # default: 0.01 - learning rate
        lr_decay = 0  # default: 0 - learning rate decay
        eps = 1e-10  # default: 1e-10 - term added to the denominator to improve numerical stability
        weight_decay = 0.05  # default: 0 - weight decay (L2 penalty)
    elif opt == 'Adam':
        lr = 0.001  # default: 1e-3 - learning rate
        betas = (0.9, 0.999)  # default: (0.9, 0.999) - coefficients used for computing running averages of gradient and its square
        eps = 1e-8  # default: 1e-8 - term added to the denominator to improve numerical stability
        weight_decay = 0.0001  # default: 0 - weight decay (L2 penalty)
        amsgrad = False  # default: False - whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond
    elif opt == 'AdamW':
        lr = 0.001  # default: 1e-3 - learning rate
        betas = (0.9, 0.999)  # default: (0.9, 0.999) - coefficients used for computing running averages of gradient and its square
        eps = 1e-8  # default: 1e-8 - term added to the denominator to improve numerical stability
        weight_decay = 1e-2  # default: 1e-2 - weight decay coefficient
        amsgrad = False  # default: False - whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond
    elif opt == 'Adamax':
        lr = 2e-3  # default: 2e-3 - learning rate
        betas = (0.9, 0.999)  # default: (0.9, 0.999) - coefficients used for computing running averages of gradient and its square
        eps = 1e-8  # default: 1e-8 - term added to the denominator to improve numerical stability
        weight_decay = 0  # default: 0 - weight decay (L2 penalty)
    elif opt == 'ASGD':
        lr = 0.01  # default: 1e-2 - learning rate
        lambd = 1e-4  # default: 1e-4 - decay term
        alpha = 0.75  # default: 0.75 - power for eta update
        t0 = 1e6  # default: 1e6 - point at which to start averaging
        weight_decay = 0  # default: 0 - weight decay
    elif opt == 'RMSprop':
        lr = 1e-2  # default: 1e-2 - learning rate
        momentum = 0  # default: 0 - momentum factor
        alpha = 0.99  # default: 0.99 - smoothing constant
        eps = 1e-8  # default: 1e-8 - term added to the denominator to improve numerical stability
        centered = False  # default: False - if True, compute the centered RMSProp, the gradient is normalized by an estimation of its variance
        weight_decay = 0  # default: 0 - weight decay (L2 penalty)
    elif opt == 'Rprop':
        lr = 1e-2  # default: 1e-2 - learning rate
        etas = (0.5, 1.2)  # default: (0.5, 1.2) - pair of (etaminus, etaplus), multiplicative increase and decrease factors
        step_sizes = (1e-6, 50)  # default: (1e-6, 50) - a pair of minimal and maximal allowed step sizes
    elif opt == 'SGD':
        lr = 0.01  # - learning rate
        momentum = 0.9  # default: 0 - momentum factor
        weight_decay = 0.05  # default: 0 - weight decay (L2 penalty)
        dampening = 0  # default: 0 - dampening for momentum
        nesterov = False  # default: False - enables Nesterov momentum

    sch = 'CosineAnnealingLR'
    if sch == 'StepLR':
        step_size = epochs // 5  # - Period of learning rate decay.
        gamma = 0.5  # - Multiplicative factor of learning rate decay. Default: 0.1
        last_epoch = -1  # - The index of last epoch. Default: -1.
    elif sch == 'MultiStepLR':
        milestones = [60, 120, 150]  # - List of epoch indices. Must be increasing.
        gamma = 0.1  # - Multiplicative factor of learning rate decay. Default: 0.1.
        last_epoch = -1  # - The index of last epoch. Default: -1.
    elif sch == 'ExponentialLR':
        gamma = 0.99  # - Multiplicative factor of learning rate decay.
        last_epoch = -1  # - The index of last epoch. Default: -1.
    elif sch == 'CosineAnnealingLR':
        T_max = 50  # - Maximum number of iterations. Cosine function period.
        eta_min = 0.00001  # - Minimum learning rate. Default: 0.
        last_epoch = -1  # - The index of last epoch. Default: -1.
    elif sch == 'ReduceLROnPlateau':
        mode = 'min'  # - One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: 'min'.
        factor = 0.1  # - Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.
        patience = 10  # - Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn't improved then. Default: 10.
        threshold = 0.0001  # - Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
        threshold_mode = 'rel'  # - One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in 'max' mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: 'rel'.
        cooldown = 0  # - Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
        min_lr = 0  # - A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
        eps = 1e-08  # - Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
    elif sch == 'CosineAnnealingWarmRestarts':
        T_0 = 50  # - Number of iterations for the first restart.
        T_mult = 2  # - A factor increases T_{i} after a restart. Default: 1.
        eta_min = 1e-6  # - Minimum learning rate. Default: 0.
        last_epoch = -1  # - The index of last epoch. Default: -1.
    elif sch == 'WP_MultiStepLR':
        warm_up_epochs = 10
        gamma = 0.1
        milestones = [125, 225]
    elif sch == 'WP_CosineLR':
        warm_up_epochs = 20

As you can see, I did not change the datasets field here; I simply replaced the contents of the isic2018 dataset with my own data. You can do the same, or alternatively add new logic to the code, for example:

datasets = 'self'
if datasets == 'isic18':
    data_path = './data/isic2018/'
elif datasets == 'isic17':
    data_path = './data/isic2017/'
elif datasets == "self":
    data_path = "/data/self/"
else:
    raise Exception('datasets is not right!')

I did not want the extra hassle, so I went with the direct data-replacement approach, as sketched below.
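For reference, here is a minimal sketch of that replacement step. It assumes the repo's ISIC2018 loader reads paired files from train/images, train/masks, val/images and val/masks under ./data/isic2018/ (check datasets/dataset.py in your copy if the layout differs); the crack_images/ and crack_masks/ source folders are hypothetical names for wherever your own data sits.

import random
import shutil
from pathlib import Path

# Hypothetical source folders holding the crack dataset; adjust to your own paths.
SRC_IMAGES = Path('./crack_images')
SRC_MASKS = Path('./crack_masks')
# Assumed target layout, mirroring the ISIC2018 folders the repo's loader expects.
DST = Path('./data/isic2018')

def replace_isic_data(val_ratio=0.2, seed=42):
    """Copy crack images/masks into the isic2018 folder structure with a simple split."""
    images = sorted(SRC_IMAGES.glob('*.*'))
    random.Random(seed).shuffle(images)
    n_val = int(len(images) * val_ratio)
    splits = {'val': images[:n_val], 'train': images[n_val:]}
    for split, files in splits.items():
        img_dir = DST / split / 'images'
        msk_dir = DST / split / 'masks'
        img_dir.mkdir(parents=True, exist_ok=True)
        msk_dir.mkdir(parents=True, exist_ok=True)
        for img_path in files:
            mask_path = SRC_MASKS / img_path.name  # assumes the mask shares the image file name
            if not mask_path.exists():
                print(f'skip {img_path.name}: no matching mask')
                continue
            shutil.copy(img_path, img_dir / img_path.name)
            shutil.copy(mask_path, msk_dir / mask_path.name)

if __name__ == '__main__':
    replace_isic_data()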

The main configuration parameters are as follows:

pretrained_path = './pre_trained/'
num_classes = 1
input_size_h = 256
input_size_w = 256
input_channels = 3
distributed = False
local_rank = -1
num_workers = 0
seed = 42
world_size = None
rank = None
amp = False
gpu_id = '0'
batch_size = 192
epochs = 100
work_dir = 'results/' + network + '_' + datasets + '_' + datetime.now().strftime('%A_%d_%B_%Y_%Hh_%Mm_%Ss') + '/'
print_interval = 20
val_interval = 30
save_interval = 10
threshold = 0.5

The official default is 300 training epochs; to save time I reduced it to 100 epochs here.

The default batch_size is 8. Since I have plenty of GPU memory, I raised it to 192; adjust it to fit your own hardware.

The default save_interval is 100. Since my whole run is only 100 epochs, I lowered it to 10 so checkpoints are saved more frequently during training.

After that, run train.py from the terminal; the output looks like this:

Next, let's look at the training artifacts saved under outputs, as shown below:

Here is the actual visualized inference result:

Side-by-side segmentation results:

Standalone segmentation results:

Inference takes somewhat longer than on the medical images in the previous post, which comes down to the images themselves: the medical dataset has lower resolution. Even so, the per-image inference time still compares very favorably with earlier models. A simple single-image inference and timing sketch follows.
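If you want to reproduce the single-image inference and timing yourself, here is a minimal sketch. It assumes the EGEUNet class from models/egeunet.py in the official repo, a checkpoint saved by train.py under the run's work_dir, and that the forward pass with gt_ds=True returns the deep-supervision maps plus a final sigmoid probability map; the checkpoint path, file names, and output unpacking are assumptions you may need to adapt to your own run.

import time
import numpy as np
import torch
from PIL import Image
from models.egeunet import EGEUNet  # model class as in the official EGE-UNet repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Build the model with the same model_config used for training.
model = EGEUNet(num_classes=1, input_channels=3, c_list=[8, 16, 24, 32, 48, 64],
                bridge=True, gt_ds=True).to(device)
# Hypothetical checkpoint path; point this at the weights saved by your own run.
ckpt = torch.load('results/egeunet_isic18_xxx/checkpoints/best.pth', map_location=device)
model.load_state_dict(ckpt)  # unwrap first if your checkpoint stores extra keys
model.eval()

def segment(image_path, threshold=0.5, size=(256, 256)):
    """Run single-image inference; return a binary crack mask and the elapsed time."""
    img = Image.open(image_path).convert('RGB').resize(size)
    # Simple 0-1 scaling here; for best results reuse the myNormalize statistics from training.
    x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    x = x.to(device)
    start = time.perf_counter()
    with torch.no_grad():
        out = model(x)
        # With gt_ds=True the forward pass is assumed to return (deep-supervision maps, final map).
        prob = out[1] if isinstance(out, (tuple, list)) else out
    elapsed = time.perf_counter() - start
    mask = (prob.squeeze().cpu().numpy() >= threshold).astype(np.uint8) * 255
    return mask, elapsed

mask, elapsed = segment('demo_crack.jpg')  # demo_crack.jpg is a placeholder file name
print(f'inference took {elapsed * 1000:.1f} ms')
Image.fromarray(mask).save('demo_crack_mask.png')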

If your own business scenarios involve segmentation needs, feel free to try developing something similar yourself.
