
Catenary (Overhead Contact System) Insulator Defect Detection Project


Contents

1. Role of Catenary Insulators
2. Causes and Hazards of Catenary Insulator Damage
3. Catenary Insulator Defect Image Dataset
4. Defect Detection Models
4.1 EfficientNet
4.2 YOLOv3
4.3 EfficientNet-YOLOv3
5. Model Training and Testing
5.1 Model Training
5.2 Detection Performance
References

1. Role of Catenary Insulators

Insulators are among the most widely used components of the catenary (overhead contact) system: they suspend and support the contact suspension while keeping the live parts electrically insulated from ground. China's power grid is in a period of rapid, healthy and sustained development, and insulators play an important role in the safe and stable operation of transmission lines, so demand for them is large.

The disc-shaped pieces hung at the tower end of a high-voltage line are insulators; their purpose is to increase the creepage distance, and they are usually made of glass or ceramic. An insulator is an insulating component that plays an important role in overhead transmission lines: it supports the conductor and prevents current from leaking to ground. Insulators used to be mounted mostly on utility poles; they have since evolved into strings of disc-shaped insulators hung from the tower end of high-voltage lines, typically made of ceramic or glass. An insulator must keep its electrical and mechanical stresses stable as environmental and electrical load conditions change; otherwise it not only fails to do its job but also shortens the service life of the whole line.

2. Causes and Hazards of Catenary Insulator Damage

(1) Poor product quality. Moisture absorbed during the rainy season lowers the insulator's insulation performance, which can lead to flashover breakdown or bursting from thermal expansion; the insulator cracks and ultimately loses its insulating ability.

(2) Damage from improper construction. External forces can leave the insulator cracked, chipped or missing glaze, which then causes flashover and breakdown faults in wet weather.

3. Catenary Insulator Defect Image Dataset

Collecting image data is the first and most critical step in catenary insulator defect detection. This project collected 480 images, all at 416×416 resolution and captured during night-time inspection runs. The defects in the images were annotated with the labelimg tool, using two label classes: (1) normal insulator; (2) damaged insulator.
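Since labelimg saves Pascal VOC XML files by default, a small conversion script is often the next step. The sketch below turns those XML files into a flat training list and performs the 9:1 split mentioned in Section 5.1; the directory names and the two class strings are hypothetical placeholders, not the project's actual layout:

import glob
import os
import random
import xml.etree.ElementTree as ET

# Hypothetical paths and class names: adjust to the actual dataset layout.
IMG_DIR, XML_DIR = "dataset/images", "dataset/annotations"
CLASSES = ["normal_insulator", "broken_insulator"]

def voc_to_line(xml_path):
    """Convert one labelimg (Pascal VOC) XML file into a training line:
    image_path box1 box2 ...  with each box as x_min,y_min,x_max,y_max,class_id."""
    root = ET.parse(xml_path).getroot()
    img_path = os.path.join(IMG_DIR, root.find("filename").text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue
        b = obj.find("bndbox")
        coords = [int(float(b.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append(",".join(map(str, coords + [CLASSES.index(name)])))
    return img_path + " " + " ".join(boxes)

lines = [voc_to_line(p) for p in glob.glob(os.path.join(XML_DIR, "*.xml"))]
random.shuffle(lines)
split = int(0.9 * len(lines))                     # 9:1 train/test split (see Section 5.1)
for name, part in (("train.txt", lines[:split]), ("test.txt", lines[split:])):
    with open(name, "w") as f:
        f.write("\n".join(part))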

4. Defect Detection Models

This project detects defective catenary insulators with YOLOv3 and with an improved YOLOv3 whose backbone is the lightweight EfficientNet.

4.1 EfficientNet

EfficientNet proposes compound scaling of depth, width and resolution; in the paper the method is also applied to MobileNet v1/v2 and ResNet-50 to find the best trade-off between added FLOPs and accuracy gains. EfficientNetV2 improves training speed over V1 and, at the cost of some accuracy, greatly reduces the parameter count and speeds up inference (this depends on the hardware, so the speed-up is not guaranteed).
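For reference, compound scaling ties all three factors to a single compound coefficient φ (this is the formulation from the EfficientNet paper, not something specific to this project): depth d = α^φ, width w = β^φ, resolution r = γ^φ, subject to α·β²·γ² ≈ 2 with α, β, γ ≥ 1; the paper's grid search gives α = 1.2, β = 1.1, γ = 1.15 for the B0 baseline. The width_coefficient and depth_coefficient values passed to EfficientNetB0 through EfficientNetB7 at the bottom of the code below are the pre-computed width and depth factors for each variant.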

A TensorFlow 2 implementation of EfficientNet:

import math
from copy import deepcopy

import tensorflow as tf
from tensorflow.keras import backend, layers

#-------------------------------------------------#
#   Default block arguments for the EfficientNet-B0 baseline
#-------------------------------------------------#
DEFAULT_BLOCKS_ARGS = [
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 32, 'filters_out': 16,
     'expand_ratio': 1, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 2, 'filters_in': 16, 'filters_out': 24,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 2, 'filters_in': 24, 'filters_out': 40,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 3, 'filters_in': 40, 'filters_out': 80,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 3, 'filters_in': 80, 'filters_out': 112,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 4, 'filters_in': 112, 'filters_out': 192,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 192, 'filters_out': 320,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25}
]

#-------------------------------------------------#
#   Kernel initializer shared by all convolutions
#-------------------------------------------------#
CONV_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 2.0,
        'mode': 'fan_out',
        'distribution': 'normal'
    }
}

#-------------------------------------------------#
#   Asymmetric zero-padding for stride-2 convolutions
#-------------------------------------------------#
def correct_pad(inputs, kernel_size):
    img_dim = 1
    input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
    correct = (kernel_size[0] // 2, kernel_size[1] // 2)
    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))

#-------------------------------------------------#
#   Scale channel count by width_coefficient, rounded to a multiple of divisor
#-------------------------------------------------#
def round_filters(filters, divisor, width_coefficient):
    filters *= width_coefficient
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:
        new_filters += divisor
    return int(new_filters)

#-------------------------------------------------#
#   Scale block repeats by depth_coefficient
#-------------------------------------------------#
def round_repeats(repeats, depth_coefficient):
    return int(math.ceil(depth_coefficient * repeats))

#-------------------------------------------------#
#   efficient_block: MBConv block with squeeze-and-excitation
#-------------------------------------------------#
def block(inputs, activation_fn=tf.nn.swish, drop_rate=0., name='',
          filters_in=32, filters_out=16, kernel_size=3, strides=1,
          expand_ratio=1, se_ratio=0., id_skip=True):
    filters = filters_in * expand_ratio

    #-------------------------------------------------#
    #   1x1 expansion convolution
    #-------------------------------------------------#
    if expand_ratio != 1:
        x = layers.Conv2D(filters, 1,
                          padding='same',
                          use_bias=False,
                          kernel_initializer=CONV_KERNEL_INITIALIZER,
                          name=name + 'expand_conv')(inputs)
        x = layers.BatchNormalization(axis=3, name=name + 'expand_bn')(x)
        x = layers.Activation(activation_fn, name=name + 'expand_activation')(x)
    else:
        x = inputs

    #------------------------------------------------------#
    #   Depthwise convolution (downsamples when strides == 2)
    #------------------------------------------------------#
    if strides == 2:
        x = layers.ZeroPadding2D(padding=correct_pad(x, kernel_size),
                                 name=name + 'dwconv_pad')(x)
        conv_pad = 'valid'
    else:
        conv_pad = 'same'
    x = layers.DepthwiseConv2D(kernel_size,
                               strides=strides,
                               padding=conv_pad,
                               use_bias=False,
                               depthwise_initializer=CONV_KERNEL_INITIALIZER,
                               name=name + 'dwconv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'bn')(x)
    x = layers.Activation(activation_fn, name=name + 'activation')(x)

    #------------------------------------------------------#
    #   Squeeze-and-excitation attention
    #------------------------------------------------------#
    if 0 < se_ratio <= 1:
        filters_se = max(1, int(filters_in * se_ratio))
        se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
        se = layers.Reshape((1, 1, filters), name=name + 'se_reshape')(se)
        se = layers.Conv2D(filters_se, 1,
                           padding='same',
                           activation=activation_fn,
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_reduce')(se)
        se = layers.Conv2D(filters, 1,
                           padding='same',
                           activation='sigmoid',
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_expand')(se)
        x = layers.multiply([x, se], name=name + 'se_excite')

    #------------------------------------------------------#
    #   1x1 projection convolution
    #------------------------------------------------------#
    x = layers.Conv2D(filters_out, 1,
                      padding='same',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name=name + 'project_conv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'project_bn')(x)

    #------------------------------------------------------#
    #   Residual connection with stochastic depth (Dropout)
    #------------------------------------------------------#
    if (id_skip is True and strides == 1 and filters_in == filters_out):
        if drop_rate > 0:
            x = layers.Dropout(drop_rate,
                               noise_shape=(None, 1, 1, 1),
                               name=name + 'drop')(x)
        x = layers.add([x, inputs], name=name + 'add')
    return x

def EfficientNet(width_coefficient,
                 depth_coefficient,
                 drop_connect_rate=0.2,
                 depth_divisor=8,
                 activation_fn=tf.nn.swish,
                 blocks_args=DEFAULT_BLOCKS_ARGS,
                 inputs=None,
                 **kwargs):
    img_input = inputs

    #-------------------------------------------------#
    #   Stem: stride-2 3x3 convolution
    #-------------------------------------------------#
    x = img_input
    x = layers.ZeroPadding2D(padding=correct_pad(x, 3),
                             name='stem_conv_pad')(x)
    x = layers.Conv2D(round_filters(32, depth_divisor, width_coefficient), 3,
                      strides=2,
                      padding='valid',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name='stem_conv')(x)
    x = layers.BatchNormalization(axis=3, name='stem_bn')(x)
    x = layers.Activation(activation_fn, name='stem_activation')(x)

    blocks_args = deepcopy(blocks_args)

    b = 0
    blocks = float(sum(args['repeats'] for args in blocks_args))
    feats = []
    filters_outs = []
    #------------------------------------------------------------------------------#
    #   Stack the MBConv stages; record the output of every stage
    #------------------------------------------------------------------------------#
    for (i, args) in enumerate(blocks_args):
        assert args['repeats'] > 0
        args['filters_in'] = round_filters(args['filters_in'], depth_divisor, width_coefficient)
        args['filters_out'] = round_filters(args['filters_out'], depth_divisor, width_coefficient)

        for j in range(round_repeats(args.pop('repeats'), depth_coefficient)):
            if j > 0:
                args['strides'] = 1
                args['filters_in'] = args['filters_out']
            x = block(x, activation_fn, drop_connect_rate * b / blocks,
                      name='block{}{}_'.format(i + 1, chr(j + 97)), **args)
            b += 1
        feats.append(x)
        if i == 2 or i == 4 or i == 6:
            filters_outs.append(args['filters_out'])
    return feats, filters_outs

def EfficientNetB0(inputs=None, **kwargs):
    return EfficientNet(1.0, 1.0, inputs=inputs, **kwargs)

def EfficientNetB1(inputs=None, **kwargs):
    return EfficientNet(1.0, 1.1, inputs=inputs, **kwargs)

def EfficientNetB2(inputs=None, **kwargs):
    return EfficientNet(1.1, 1.2, inputs=inputs, **kwargs)

def EfficientNetB3(inputs=None, **kwargs):
    return EfficientNet(1.2, 1.4, inputs=inputs, **kwargs)

def EfficientNetB4(inputs=None, **kwargs):
    return EfficientNet(1.4, 1.8, inputs=inputs, **kwargs)

def EfficientNetB5(inputs=None, **kwargs):
    return EfficientNet(1.6, 2.2, inputs=inputs, **kwargs)

def EfficientNetB6(inputs=None, **kwargs):
    return EfficientNet(1.8, 2.6, inputs=inputs, **kwargs)

def EfficientNetB7(inputs=None, **kwargs):
    return EfficientNet(2.0, 3.1, inputs=inputs, **kwargs)

if __name__ == '__main__':
    # Quick check: build B0 on a 416x416x3 input and print its stage outputs
    print(EfficientNetB0(inputs=layers.Input(shape=(416, 416, 3))))

4.2 YOLOv3

Compared with YOLOv1 and YOLOv2, YOLOv3 introduces several major improvements:

1. Residual blocks. A residual stage first applies a 3×3, stride-2 convolution and keeps the resulting feature map, then applies a 1×1 convolution followed by a 3×3 convolution, and adds that result back to the saved feature map. Residual networks are easy to optimize and can gain accuracy from considerably increased depth; the skip connections inside the residual blocks alleviate the vanishing-gradient problem that comes with deeper networks.

2. Detection on multiple feature layers. Three feature maps are used, with shapes (13, 13, 75), (26, 26, 75) and (52, 52, 75) for the VOC dataset: with 20 classes and 3 anchor boxes per scale, the last dimension is 3 × 25 = 75. On the COCO dataset with 80 classes, the last dimension becomes 3 × 85 = 255, so the three feature maps are (13, 13, 255), (26, 26, 255) and (52, 52, 255). (See the quick check after this list.)

3. Upsampling and feature fusion. UpSampling2D enlarges the deep feature maps so they can be merged with shallower ones, which lets the network extract richer features at every scale.
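As a quick check of those channel counts, the last dimension of each prediction head is just anchors_per_scale × (num_classes + 5); the two-class case corresponds to this project's insulator dataset:

def yolo_head_channels(num_classes, anchors_per_scale=3):
    # each anchor predicts 4 box offsets + 1 objectness score + num_classes class scores
    return anchors_per_scale * (num_classes + 5)

print(yolo_head_channels(20))   # VOC:  3 * (20 + 5) = 75
print(yolo_head_channels(80))   # COCO: 3 * (80 + 5) = 255
print(yolo_head_channels(2))    # this project (normal / damaged insulator): 3 * (2 + 5) = 21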

The Darknet53 backbone in TensorFlow 2:

from functools import wraps

from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import (Add, BatchNormalization, Conv2D, LeakyReLU,
                                     ZeroPadding2D)
from tensorflow.keras.regularizers import l2

from utils.utils import compose

#------------------------------------------------------#
#   Conv2D wrapper: DarkNet-style initializer, l2 weight decay,
#   and 'valid' padding for stride-2 convolutions
#------------------------------------------------------#
@wraps(Conv2D)
def DarknetConv2D(*args, **kwargs):
    darknet_conv_kwargs = {'kernel_initializer': RandomNormal(stddev=0.02),
                           'kernel_regularizer': l2(kwargs.get('weight_decay', 5e-4))}
    darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides') == (2, 2) else 'same'
    try:
        del kwargs['weight_decay']
    except KeyError:
        pass
    darknet_conv_kwargs.update(kwargs)
    return Conv2D(*args, **darknet_conv_kwargs)

#---------------------------------------------------#
#   Convolution + BatchNormalization + LeakyReLU
#---------------------------------------------------#
def DarknetConv2D_BN_Leaky(*args, **kwargs):
    no_bias_kwargs = {'use_bias': False}
    no_bias_kwargs.update(kwargs)
    return compose(
        DarknetConv2D(*args, **no_bias_kwargs),
        BatchNormalization(),
        LeakyReLU(alpha=0.1))

#---------------------------------------------------------------------#
#   Residual stage: stride-2 downsampling followed by num_blocks
#   residual units (1x1 conv -> 3x3 conv -> shortcut add)
#---------------------------------------------------------------------#
def resblock_body(x, num_filters, num_blocks, weight_decay=5e-4):
    x = ZeroPadding2D(((1, 0), (1, 0)))(x)
    x = DarknetConv2D_BN_Leaky(num_filters, (3, 3), strides=(2, 2), weight_decay=weight_decay)(x)
    for i in range(num_blocks):
        y = DarknetConv2D_BN_Leaky(num_filters // 2, (1, 1), weight_decay=weight_decay)(x)
        y = DarknetConv2D_BN_Leaky(num_filters, (3, 3), weight_decay=weight_decay)(y)
        x = Add()([x, y])
    return x

#---------------------------------------------------#
#   Darknet53 backbone: returns three feature maps
#---------------------------------------------------#
def darknet_body(x, weight_decay=5e-4):
    # 416,416,3 -> 416,416,32
    x = DarknetConv2D_BN_Leaky(32, (3, 3), weight_decay=weight_decay)(x)
    # 416,416,32 -> 208,208,64
    x = resblock_body(x, 64, 1)
    # 208,208,64 -> 104,104,128
    x = resblock_body(x, 128, 2)
    # 104,104,128 -> 52,52,256
    x = resblock_body(x, 256, 8)
    feat1 = x
    # 52,52,256 -> 26,26,512
    x = resblock_body(x, 512, 8)
    feat2 = x
    # 26,26,512 -> 13,13,1024
    x = resblock_body(x, 1024, 4)
    feat3 = x
    return feat1, feat2, feat3

The FPN (feature pyramid) head in TensorFlow 2:

from tensorflow.keras.layers import Concatenate, Input, UpSampling2D
from tensorflow.keras.models import Model

#---------------------------------------------------#
#   yolo_body: Darknet53 backbone plus an FPN head with three
#   prediction scales. make_five_conv and make_yolo_head are the
#   five-convolution block and detection head defined alongside
#   darknet_body in the reference implementation.
#---------------------------------------------------#
def yolo_body(input_shape, anchors_mask, num_classes, weight_decay=5e-4):
    inputs = Input(input_shape)
    # Backbone feature maps: 52,52,256 / 26,26,512 / 13,13,1024
    C3, C4, C5 = darknet_body(inputs, weight_decay)

    # 13,13,1024 -> 13,13,512 -> 13,13,1024 -> 13,13,512 -> 13,13,1024 -> 13,13,512
    x = make_five_conv(C5, 512, weight_decay)
    P5 = make_yolo_head(x, 512, len(anchors_mask[0]) * (num_classes + 5), weight_decay)

    # 13,13,512 -> 13,13,256 -> 26,26,256
    x = compose(DarknetConv2D_BN_Leaky(256, (1, 1), weight_decay=weight_decay), UpSampling2D(2))(x)
    # 26,26,256 + 26,26,512 -> 26,26,768
    x = Concatenate()([x, C4])

    # 26,26,768 -> 26,26,256 -> 26,26,512 -> 26,26,256 -> 26,26,512 -> 26,26,256
    x = make_five_conv(x, 256, weight_decay)
    P4 = make_yolo_head(x, 256, len(anchors_mask[1]) * (num_classes + 5), weight_decay)

    # 26,26,256 -> 26,26,128 -> 52,52,128
    x = compose(DarknetConv2D_BN_Leaky(128, (1, 1), weight_decay=weight_decay), UpSampling2D(2))(x)
    # 52,52,128 + 52,52,256 -> 52,52,384
    x = Concatenate()([x, C3])

    # 52,52,384 -> 52,52,128 -> 52,52,256 -> 52,52,128 -> 52,52,256 -> 52,52,128
    x = make_five_conv(x, 128, weight_decay)
    P3 = make_yolo_head(x, 128, len(anchors_mask[2]) * (num_classes + 5), weight_decay)

    return Model(inputs, [P5, P4, P3])
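As a usage sketch, the two-class model for this project could be built as follows; the anchor mask is the usual YOLOv3 grouping of nine anchors into three scales and is an assumption here, not a value quoted from the project's configuration:

# Hypothetical two-class setup matching the dataset in Section 3.
anchors_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]   # deepest scale first, matching [P5, P4, P3]
model = yolo_body((416, 416, 3), anchors_mask, num_classes=2)
model.summary()
# The three outputs should have shapes (13, 13, 21), (26, 26, 21) and (52, 52, 21).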

4.3 EfficientNet-YOLOv3

In 2019, Google released EfficientNet. As its name suggests, the network is built for efficiency, and its design goals can be summarised as:

1. The network must train well and converge.
2. The parameter count should be small, which makes training easier and inference faster.
3. The architecture is redesigned so that deeper semantic features can be learned.

In this project, EfficientNet replaces Darknet53 as the YOLOv3 backbone; a sketch of how the two parts can be joined is given below.
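The section above explains why EfficientNet is attractive but not how it is wired into YOLOv3. A minimal sketch of one common way to do it, reusing the EfficientNet feature extractor from Section 4.1 in place of darknet_body and feeding its stride-8/16/32 stage outputs into the same FPN head, follows; the stage indices and shapes come from the Section 4.1 code, while the function itself is illustrative rather than the project's exact implementation:

from tensorflow.keras.layers import Input

def efficientnet_backbone(input_shape=(416, 416, 3), phi=0):
    """Sketch: use EfficientNet-B0..B7 (phi = 0..7) as the YOLOv3 backbone.
    feats[2], feats[4] and feats[6] are the stride-8/16/32 stage outputs,
    i.e. 52x52, 26x26 and 13x13 for a 416x416 input, matching the spatial
    sizes of Darknet53's feat1, feat2, feat3."""
    backbones = [EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3,
                 EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7]
    inputs = Input(input_shape)
    feats, filters_outs = backbones[phi](inputs=inputs)
    C3, C4, C5 = feats[2], feats[4], feats[6]   # 52x52x40, 26x26x112, 13x13x320 for B0
    return inputs, (C3, C4, C5), filters_outs

The FPN head from Section 4.2 can then be built on C3, C4 and C5 exactly as in yolo_body, with filters_outs used to set the head widths instead of the fixed 512/256/128.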

5. Model Training and Testing

5.1 Model Training

The catenary insulator image dataset was split 9:1 at random into a training set and a test set. The training environment was TensorFlow 2.2 on an RTX 3080 GPU. Training used stochastic gradient descent with momentum (SGDM), with momentum = 0.937 and weight decay = 0.0005. The loss curves during training are shown below:

[Figures: training loss curves for YOLOv3 and EfficientNet-YOLOv3]
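A minimal sketch of the Section 5.1 optimizer in tf.keras follows; the momentum matches the value above, the learning rate is a placeholder the article does not specify, and the 0.0005 weight decay is already applied through the l2 kernel_regularizer inside the convolution layers of Section 4.2:

import tensorflow as tf

# SGD with momentum (SGDM); momentum = 0.937 as stated in Section 5.1.
# learning_rate is a placeholder, not a value taken from the article.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.937)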

5.2 Detection Performance

In testing, YOLOv3 achieved an mAP of 90.42% at 28.97 FPS on catenary insulator detection, while EfficientNet-YOLOv3 achieved an mAP of 88.99% at 58.42 FPS.

[Figures: example detection results for YOLOv3 and EfficientNet-YOLOv3]
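FPS figures of this kind are usually obtained by timing repeated forward passes on a fixed input. The generic loop below is a sketch of such a measurement, not the script that produced the numbers above:

import time

import numpy as np

def measure_fps(model, input_shape=(416, 416, 3), warmup=10, runs=100):
    # Average frames per second of raw model inference (pre/post-processing excluded).
    dummy = np.random.random((1, *input_shape)).astype('float32')
    for _ in range(warmup):   # untimed warm-up iterations
        model.predict(dummy)
    start = time.time()
    for _ in range(runs):
        model.predict(dummy)
    return runs / (time.time() - start)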

Python code that uses the predicted box coordinates to crop the detected targets out of the original image:

import os

import numpy as np

# out_boxes and image come from the detection step: out_boxes holds the
# predicted boxes as (top, left, bottom, right); image is the original PIL image.
for i, c in list(enumerate(out_boxes)):
    top, left, bottom, right = out_boxes[i]
    top    = max(0, np.floor(top).astype('int32'))
    left   = max(0, np.floor(left).astype('int32'))
    bottom = min(image.size[1], np.floor(bottom).astype('int32'))
    right  = min(image.size[0], np.floor(right).astype('int32'))

    dir_save_path = "img_crop"
    if not os.path.exists(dir_save_path):
        os.makedirs(dir_save_path)
    crop_image = image.crop([left, top, right, bottom])
    crop_image.save(os.path.join(dir_save_path, "crop_" + str(i) + ".png"), quality=95, subsampling=0)
    print("save crop_" + str(i) + ".png to " + dir_save_path)

References:

[1] What is the role of insulators, and how will they develop in the future? - 知乎 (Zhihu)
[2] Introduction to YOLOv3 (with thanks to Bubbliiiing)
