
YOLOv9 Effective mAP Gains | Adding SE, CBAM, ECA, SimAM, and Dozens of Other Attention Mechanisms (Part 1)



Column introduction: YOLOv9 Improvement Series | covering the latest deep-learning innovations to help you gain mAP points efficiently!


1. Introduction

        Using the SE attention mechanism as an example, this article demonstrates how to add an attention mechanism to YOLOv9.


 《Squeeze-and-Excitation Networks》

        SENet introduces an attention module based on "squeeze-and-excitation" (SE) to improve the performance of convolutional neural networks (CNNs). SE blocks adaptively recalibrate channel-wise feature responses by modeling the interdependencies between channels, strengthening the CNN's representational power. The blocks can be stacked to form the SENet architecture, which generalizes very effectively across multiple datasets.
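In formula form (consistent with the SE code at the end of this article), the squeeze step is a per-channel global average pool and the excitation step is a two-layer bottleneck gate:

    z_c = \frac{1}{HW}\sum_{i,j} x_c(i,j), \qquad s = \sigma\big(W_2\,\delta(W_1 z)\big), \qquad \tilde{x}_c = s_c \cdot x_c

where \delta is ReLU and \sigma is the sigmoid.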


《CBAM: Convolutional Block Attention Module》

        The CBAM module attends to both the channel and spatial dimensions of a CNN and adaptively refines the input feature map. It is lightweight and general, can be seamlessly integrated into any CNN architecture, and can be trained end to end. Experiments show that CBAM noticeably improves the classification and detection performance of a variety of models.
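In formula form (matching the CBAM code at the end of this article), the channel and spatial sub-modules are applied sequentially:

    F' = M_c(F)\otimes F, \qquad F'' = M_s(F')\otimes F'
    M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)
    M_s(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F);\ \mathrm{MaxPool}(F)])\big)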


《ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks》

        ECA is a channel-attention module that improves the performance of deep convolutional neural networks while adding almost no model complexity. Building on existing channel-attention modules, the authors propose a local cross-channel interaction strategy without dimensionality reduction, together with an adaptive choice of convolution kernel size. The ECA module is more efficient at comparable accuracy, and experiments show advantages across multiple tasks.
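The paper's adaptive kernel size is derived from the channel count C (the standalone ECAAttention code at the end of this article simply takes kernel_size as an argument instead):

    k = \psi(C) = \left|\frac{\log_2 C}{\gamma} + \frac{b}{\gamma}\right|_{\mathrm{odd}}, \qquad \gamma = 2,\ b = 1

The attention itself is a 1D convolution of size k over the globally average-pooled channel descriptor, followed by a sigmoid.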


《SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks》

        SimAM is a conceptually simple yet very effective attention module. Unlike existing channel/spatial attention modules, it derives 3D attention weights for the feature map without any additional parameters. Specifically, the SimAM authors draw on well-established neuroscience theory and optimize an energy function to estimate the importance of each neuron. Another advantage is that most operations follow directly from the defined energy function, avoiding heavy architectural tuning.
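For reference, the closed-form minimal energy from the paper, which the SimAM code at the end of this article implements (the code computes 1/e_t^* directly), is:

    e_t^* = \frac{4(\hat\sigma^2 + \lambda)}{(t - \hat\mu)^2 + 2\hat\sigma^2 + 2\lambda}, \qquad \tilde{X} = \mathrm{sigmoid}\!\left(\frac{1}{E}\right)\odot X

where \hat\mu and \hat\sigma^2 are the mean and variance of the other neurons in the same channel, and lower energy means higher importance.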

Applicable scope: a general module-level improvement for YOLOv9.


2. Modification Steps

        The following uses the SE attention mechanism as an example of adding attention code to YOLOv9; the other attention mechanisms work the same way.

2.1 Copy the Code

        Copy the SE code (given at the end of this article) into the common.py file under the models package.
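To confirm the paste worked, a minimal standalone shape check might look like the sketch below (it assumes the SE class from the end of this article has already been added to models/common.py):

    # hypothetical quick check, run from the YOLOv9 project root
    import torch
    from models.common import SE  # assumes SE was pasted into models/common.py

    x = torch.rand(1, 64, 32, 32)
    print(SE(64, reduction=16)(x).shape)  # expected: torch.Size([1, 64, 32, 32])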

2.2 Modify yolo.py

        Add the code below at around line 700 of the yolo.py script (the exact location may shift between YOLOv9 versions).

    elif m in (SE,):
        args.insert(0, ch[f])
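For context, this elif goes into the module-dispatch chain of parse_model() in models/yolo.py. The abbreviated sketch below shows roughly where it lands (the surrounding branches may differ slightly between YOLOv9 versions) and why the channel count is inserted:

    # inside parse_model() in models/yolo.py (abbreviated sketch)
    elif m is Concat:
        c2 = sum(ch[x] for x in f)
    elif m in (SE,):
        # SE(channel, reduction): prepend the incoming channel count ch[f]
        # so the yaml entry only has to list the reduction ratio, e.g. [16]
        args.insert(0, ch[f])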

2.3 Create the Configuration File

        Create a model configuration file (a yaml file) and add our modification to it (you can copy one of the yaml files under models -> detect and edit it). Readers unfamiliar with YOLO-series yaml files can see my earlier yaml walkthrough:

YOLO Series ".yaml" File Explained - CSDN Blog

    # YOLOv9
    # Powered by https://blog.csdn.net/StopAndGoyyy

    # parameters
    nc: 80  # number of classes
    depth_multiple: 1  # model depth multiple
    width_multiple: 1  # layer channel multiple
    #activation: nn.LeakyReLU(0.1)
    #activation: nn.ReLU()

    # anchors
    anchors: 3

    # YOLOv9 backbone
    backbone:
      [
       [-1, 1, Silence, []],
       # conv down
       [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
       # conv down
       [-1, 1, Conv, [128, 3, 2]],  # 2-P2/4
       # elan-1 block
       [-1, 1, RepNCSPELAN4, [256, 128, 64, 1]],  # 3
       # avg-conv down
       [-1, 1, ADown, [256]],  # 4-P3/8
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 256, 128, 1]],  # 5
       # avg-conv down
       [-1, 1, ADown, [512]],  # 6-P4/16
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 7
       # avg-conv down
       [-1, 1, ADown, [512]],  # 8-P5/32
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 9
      ]

    # YOLOv9 head
    head:
      [
       # elan-spp block
       [-1, 1, SPPELAN, [512, 256]],  # 10
       # up-concat merge
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 7], 1, Concat, [1]],  # cat backbone P4
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 13
       # up-concat merge
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 5], 1, Concat, [1]],  # cat backbone P3
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]],  # 16 (P3/8-small)
       # avg-conv-down merge
       [-1, 1, ADown, [256]],
       [[-1, 13], 1, Concat, [1]],  # cat head P4
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 19 (P4/16-medium)
       # avg-conv-down merge
       [-1, 1, ADown, [512]],
       [[-1, 10], 1, Concat, [1]],  # cat head P5
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 22 (P5/32-large)

       # multi-level reversible auxiliary branch
       # routing
       [5, 1, CBLinear, [[256]]],  # 23
       [7, 1, CBLinear, [[256, 512]]],  # 24
       [9, 1, CBLinear, [[256, 512, 512]]],  # 25
       # conv down
       [0, 1, Conv, [64, 3, 2]],  # 26-P1/2
       # conv down
       [-1, 1, Conv, [128, 3, 2]],  # 27-P2/4
       # elan-1 block
       [-1, 1, RepNCSPELAN4, [256, 128, 64, 1]],  # 28
       # avg-conv down fuse
       [-1, 1, ADown, [256]],  # 29-P3/8
       [[23, 24, 25, -1], 1, CBFuse, [[0, 0, 0]]],  # 30
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 256, 128, 1]],  # 31
       # avg-conv down fuse
       [-1, 1, ADown, [512]],  # 32-P4/16
       [[24, 25, -1], 1, CBFuse, [[1, 1]]],  # 33
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 34
       # avg-conv down fuse
       [-1, 1, ADown, [512]],  # 35-P5/32
       [[25, -1], 1, CBFuse, [[2]]],  # 36
       # elan-2 block
       [-1, 1, RepNCSPELAN4, [512, 512, 256, 1]],  # 37
       # SE attention inserted here; 16 is the reduction ratio (the channel count
       # is filled in automatically by the parse_model change from step 2.2)
       [-1, 1, SE, [16]],  # 38

       # detection head
       # detect
       [[31, 34, 38, 16, 19, 22], 1, DualDDetect, [nc]],  # DualDDetect(A3, A4, A5, P3, P4, P5)
      ]

2.4 Training

        Finally, point the training script (train_dual.py) at the model configuration we created, typically via its --cfg argument, and run it. Readers who have not trained a model before can refer to my earlier article:

YOLOv9 Minimal Training Tutorial - CSDN Blog



SE code

    from torch import nn


    class SE(nn.Module):
        def __init__(self, channel, reduction=16):
            super(SE, self).__init__()
            self.avg_pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pool to 1x1
            self.fc = nn.Sequential(                 # excitation: bottleneck MLP + sigmoid gate
                nn.Linear(channel, channel // reduction, bias=False),
                nn.ReLU(),
                nn.Linear(channel // reduction, channel, bias=False),
                nn.Sigmoid()
            )

        def forward(self, x):
            b, c, _, _ = x.size()
            y = self.avg_pool(x).view(b, c)
            y = self.fc(y).view(b, c, 1, 1)
            return x * y.expand_as(x)  # rescale each channel by its learned weight

CBAM code

    import torch
    from torch import nn
    from torch.nn import init


    class ChannelAttention(nn.Module):
        def __init__(self, channel, reduction=16):
            super().__init__()
            self.maxpool = nn.AdaptiveMaxPool2d(1)
            self.avgpool = nn.AdaptiveAvgPool2d(1)
            self.se = nn.Sequential(
                nn.Conv2d(channel, channel // reduction, 1, bias=False),
                nn.ReLU(),
                nn.Conv2d(channel // reduction, channel, 1, bias=False)
            )
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):
            max_result = self.maxpool(x)
            avg_result = self.avgpool(x)
            max_out = self.se(max_result)
            avg_out = self.se(avg_result)
            output = self.sigmoid(max_out + avg_out)
            return output


    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size=kernel_size, padding=kernel_size // 2)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):
            max_result, _ = torch.max(x, dim=1, keepdim=True)
            avg_result = torch.mean(x, dim=1, keepdim=True)
            result = torch.cat([max_result, avg_result], 1)
            output = self.conv(result)
            output = self.sigmoid(output)
            return output


    class CBAMBlock(nn.Module):
        def __init__(self, channel=512, reduction=16, kernel_size=7):
            super().__init__()
            self.ca = ChannelAttention(channel=channel, reduction=reduction)
            self.sa = SpatialAttention(kernel_size=kernel_size)

        def init_weights(self):
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    init.kaiming_normal_(m.weight, mode='fan_out')
                    if m.bias is not None:
                        init.constant_(m.bias, 0)
                elif isinstance(m, nn.BatchNorm2d):
                    init.constant_(m.weight, 1)
                    init.constant_(m.bias, 0)
                elif isinstance(m, nn.Linear):
                    init.normal_(m.weight, std=0.001)
                    if m.bias is not None:
                        init.constant_(m.bias, 0)

        def forward(self, x):
            out = x * self.ca(x)
            out = out * self.sa(out)
            return out
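To wire CBAMBlock into YOLOv9 the same way as SE, note that it also takes the channel count as its first argument, so (as a sketch, assuming the class above is what you pasted into common.py) it can share the channel-inserting branch from step 2.2, and the yaml entry then only lists the remaining arguments:

    # models/yolo.py, parse_model(): extend the branch added in step 2.2
    elif m in (SE, CBAMBlock):
        args.insert(0, ch[f])

    # yaml entry (sketch): reduction=16, kernel_size=7
    #   [-1, 1, CBAMBlock, [16, 7]]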

ECA code

    from torch import nn
    from torch.nn import init


    class ECAAttention(nn.Module):
        def __init__(self, kernel_size=3):
            super().__init__()
            self.gap = nn.AdaptiveAvgPool2d(1)
            # 1D conv across channels: local cross-channel interaction, no dimensionality reduction
            self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)
            self.sigmoid = nn.Sigmoid()

        def init_weights(self):
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    init.kaiming_normal_(m.weight, mode='fan_out')
                    if m.bias is not None:
                        init.constant_(m.bias, 0)
                elif isinstance(m, nn.BatchNorm2d):
                    init.constant_(m.weight, 1)
                    init.constant_(m.bias, 0)
                elif isinstance(m, nn.Linear):
                    init.normal_(m.weight, std=0.001)
                    if m.bias is not None:
                        init.constant_(m.bias, 0)

        def forward(self, x):
            y = self.gap(x)                       # bs, c, 1, 1
            y = y.squeeze(-1).permute(0, 2, 1)    # bs, 1, c
            y = self.conv(y)                      # bs, 1, c
            y = self.sigmoid(y)                   # bs, 1, c
            y = y.permute(0, 2, 1).unsqueeze(-1)  # bs, c, 1, 1
            return x * y.expand_as(x)
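ECAAttention, as written above, only takes kernel_size, so a sketch of its registration needs no channel insertion; in the stock parse_model(), modules that match no branch typically fall through to a default that keeps the output channels equal to the input channels (verify this in your YOLOv9 version), so it only has to be defined in common.py and referenced from the yaml:

    # yaml entry (sketch): 1D-conv kernel size of 3
    #   [-1, 1, ECAAttention, [3]]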

SimAM code

    import torch
    from torch import nn


    class SimAM(torch.nn.Module):
        def __init__(self, e_lambda=1e-4):
            super(SimAM, self).__init__()
            self.activation = nn.Sigmoid()
            self.e_lambda = e_lambda

        def __repr__(self):
            s = self.__class__.__name__ + '('
            s += ('lambda=%f)' % self.e_lambda)
            return s

        @staticmethod
        def get_module_name():
            return "simam"

        def forward(self, x):
            b, c, h, w = x.size()
            n = w * h - 1
            # per-neuron energy: squared deviation from the channel mean, normalized by channel variance
            x_minus_mu_square = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
            y = x_minus_mu_square / (4 * (x_minus_mu_square.sum(dim=[2, 3], keepdim=True) / n + self.e_lambda)) + 0.5
            return x * self.activation(y)
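SimAM takes no channel argument at all, so a sketch of using it is simply an empty argument list in the yaml (or [1e-4] to set e_lambda explicitly); like ECAAttention it needs no extra branch in parse_model():

    # yaml entry (sketch): default e_lambda
    #   [-1, 1, SimAM, []]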

If you found this article useful, please follow the author!

