
Summary of GhostNet's network structure and implementation code (torch)


Ghost model performance

GhostNet is designed for mobile devices. The paper does not report the cost on the GPU side; the actual inference speed of GhostNet was measured on a Huawei P30 Pro and compared with other models, so its real performance on GPUs still needs to be tested. Compared with MobileNetV3, GhostNet gains roughly 0.3%~0.5% top-1 accuracy at the same FLOPs, and roughly 0.5% top-1 accuracy at the same latency. At around 75% top-1 accuracy, GhostNet's latency is about 40 ms while MobileNetV3's is about 45 ms. The paper concludes that GhostNet is overall better than current networks such as MobileNetV3, MobileNetV2, EfficientNet, ShuffleNetV2, MnasNet, FBNet and ProxylessNAS. [0]

 

How the Ghost model works

The overall structure of the Ghost model is copied straight from MobileNetV3; only the basic unit is replaced. The Ghost basic unit splits what used to be a single convolution into two steps. The first step is an ordinary convolution, but with a reduced number of output channels. The second step applies a depthwise convolution (i.e. only the first half of a depthwise separable convolution) to the output of the first step. This depthwise convolution differs slightly from the standard one: in a standard depthwise convolution the input and output channel counts are exactly equal and the number of kernels equals the number of input channels, whereas here the output channel count can be an integer multiple of the input channel count and the number of kernels equals the number of output channels. In addition, the second step has a parallel branch that simply passes through the output of the first step unchanged. The output channel count of a Ghost module is therefore the c channels from the first convolution plus the n*c channels from the second convolution, i.e. (n+1)*c channels in total (see the short sketch below). The motivation is the observation that after most convolutions, many channels of the output feature map are highly similar to each other; the first convolution can therefore produce the non-redundant channels, and the second, much cheaper convolution can generate the remaining, similar ("ghost") channels. This redundancy is what the feature-map visualization in the GhostNet paper shows.
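
To make the channel arithmetic concrete, here is a short sketch in plain Python that mirrors the arithmetic of the GhostModule class in the official code below; the values oup=64 and ratio=2 are just example numbers (ratio=2 is the official default), not something fixed by the paper:

import math

oup = 64    # desired number of output channels of the Ghost module (example value)
ratio = 2   # official default: c "real" channels plus (ratio-1)*c "ghost" channels

init_channels = math.ceil(oup / ratio)      # channels from the first (ordinary) convolution: 32
new_channels = init_channels * (ratio - 1)  # channels from the cheap depthwise convolution: 32

# total channels after concatenation = (n+1)*c, with c = init_channels and n = ratio-1
print(init_channels, new_channels, init_channels + new_channels)  # 32 32 64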

Ghost model implementation code

The Ghost model was proposed by Huawei, and official torch and tensorflow implementations are available on GitHub. The official torch code is shown below.

# Official GhostNet model code: https://github.com/huawei-noah/CV-Backbones
"""
Creates a GhostNet Model as defined in:
GhostNet: More Features from Cheap Operations By Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu.
https://arxiv.org/abs/1911.11907
Modified from https://github.com/d-li14/mobilenetv3.pytorch and https://github.com/rwightman/pytorch-image-models
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import math


__all__ = ['ghostnet']

# Momentum shared by every BN layer in the model; it smooths the running mean and variance.
# Note that torch and tf define it differently: the two definitions are complementary with respect to 1.
momentum = 0.01  # official default is 0.1; the smaller it is, the closer the running statistics
                 # get to the global mean and variance, provided the batch size is large enough


# Round v so that it is divisible by divisor
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that rounding down does not decrease the value by more than 10%
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


# Activation function
def hard_sigmoid(x, inplace: bool = False):
    if inplace:
        return x.add_(3.).clamp_(0., 6.).div_(6.)
    else:
        return F.relu6(x + 3.) / 6.


# SE (Squeeze-and-Excitation) module
class SqueezeExcite(nn.Module):
    def __init__(self, in_chs, se_ratio=0.25, reduced_base_chs=None,
                 act_layer=nn.ReLU, gate_fn=hard_sigmoid, divisor=4, **_):
        super(SqueezeExcite, self).__init__()
        self.gate_fn = gate_fn
        reduced_chs = _make_divisible((reduced_base_chs or in_chs) * se_ratio, divisor)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv_reduce = nn.Conv2d(in_chs, reduced_chs, 1, bias=True)
        self.act1 = act_layer(inplace=True)
        self.conv_expand = nn.Conv2d(reduced_chs, in_chs, 1, bias=True)

    def forward(self, x):
        x_se = self.avg_pool(x)
        x_se = self.conv_reduce(x_se)
        x_se = self.act1(x_se)
        x_se = self.conv_expand(x_se)
        x = x * self.gate_fn(x_se)
        return x


# Basic Conv-BN-Act block
class ConvBnAct(nn.Module):
    def __init__(self, in_chs, out_chs, kernel_size, stride=1, act_layer=nn.ReLU):
        super(ConvBnAct, self).__init__()
        self.conv = nn.Conv2d(in_chs, out_chs, kernel_size, stride, kernel_size//2, bias=False)
        self.bn1 = nn.BatchNorm2d(out_chs, momentum=momentum)
        self.act1 = act_layer(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn1(x)
        x = self.act1(x)
        return x


# Ghost module
class GhostModule(nn.Module):
    def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=True):
        super(GhostModule, self).__init__()
        self.oup = oup
        init_channels = math.ceil(oup / ratio)
        new_channels = init_channels*(ratio-1)

        # Step 1: ordinary convolution with a reduced number of output channels
        self.primary_conv = nn.Sequential(
            nn.Conv2d(inp, init_channels, kernel_size, stride, kernel_size//2, bias=False),
            nn.BatchNorm2d(init_channels, momentum=momentum),
            nn.ReLU(inplace=True) if relu else nn.Sequential(),
        )

        # Step 2: cheap depthwise convolution that generates the "ghost" feature maps
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1, dw_size//2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels, momentum=momentum),
            nn.ReLU(inplace=True) if relu else nn.Sequential(),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        return out[:, :self.oup, :, :]


# Basic building block of the Ghost network
class GhostBottleneck(nn.Module):
    """ Ghost bottleneck w/ optional SE"""

    def __init__(self, in_chs, mid_chs, out_chs, dw_kernel_size=3,
                 stride=1, act_layer=nn.ReLU, se_ratio=0.):
        super(GhostBottleneck, self).__init__()
        has_se = se_ratio is not None and se_ratio > 0.
        self.stride = stride

        # Point-wise expansion
        self.ghost1 = GhostModule(in_chs, mid_chs, relu=True)

        # Depth-wise convolution
        if self.stride > 1:
            self.conv_dw = nn.Conv2d(mid_chs, mid_chs, dw_kernel_size, stride=stride,
                                     padding=(dw_kernel_size-1)//2,
                                     groups=mid_chs, bias=False)
            self.bn_dw = nn.BatchNorm2d(mid_chs, momentum=momentum)

        # Squeeze-and-excitation
        if has_se:
            self.se = SqueezeExcite(mid_chs, se_ratio=se_ratio)
        else:
            self.se = None

        # Point-wise linear projection
        self.ghost2 = GhostModule(mid_chs, out_chs, relu=False)

        # shortcut
        if (in_chs == out_chs and self.stride == 1):
            self.shortcut = nn.Sequential()
        else:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_chs, in_chs, dw_kernel_size, stride=stride,
                          padding=(dw_kernel_size-1)//2, groups=in_chs, bias=False),
                nn.BatchNorm2d(in_chs, momentum=momentum),
                nn.Conv2d(in_chs, out_chs, 1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(out_chs, momentum=momentum),
            )

    def forward(self, x):
        residual = x

        # 1st ghost module
        x = self.ghost1(x)

        # Depth-wise convolution
        if self.stride > 1:
            x = self.conv_dw(x)
            x = self.bn_dw(x)

        # Squeeze-and-excitation
        if self.se is not None:
            x = self.se(x)

        # 2nd ghost module
        x = self.ghost2(x)

        x += self.shortcut(residual)
        return x


# Build the GhostNet model; the whole architecture copies MobileNetV3, only the basic unit is replaced
class GhostNet(nn.Module):
    def __init__(self, cfgs, num_classes=1000, width=1.0, dropout=0.2):
        super(GhostNet, self).__init__()
        # setting of inverted residual blocks
        self.cfgs = cfgs
        self.dropout = dropout

        # building first layer
        output_channel = _make_divisible(16 * width, 4)
        self.conv_stem = nn.Conv2d(3, output_channel, 3, 2, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(output_channel, momentum=momentum)
        self.act1 = nn.ReLU(inplace=True)
        input_channel = output_channel

        # building inverted residual blocks
        stages = []
        block = GhostBottleneck
        for cfg in self.cfgs:
            layers = []
            for k, exp_size, c, se_ratio, s in cfg:
                output_channel = _make_divisible(c * width, 4)
                hidden_channel = _make_divisible(exp_size * width, 4)
                layers.append(block(input_channel, hidden_channel, output_channel, k, s,
                                    se_ratio=se_ratio))
                input_channel = output_channel
            stages.append(nn.Sequential(*layers))

        output_channel = _make_divisible(exp_size * width, 4)
        stages.append(nn.Sequential(ConvBnAct(input_channel, output_channel, 1)))
        input_channel = output_channel

        self.blocks = nn.Sequential(*stages)

        # building last several layers
        output_channel = 1280
        self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.conv_head = nn.Conv2d(input_channel, output_channel, 1, 1, 0, bias=True)
        self.act2 = nn.ReLU(inplace=True)
        self.classifier = nn.Linear(output_channel, num_classes)

    def forward(self, x):
        x = self.conv_stem(x)
        x = self.bn1(x)
        x = self.act1(x)
        x = self.blocks(x)
        x = self.global_pool(x)
        x = self.conv_head(x)
        x = self.act2(x)
        x = x.view(x.size(0), -1)
        if self.dropout > 0.:
            x = F.dropout(x, p=self.dropout, training=self.training)
        # The final layer has no activation: this is the raw fully-connected output.
        # Softmax is part of the loss function, so for actual use add a softmax yourself.
        x = self.classifier(x)
        return x


def ghostnet(**kwargs):
    """
    Constructs a GhostNet model
    """
    cfgs = [
        # k, t, c, SE, s
        # stage1
        [[3,  16,  16, 0, 1]],
        # stage2
        [[3,  48,  24, 0, 2]],
        [[3,  72,  24, 0, 1]],
        # stage3
        [[5,  72,  40, 0.25, 2]],
        [[5, 120,  40, 0.25, 1]],
        # stage4
        [[3, 240,  80, 0, 2]],
        [[3, 200,  80, 0, 1],
         [3, 184,  80, 0, 1],
         [3, 184,  80, 0, 1],
         [3, 480, 112, 0.25, 1],
         [3, 672, 112, 0.25, 1]
        ],
        # stage5
        [[5, 672, 160, 0.25, 2]],
        [[5, 960, 160, 0, 1],
         [5, 960, 160, 0.25, 1],
         [5, 960, 160, 0, 1],
         [5, 960, 160, 0.25, 1]
        ]
    ]
    return GhostNet(cfgs, **kwargs)


if __name__ == '__main__':
    model = ghostnet()
    model.eval()
    print(model)
    input = torch.randn(32, 3, 320, 256)
    y = model(input)
    print(y.size())
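
As a quick sanity check of the model defined above, the following minimal usage sketch instantiates the network, counts its parameters and runs a forward pass. It assumes the code above has been saved as ghostnet.py (a hypothetical file name); the 224x224 input size is just an example, and the ~5.2 M parameter figure for width=1.0 is the value reported in the GhostNet paper:

import torch
from ghostnet import ghostnet   # assumes the code above is saved as ghostnet.py

model = ghostnet(num_classes=1000, width=1.0)
model.eval()

# count trainable parameters (about 5.2 M for width=1.0 according to the paper)
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"parameters: {num_params / 1e6:.2f} M")

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
    # as noted in the code above, the network outputs raw logits, so apply softmax yourself
    probs = torch.softmax(logits, dim=1)
print(probs.shape)  # torch.Size([1, 1000])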

The complete code, including training and testing, is available at: https://github.com/LegendBIT/torch-classification-model

References

[0] GhostNet学习笔记 (GhostNet study notes)
