
ViT (Vision Transformer) Notes


1. Self-Attention Formula

First, consider $\mathrm{Softmax}(XX^{T})X$. Multiplying a matrix by its own transpose means computing the inner product of each row vector with itself and with every other row vector; the inner product characterizes the angle between two vectors, i.e. the projection of one vector onto the other.

A large projection value means the two vectors are highly correlated.

$XX^{T}$ is a square matrix. Read in terms of row vectors, it stores the result of each row vector's inner product with itself and with all the other row vectors.

After Softmax, the numbers in each row sum to 1.

The result is then multiplied by the matrix $X$ again.

Each new row vector is the representation of the corresponding token (in the original example, the word vector of the character "早") after the attention-weighted sum.

The $Q$, $K$, $V$ matrices are all linear projections of $X$; the learnable $W$ matrices are introduced to improve the model's fitting capacity.

Dividing by $\sqrt{d_{k}}$ brings the variance of the dot products back to 1, which keeps the model stable.
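Putting these pieces together gives the standard scaled dot-product attention formula (with $Q = XW^{Q}$, $K = XW^{K}$, $V = XW^{V}$):

$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V$$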

2. Multi-Head Attention

First, just as in self-attention, the input $a$ is projected into $Q$, $K$, $V$; then $Q$, $K$, and $V$ are split evenly according to the number of heads.

Next, the results obtained by each head are concatenated (concat).

Finally, the concatenated result is fused with a projection matrix $W$.
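Written out, this is the standard multi-head formulation from the original Transformer paper, restated here for reference:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_{1}, \dots, \mathrm{head}_{h})\,W^{O}, \qquad \mathrm{head}_{i} = \mathrm{Attention}(QW_{i}^{Q}, KW_{i}^{K}, VW_{i}^{V})$$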

The code implementation is as follows:

import torch
import torch.nn as nn


class Attention(nn.Module):
    def __init__(self,
                 dim,                  # dimension of the input tokens
                 num_heads=8,
                 qkv_bias=False,
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim]
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x
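A quick shape check of the module above (a minimal usage sketch; the batch size of 1 and the sequence length of 197 = 196 patches + 1 class token are just the values used in this article):

import torch

attn = Attention(dim=768, num_heads=8)
x = torch.randn(1, 197, 768)   # [batch_size, num_patches + 1, total_embed_dim]
out = attn(x)
print(out.shape)               # expected: torch.Size([1, 197, 768])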

3. Embedding Layer

Split the image into a set of patches.

The input image (224×224) is divided into 16×16 patches, giving (224/16)² = 196 patches; each patch has shape [16, 16, 3] and is then flattened into a 16×16×3 = 768-dimensional vector (token).

The code implementation is as follows:

import torch.nn as nn


class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        # 224/16 = 14, 224/16 = 14
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        # 14*14 = 196
        self.num_patches = self.grid_size[0] * self.grid_size[1]
        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        B, C, H, W = x.shape
        # proj: [B, C, H, W] -> [B, embed_dim, 14, 14]
        # flatten: -> [B, embed_dim, num_patches], i.e. B*768*196
        # transpose: -> [B, num_patches, embed_dim], i.e. B*196*768
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x
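A minimal usage sketch of the module above, with the 224×224 image and 16×16 patch size from the text:

import torch

patch_embed = PatchEmbed(img_size=224, patch_size=16, in_c=3, embed_dim=768)
img = torch.randn(1, 3, 224, 224)   # [B, C, H, W]
tokens = patch_embed(img)
print(tokens.shape)                 # expected: torch.Size([1, 196, 768])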

After the Embedding layer, a class token is prepended and a positional encoding is added, taking the sequence from [196, 768] to [197, 768]. The class token is handled first:

# In __init__: define a learnable class token
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))  # the first 1 is the batch dimension, embed_dim = 768

# In forward: expand the class token so its batch dimension matches x
cls_token = self.cls_token.expand(x.shape[0], -1, -1)
if self.dist_token is None:
    # self.dist_token is None for ViT, so this branch is executed -> [B, 197, 768]
    x = torch.cat((cls_token, x), dim=1)
else:
    x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)

A positional encoding is then added on top:

# In __init__: define a learnable positional encoding; its shape is (1, 197, 768)
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))

# In forward: add it to the token sequence
x = x + self.pos_embed
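Putting the two snippets together, a minimal standalone sketch of the token-preparation step (assuming the plain ViT case where dist_token is None and num_tokens = 1):

import torch
import torch.nn as nn

embed_dim, num_patches = 768, 196
cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

x = torch.randn(2, num_patches, embed_dim)                        # [B, 196, 768] from PatchEmbed
x = torch.cat((cls_token.expand(x.shape[0], -1, -1), x), dim=1)   # prepend class token -> [B, 197, 768]
x = x + pos_embed                                                  # add positional encoding (broadcast over batch)
print(x.shape)                                                     # expected: torch.Size([2, 197, 768])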

4. Transformer Encoder

It is simply the Encoder Block stacked L times.
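A minimal sketch of one Encoder Block, reusing the Attention class from Section 2. The Pre-Norm layout, GELU activation, and MLP expansion ratio of 4 follow the standard ViT design and are assumptions here (the original implementation also adds Dropout/DropPath, omitted for brevity):

import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim, num_heads, mlp_ratio=4.):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = Attention(dim, num_heads=num_heads)   # Multi-Head Attention from Section 2
        self.norm2 = nn.LayerNorm(dim)
        hidden_dim = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(                          # two-layer MLP with GELU
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        x = x + self.attn(self.norm1(x))   # attention sub-block with residual connection
        x = x + self.mlp(self.norm2(x))    # MLP sub-block with residual connection
        return x

# The Transformer Encoder is simply L such blocks stacked, e.g. L = 12 for ViT-Base:
# blocks = nn.Sequential(*[Block(dim=768, num_heads=12) for _ in range(12)])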

5. MLP Head 

Take the [1, 768] class-token vector out of the [197, 768] sequence and feed it to the classifier.
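A minimal sketch, assuming the common case where the head is a single Linear layer (num_classes is a placeholder):

import torch
import torch.nn as nn

num_classes = 1000                  # placeholder number of classes
head = nn.Linear(768, num_classes)

x = torch.randn(2, 197, 768)        # stands in for the Transformer Encoder output
logits = head(x[:, 0])              # classify on the [1, 768] class token -> [B, num_classes]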

6. Network Structure
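The overall structure simply chains the modules above. A minimal sketch of the full forward pass, reusing PatchEmbed, Attention, and Block from the previous sections (the LayerNorm before the head and the ViT-Base defaults depth = 12, num_heads = 12 are assumptions based on the standard implementation):

import torch
import torch.nn as nn

class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12):
        super().__init__()
        self.patch_embed = PatchEmbed(img_size, patch_size, in_c, embed_dim)
        num_patches = self.patch_embed.num_patches
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        self.blocks = nn.Sequential(*[Block(embed_dim, num_heads) for _ in range(depth)])
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)   # MLP Head

    def forward(self, x):
        x = self.patch_embed(x)                                  # [B, 196, 768]
        cls_token = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat((cls_token, x), dim=1) + self.pos_embed    # [B, 197, 768]
        x = self.norm(self.blocks(x))                            # L stacked Encoder Blocks
        return self.head(x[:, 0])                                # classify on the class token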
