Example from the official documentation:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],

        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])
What I don't quite understand is this: after defining nn.Embedding(num_embeddings = vocabulary size, embedding_dim = embedding dimension), why can the module then be called directly on the input as embedding(input)?
Let's take a closer look:
>>> embedding = nn.Embedding(10, 3)
This constructs a lookup table for a (pretend) vocabulary of size 10, where each token is represented by a 3-d vector.
>>> embedding.weight
Parameter containing:
tensor([[ 1.2402, -1.0914, -0.5382],
        [-1.1031, -1.2430, -0.2571],
        [ 1.6682, -0.8926,  1.4263],
        [ 0.8971,  1.4592,  0.6712],
        [-1.1625, -0.1598,  0.4034],
        [-0.2902, -0.0323, -2.2259],
        [ 0.8332, -0.2452, -1.1508],
        [ 0.3786,  1.7752, -0.0591],
        [-1.8527, -2.5141, -0.4990],
        [-0.6188,  0.5902, -0.0860]], requires_grad=True)
Each row can be read as the vector representation of one token!
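To confirm this (a minimal check of my own, not part of the original walkthrough), looking up a single index returns exactly the corresponding row of the weight matrix:
>>> torch.equal(embedding(torch.LongTensor([2]))[0], embedding.weight[2])
True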
>>> embedding.weight.size()
torch.Size([10, 3])
This matches the arguments we passed to nn.Embedding.
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> input
tensor([[1, 2, 4, 5],
        [4, 3, 2, 9]])
Keep in mind: input holds indices, not vectors.
>>> input.shape
torch.Size([2, 4])
The input shape tells us this batch contains 2 sentences, each made up of 4 words.
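One practical caveat (my own note, not from the original text): every index must lie in the range [0, num_embeddings). With this table of size 10, an index such as 10 has no row to look up, so the call fails instead of returning a vector:
>>> embedding(torch.LongTensor([10]))  # 10 is outside [0, 10), so this raises an IndexError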
>>> a = embedding(input)
>>> a
tensor([[[-1.1031, -1.2430, -0.2571],
         [ 1.6682, -0.8926,  1.4263],
         [-1.1625, -0.1598,  0.4034],
         [-0.2902, -0.0323, -2.2259]],

        [[-1.1625, -0.1598,  0.4034],
         [ 0.8971,  1.4592,  0.6712],
         [ 1.6682, -0.8926,  1.4263],
         [-0.6188,  0.5902, -0.0860]]], grad_fn=<EmbeddingBackward>)
a = embedding(input) looks up, in embedding.weight, the word vector at each index!
Look at the first row of a: the first index in input is 1, so the row at index 1 of weight is retrieved. It is simply fetching word vectors by index!
>>> a.size()
torch.Size([2, 4, 3])
After the lookup we end up with a 2×4×3 tensor.
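This also answers the original question of why embedding(input) can be called directly: nn.Embedding is an nn.Module, so the call goes through __call__ to its forward method, and forward performs essentially the index lookup shown above (via torch.nn.functional.embedding). A minimal sketch of that equivalence (the explicit indexing below is my own illustration, not from the original post):
>>> b = embedding.weight[input]  # plain advanced indexing into the lookup table
>>> torch.equal(a, b)            # same values as calling the module
True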