
Word2Vec Code: With Detailed Comments


Preface

This is the code for the Skip-Gram model of Word2Vec (TensorFlow 1.15.0). The code comes from https://blog.csdn.net/just_sort/article/details/87886385; I have added comments.

Dataset: http://mattmahoney.net/dc/text8.zip

Imports

import collections
import math
import os
import random
import zipfile
import urllib
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Data preprocessing

This part mainly downloads and unzips the data, builds the vocabulary, and assigns an id to each word.
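Note that the preprocessing code below assumes ./text8.zip has already been downloaded. If it has not, a minimal sketch along the following lines (using urllib.request and os; my own addition, not part of the original code) could fetch it first into the same path the later code expects.

import os
import urllib.request

url = "http://mattmahoney.net/dc/text8.zip"
filename = "./text8.zip"
if not os.path.exists(filename):
    # download the text8 corpus archive if it is not present yet
    urllib.request.urlretrieve(url, filename)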

filename = "./text8.zip"
# unzip the file and use tf.compat.as_str to turn the data into a list of words
with zipfile.ZipFile(filename) as f:
    result = f.read(f.namelist()[0])      # <class 'bytes'>: read the raw bytes
    document = tf.compat.as_str(result)   # convert the bytes into a string
# print(document[:50])  # anarchism originated as a term of abuse first use
words = document.split()  # split into words on whitespace
# print(len(words))  # 17005207 words

The vocabulary holds 50,000 words: the first is 'UNK', and the rest are the (50,000 - 1) most frequent words. 'UNK' stands for every word outside those top (50,000 - 1) words, and we count how many such words occur. Each word is given an id: 'UNK' gets id 0, and the others are numbered by frequency, with more frequent words receiving smaller ids. Finally, the corpus is converted from a list of words into a list of ids.

# build the vocabulary
vocabulary_size = 50000
# count the frequency of each word in the word list
fre = collections.Counter(words)
# use most_common to take the top 50000 most frequent words as the vocabulary
top = fre.most_common(vocabulary_size - 1)
count = [['UNK', -1]]  # each element of the list is [word, frequency]
count.extend(top)
# create a dict that maps each of the top 50000 words to an id
dictionary = dict()
i = 0
for word, _ in count:
    dictionary[word] = i  # assign ids to the top words
    i += 1
# reverse_dictionary: the id is the key, the word is the value
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
# words outside the top 50000 are treated as UNK (unknown), get id 0,
# and we count how many of them there are
unk_count = 0
data = list()  # the ids of all words in the corpus
for word in words:
    if word in dictionary:
        index = dictionary[word]  # id of a top word
    else:
        index = 0  # id of a non-top word
        unk_count += 1  # number of non-top words
    data.append(index)
count[0][1] = unk_count

Deleting the original word list saves memory.

del words  # delete the original word list to save memory
# look at the 10 most frequent words
for i, j in count[:10]:
    print("words:{}, count:{}".format(i, j))

Output

words:UNK, count:418391
words:the, count:1061396
words:of, count:593677
words:and, count:416629
words:one, count:411764
words:in, count:372201
words:a, count:325873
words:to, count:316376
words:zero, count:264975
words:nine, count:250430
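As a small sanity check of the mappings built above (reusing dictionary, reverse_dictionary, and data from the preprocessing code), we can verify that 'UNK' got id 0 and that the most frequent real word, 'the', got id 1:

# quick sanity check of the id mappings
print(dictionary['UNK'])      # 0
print(dictionary['the'])      # 1, since 'the' is the most frequent real word
print(reverse_dictionary[1])  # 'the'
print(data[:8])               # ids of the first eight words of the corpus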

Samples and labels

Starting from the skip_window-th word of the corpus, every word can serve as a target word, and num_skips samples are generated for each target word. The feature of each sample is the target word; the label is a context word that lies within skip_window positions of it.

# generate Word2Vec training samples
cur = 0  # current word pointer

# generate a batch of training data
def generate_batch(batch_size, num_skips, skip_window):
    '''
    Arguments
    batch_size: size of a batch; must be an integer multiple of num_skips
                (so that each batch contains all samples for a given word)
    skip_window: maximum distance a context word can be from the target word;
                 1 means samples are generated only from the two adjacent words
    num_skips: number of samples generated per target word; must not exceed 2 * skip_window
    '''
    # declare the word pointer cur as a global variable:
    # generate_batch is called repeatedly, so cur must be modifiable inside it
    global cur
    # batch_size must be an integer multiple of num_skips
    # (ensures each batch contains all samples for a given word)
    assert batch_size % num_skips == 0
    # num_skips must not exceed 2 * skip_window
    assert num_skips <= 2 * skip_window
    # span is the number of words used when creating samples for a target word,
    # including the target word itself and the words before and after it,
    # so span = 2 * skip_window + 1
    span = 2 * skip_window + 1
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    # create a deque (double-ended queue) with a maximum length of span;
    # appending to it keeps only the last span elements
    buffer = collections.deque(maxlen=span)
    # starting from the pointer cur, read span words into buffer as initial values
    for _ in range(span):
        data_index = data[cur]  # take the word ids one by one
        buffer.append(data_index)
        cur = (cur + 1) % len(data)
    for i in range(batch_size // num_skips):
        # the skip_window-th element of buffer is the target word
        target = skip_window
        # target_to_avoid lists words that may not be used as labels; it starts with
        # the target word itself, since we only predict context words, not the target
        target_to_avoid = [skip_window]
        ########## generate num_skips samples for the target word
        for j in range(num_skips):
            # pick the index of a context word
            while target in target_to_avoid:
                # draw a random integer in [0, span - 1]
                target = random.randint(0, span - 1)
            # the feature of each sample is the target word
            batch[i * num_skips + j] = buffer[skip_window]
            # the label of each sample is a context word
            labels[i * num_skips + j, 0] = buffer[target]
            # a context word that has been used will not be used again
            target_to_avoid.append(target)
        # read in the next word (the first word in buffer is dropped automatically)
        buffer.append(data[cur])
        # move the word pointer forward
        cur = (cur + 1) % len(data)
    # after both loops finish, we have batch_size samples
    return batch, labels

Call generate_batch for a simple functional test.

# call generate_batch for a simple functional test
batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
    print("target id:{:<10}, target word:{:<10}".format(batch[i], reverse_dictionary[batch[i]]),
          "context id:{:<10}, context word:{:<10}".format(labels[i, 0], reverse_dictionary[labels[i, 0]]))

Output

target id:3081      , target word:originated   context id:5234      , context word:anarchism
target id:3081      , target word:originated   context id:12        , context word:as
target id:12        , target word:as           context id:6         , context word:a
target id:12        , target word:as           context id:3081      , context word:originated
target id:6         , target word:a            context id:12        , context word:as
target id:6         , target word:a            context id:195       , context word:term
target id:195       , target word:term         context id:2         , context word:of
target id:195       , target word:term         context id:6         , context word:a

Validation set

# number of validation words to sample
valid_size = 16
# validation words are drawn only from the 100 most frequent words
valid_window = 100
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
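Since valid_examples is drawn at random, it can help to print which words were actually picked; a quick check that reuses reverse_dictionary from the preprocessing step:

# show the sampled validation words
print([reverse_dictionary[i] for i in valid_examples])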

Model and training

The core of the model is the computation inside tf.nn.nce_loss(), but that part is fully encapsulated; you would have to study its source code to really understand the algorithm. The code also picks 16 validation words and, by computing similarities between words, finds the 8 words most similar to each of them, which gives a quick way to check how well the model is doing.
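Because tf.nn.nce_loss() is a black box in the code below, here is a minimal NumPy sketch of the idea behind it. This is a simplification, not the TensorFlow implementation: the real op samples negatives from a log-uniform distribution over word ids and applies log-expected-count corrections, while this sketch just draws uniform negatives; the helper names are my own.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss_sketch(embed, weights, biases, label_ids, num_sampled, vocab_size,
                    rng=np.random):
    # embed: [batch, dim] input vectors; weights: [vocab, dim]; biases: [vocab]
    # label_ids: [batch] ids of the true context words
    losses = []
    for i in range(embed.shape[0]):
        # logit of the true context word, treated as a positive example
        pos_logit = embed[i] @ weights[label_ids[i]] + biases[label_ids[i]]
        # logits of num_sampled random noise words, treated as negative examples
        neg_ids = rng.randint(0, vocab_size, size=num_sampled)
        neg_logits = weights[neg_ids] @ embed[i] + biases[neg_ids]
        # sigmoid cross-entropy: push the true pair toward 1, noise pairs toward 0
        losses.append(-np.log(sigmoid(pos_logit))
                      - np.sum(np.log(1.0 - sigmoid(neg_logits))))
    return float(np.mean(losses))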

Define the hyperparameters

# set the maximum number of training steps to 100,000
num_steps = 100001
# maximum distance a context word can be from the target word
skip_window = 1
# number of samples generated per target word
num_skips = 2
# embedding_size is the dimension of the dense word vectors, typically between 50 and 1000
embedding_size = 128
# number of samples per batch
batch_size = 128
# number of noise words used as negative samples during training
num_sampled = 64

Build the model and train

'''
Each of the 50000 words gets a randomly initialized 128-dimensional vector.
'''
graph = tf.Graph()
with graph.as_default():
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    with tf.device('/cpu:0'):
        # each of the 50000 words corresponds to a randomly initialized 128-d vector
        embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        # look up the vectors embed for the inputs train_inputs
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)
        # weights and biases of the output layer
        nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0/math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
    # compute the loss: this is the core of Word2Vec
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights,
                       biases=nce_biases,
                       inputs=embed,
                       labels=train_labels,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))
    # the optimizer is SGD with a learning rate of 1.0
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
    # compute the L2 norm of the word vectors embeddings
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    # divide embeddings by their L2 norm to get normalized_embeddings
    normalized_embeddings = embeddings / norm
    # look up the embedding vectors of the validation words
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    # similarity between the validation words and every word in the vocabulary
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
    ####################### training
    # initialize all model parameters with tf.global_variables_initializer
    init = tf.global_variables_initializer()

with tf.Session(graph=graph) as session:
    init.run()
    print('Initialized')
    average_loss = 0
    for step in range(1, num_steps + 1):
        batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}
        _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += loss_val
        if step % 2000 == 0:
            average_loss /= 2000
            print("Average loss at step {} : {}".format(step, average_loss))
            average_loss = 0
        if step % 10000 == 0:
            # compute the similarity between the validation words and all words
            sim = similarity.eval()
            # show the 8 words most similar to each validation word
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8
                nearest = (-sim[i, :]).argsort()[1:top_k + 1]
                log_str = "Nearest to %s :" % valid_word
                for k in range(top_k):
                    ID = nearest[k]
                    close_word = reverse_dictionary[ID]
                    log_str = "%s %s ," % (log_str, close_word)
                print(log_str)
    final_embeddings = normalized_embeddings.eval()

Results

Initialized
Average loss at step 2000 : 114.31209418106079
Average loss at step 4000 : 52.73329355382919
Average loss at step 6000 : 33.614175471544264
Average loss at step 8000 : 23.143466392278672
Average loss at step 10000 : 17.812667747735976
Nearest to up : tourist , pertain , fragments , northeast , electricity , negro , agreed , akita ,
Nearest to at : in , poison , and , impacts , arno , of , for , beloved ,
Nearest to been : agents , arno , install , bp , winter , lerner , gesch , is ,
Nearest to they : arno , bits , front , attractive , contributing , not , pardoned , conservatoire ,
Nearest to were : thyestes , are , and , bei , arno , is , in , refugees ,
Nearest to about : arno , hope , hexagonal , bei , people , whole , neglected , jethro ,
Nearest to war : drivers , going , amalthea , originally , antares , music , acetylene , sexuality ,
Nearest to but : bei , and , potency , continued , treated , bradley , lyric , mosque ,
Nearest to with : and , from , in , for , bei , cajun , housman , refugees ,
Nearest to this : amalthea , it , them , the , legislative , a , basilica , feminine ,
Nearest to time : UNK , bits , unintended , interrupt , goddess , secretary , cider , desktop ,
Nearest to many : processor , horse , akita , arno , workbench , efficacy , draught , trials ,
Nearest to nine : zero , eight , cider , three , six , akita , abingdon , arno ,
Nearest to during : truncated , cis , arno , who , rushton , of , latitude , minorities ,
Nearest to six : nine , eight , three , tuned , arno , zero , schema , cider ,
Nearest to other : akita , decay , one , wyoming , residential , arbitration , film , country ,
......
Average loss at step 92000 : 4.7263513580560685
Average loss at step 94000 : 4.664387725114822
Average loss at step 96000 : 4.719061789274216
Average loss at step 98000 : 4.678433487653733
Average loss at step 100000 : 4.569495327591896
Nearest to up : out , off , them , clout , stroustrup , trusted , yin , cubs ,
Nearest to at : in , during , arno , under , michelob , on , netbios , thaler ,
Nearest to been : be , become , was , had , were , by , geoff , attending ,
Nearest to they : we , he , there , you , she , it , who , not ,
Nearest to were : are , was , have , had , those , be , neutronic , refugees ,
Nearest to about : unital , microcebus , dinar , thibetanus , hope , that , arno , browns ,
Nearest to war : incarnation , measurement , scoping , consult , geralt , sissy , rquez , tamarin ,
Nearest to but : however , and , vulpes , bei , netbios , although , neutronic , microcebus ,
Nearest to with : between , widehat , while , prism , against , in , filings , vulpes ,
Nearest to this : it , which , the , amalthea , michelob , vulpes , thibetanus , that ,
Nearest to time : lemmy , vulpes , glamour , secretary , fv , gcl , approximating , dragoon ,
Nearest to many : some , several , these , quagga , most , other , mico , all ,
Nearest to nine : eight , seven , six , five , four , zero , three , michelob ,
Nearest to during : after , in , at , when , under , ssbn , cebus , since ,
Nearest to six : seven , eight , five , four , three , nine , zero , two ,
Nearest to other : various , many , some , quagga , nuke , bangor , different , including ,

Visualization

Here we take 100 high-frequency words from the generated [50000, 128] word-vector matrix, reduce them to 2 dimensions with the TSNE module, and plot them on a coordinate plane.

def plot_embed(low_dim_embs, labels, filename='tsne.png'):
    '''
    low_dim_embs: an n x 2 two-dimensional array of reduced word vectors
    labels: a list of n words, one per row of low_dim_embs
    '''
    assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
    plt.figure(figsize=(18, 18))
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i, :]
        plt.scatter(x, y)
        plt.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                     ha='right', va='bottom')
    plt.savefig(filename)

Plotting

from sklearn.manifold import TSNE

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
# plot only 100 words
plot_only = 100
low_dim_embs = tsne.fit_transform(final_embeddings[1:plot_only + 1, :])
# the labels are the words themselves
labels = [reverse_dictionary[i] for i in range(1, plot_only + 1)]
plot_embed(low_dim_embs, labels)
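Beyond the t-SNE picture, the trained vectors can also be queried directly. Below is a minimal sketch (the helper nearest_words is my own addition); it reuses dictionary, reverse_dictionary, and final_embeddings from above, and because final_embeddings is already L2-normalized, cosine similarity reduces to a dot product.

def nearest_words(word, k=8):
    # look up the word's id (fall back to UNK if it is out of vocabulary)
    idx = dictionary.get(word, 0)
    # dot products against all normalized vectors = cosine similarities
    sims = final_embeddings @ final_embeddings[idx]
    nearest = (-sims).argsort()[1:k + 1]  # skip the word itself
    return [reverse_dictionary[i] for i in nearest]

print(nearest_words('three'))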

(The resulting t-SNE plot of the 100 most frequent words is used as the post's cover image.)