
IMDB Dataset: RNN-Based Sentiment Analysis of IMDB Movie Reviews with Word2Vec & BiLSTM



I. Background and Motivation

Natural language processing (NLP) is currently one of the hottest research and product directions in artificial intelligence. The field already has a number of mature products, such as iFLYTEK's multilingual translation, Apple's Siri, Microsoft's Cortana, Xiaomi's XiaoAI assistant, and Baidu's XiaoDu. This project uses an LSTM (Long Short-Term Memory) model together with Word2Vec-style word-vector computation to build a sentiment classification model for IMDB movie reviews and output its predictions.

II. Data Preparation

Project address:

IDBM_Movie | Kaggle: www.kaggle.com

Download the three data files: labeledTrainData.tsv, imdb_master.csv, and testData.tsv.


III. Data Analysis and Modeling

1. Import modules and read the data

import re
import pandas as pd
import matplotlib.pyplot as plt

Read the data:

df1 = pd.read_csv('labeledTrainData.tsv', delimiter="\t")
df1 = df1.drop(['id'], axis=1)
df1.head()

df2 = pd.read_csv('imdb_master.csv', encoding="latin-1")
df2.head()


2. Data preprocessing

df2 = df2.drop(['Unnamed: 0', 'type', 'file'], axis=1)
df2.columns = ["review", "sentiment"]
df2.head()

df2 = df2[df2.sentiment != 'unsup']
df2['sentiment'] = df2['sentiment'].map({'pos': 1, 'neg': 0})
df2.head()

# Merge the two datasets
df = pd.concat([df1, df2]).reset_index(drop=True)
df.head()

# Inspect the overall structure of the merged data
df.info()
------------------------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 75000 entries, 0 to 74999
Data columns (total 2 columns):
review 75000 non-null object
sentiment 75000 non-null int64
dtypes: int64(1), object(1)
memory usage: 1.1+ MB
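Before plotting, a quick numeric check of the class balance (a one-line sketch using standard pandas, not in the original post):

# Count positive (1) and negative (0) reviews in the merged frame
print(df['sentiment'].value_counts())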


3. Data visualization

# Visualize the sentiment class distribution
MEDIUM_SIZE = 14  # font size; not defined in the original snippet, so set explicitly here
plt.hist(df[df.sentiment == 1].sentiment,
         bins=2, color='green', label='Positive')
plt.hist(df[df.sentiment == 0].sentiment,
         bins=2, color='blue', label='Negative')
plt.title('Classes distribution in the train data', fontsize=MEDIUM_SIZE)
plt.xticks([])
plt.xlim(-0.5, 2)
plt.legend()
plt.show()


4. Building the model

4.1 Word-vector preprocessing

import nltk  # NLTK supplies the stop-word list and the lemmatizer used below
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# nltk.download('stopwords'); nltk.download('wordnet')  # run once if the corpora are missing

stop_words = set(stopwords.words("english"))  # English stop words
lemmatizer = WordNetLemmatizer()  # reduces a word to its base form

def clean_text(text):
    # Use a regular expression to keep only word characters and whitespace
    text = re.sub(r'[^\w\s]', '', text, flags=re.UNICODE)
    # Lowercase everything and split into a token list
    text = text.lower()
    # lemmatize() expects a lowercase word; the second argument is the POS tag (default NOUN)
    text = [lemmatizer.lemmatize(token) for token in text.split(" ")]
    text = [lemmatizer.lemmatize(token, "v") for token in text]
    text = [word for word in text if word not in stop_words]
    text = " ".join(text)
    return text

'''
Alternative step-by-step pipeline kept from the original post for reference
(clean_review and word_tokenize are assumed to be defined/imported elsewhere):

def lemmatize(tokens: list) -> list:
    # 1. Lemmatize: strip affixes, e.g. "cars" -> "car", "ate" -> "eat"
    tokens = list(map(lemmatizer.lemmatize, tokens))
    lemmatized_tokens = list(map(lambda x: lemmatizer.lemmatize(x, "v"), tokens))
    # 2. Remove stop words
    meaningful_words = list(filter(lambda x: x not in stop_words, lemmatized_tokens))
    return meaningful_words

def preprocess(review: str, total: int, show_progress: bool = True) -> list:
    if show_progress:
        global counter
        counter += 1
        print('Processing... %6i/%6i' % (counter, total), end='\r')
    # 1. Clean text
    review = clean_review(review)
    # 2. Split into individual words
    tokens = word_tokenize(review)
    # 3. Lemmatize
    lemmas = lemmatize(tokens)
    # 4. Return the lemmatized tokens
    return lemmas
'''
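A quick sanity check of clean_text on a made-up sentence; the expected output in the comment is indicative and depends on the installed WordNet data:

sample = "The cats were running and crashed!!!"
print(clean_text(sample))  # expected output along the lines of: cat run crash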


4.2 Apply the clean_text() preprocessing function to the review field

df['Processed_Reviews'] = df.review.apply(lambda x: clean_text(x))
df.head()
df.Processed_Reviews.apply(lambda x: len(x.split(" "))).mean()


Output: 128.51009333333334 (the mean token count of a processed review).
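The mean above motivates the padding length maxlen = 130 used in the next step; a short sketch (not in the original post) to also check the upper quantiles of the token-count distribution before committing to it:

lengths = df.Processed_Reviews.apply(lambda x: len(x.split(" ")))
print(lengths.quantile([0.5, 0.9, 0.99]))  # median and upper quantiles of review lengths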

4.3 Building and training the BiLSTM

In:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, GRU, Flatten
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model, Sequential
from keras.layers import Convolution1D
from keras import initializers, regularizers, constraints, optimizers, layers

max_features = 6000  # keep only the 6000 most frequent words
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(df['Processed_Reviews'])
list_tokenized_train = tokenizer.texts_to_sequences(df['Processed_Reviews'])

maxlen = 130  # pad/truncate each review to 130 tokens (mean length is ~128.5)
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
y = df['sentiment']

embed_size = 128  # dimensionality of the word embeddings
model = Sequential()
model.add(Embedding(max_features, embed_size))
model.add(Bidirectional(LSTM(32, return_sequences=True)))
model.add(GlobalMaxPool1D())
model.add(Dense(20, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

batch_size = 100
epochs = 3
model.fit(X_t, y, batch_size=batch_size, epochs=epochs, validation_split=0.2)
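The Embedding layer above is trained from scratch; the Word2Vec in the article's title can instead be brought in by pre-training word vectors with gensim and using them to initialize the layer. A minimal sketch, assuming gensim >= 4.0; sentences, w2v, embedding_matrix, and the hyperparameters are illustrative and not from the original post:

import numpy as np
from gensim.models import Word2Vec

# Train Word2Vec on the cleaned reviews (each review as a list of tokens)
sentences = [s.split(" ") for s in df['Processed_Reviews']]
w2v = Word2Vec(sentences, vector_size=embed_size, window=5, min_count=2, workers=4)

# Build an embedding matrix aligned with the Keras tokenizer's word index
embedding_matrix = np.zeros((max_features, embed_size))
for word, i in tokenizer.word_index.items():
    if i < max_features and word in w2v.wv:
        embedding_matrix[i] = w2v.wv[word]

# Pass the pretrained vectors to the Embedding layer instead of learning from scratch:
# model.add(Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False))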

Out:

Training results (the original post shows the Keras training-log screenshot here).

IV. Evaluation and Prediction

df_test = pd.read_csv("testData.tsv", header=0, delimiter="\t", quoting=3)
df_test.head()
df_test["review"] = df_test.review.apply(lambda x: clean_text(x))
# The number after "_" in each test id is the review's star rating;
# ratings >= 5 are treated as positive (1), the rest as negative (0)
df_test["sentiment"] = df_test["id"].map(lambda x: 1 if int(x.strip('"').split("_")[1]) >= 5 else 0)
y_test = df_test["sentiment"]
list_sentences_test = df_test["review"]
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X_te = pad_sequences(list_tokenized_test, maxlen=maxlen)
prediction = model.predict(X_te)
y_pred = (prediction > 0.5)

from sklearn.metrics import f1_score, confusion_matrix
# sklearn expects (y_true, y_pred); the original snippet had the arguments swapped
print('F1-score: {0}'.format(f1_score(y_test, y_pred)))
print('Confusion matrix:')
confusion_matrix(y_test, y_pred)
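For a fuller picture than a single F1 number, sklearn's classification_report prints per-class precision, recall, and F1 (a small optional addition, not in the original post):

from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))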

Evaluation result: F1-score = 0.952. (F1 is a standard metric for classification tasks: the harmonic mean of precision and recall.)


Generate the submission file:

y_pred = model.predict(X_te)

def submit(predictions):
    df_test['sentiment'] = predictions
    df_test.to_csv('submission.csv', index=False, columns=['id', 'sentiment'])

submit(y_pred)
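Note that model.predict returns an (n, 1) array of probabilities in [0, 1]; if the leaderboard expects hard 0/1 labels, flatten and threshold before submitting (a minimal sketch):

submit((y_pred > 0.5).astype(int).ravel())  # flatten to 1-D and map probabilities to 0/1 labels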

V. Summary

In this project we used a large corpus of labeled, high-quality movie reviews and an LSTM neural network to build a text sentiment classification model. The training and prediction results were very good, solving the NLP problem of movie-review sentiment classification and taking us one step closer to NLP expertise.

To push the training score higher, hyperparameters can be tuned with GridSearchCV (see the sketch below), and techniques such as knowledge graphs and graph databases can be explored to speed up learning and improve the results.
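A minimal sketch of such tuning, assuming the legacy keras.wrappers.scikit_learn wrapper is available (newer setups can use the scikeras package's KerasClassifier instead); build_model and the parameter grid are illustrative:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(lstm_units=32, dropout=0.05):
    # Same architecture as above, parameterized for the grid search
    m = Sequential()
    m.add(Embedding(max_features, embed_size))
    m.add(Bidirectional(LSTM(lstm_units, return_sequences=True)))
    m.add(GlobalMaxPool1D())
    m.add(Dense(20, activation="relu"))
    m.add(Dropout(dropout))
    m.add(Dense(1, activation="sigmoid"))
    m.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return m

clf = KerasClassifier(build_fn=build_model, epochs=3, batch_size=100, verbose=0)
param_grid = {"lstm_units": [32, 64], "dropout": [0.05, 0.2]}
grid = GridSearchCV(clf, param_grid, cv=3, scoring="f1")
grid.fit(X_t, y)
print(grid.best_params_, grid.best_score_)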
