
LangChain Study Notes (2): Memory

Implementing multi-turn conversations with langchain

        When I first started working with large language models, the "memory" they displayed over the course of a conversation was striking. What is the principle behind this memory, and how does langchain implement it? That is the topic of this note.

I. Implementing conversational Memory

        In a multi-turn conversation, the model does not actually remember the earlier turns; the effect is achieved through the prompt. On each turn (absent token limits or other constraints), the previous conversation content is packed into the prompt for the current turn. Let's look at how langchain implements this.

import os
import openai
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

os.environ["OPENAI_API_KEY"] = ''
openai.api_key = os.environ.get("OPENAI_API_KEY")

llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.0)
memory = ConversationBufferMemory()  # stores the full conversation history
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True  # print the formatted prompt on every turn
)

while True:
    content = input('user:')
    print(conversation.predict(input=content))

        langchain implements conversational memory through a family of classes in its memory module. ConversationBufferMemory, used here, keeps the entire conversation history. It is enabled by constructing a ConversationChain and passing the memory object in as a parameter; you can also specify which LLM to use and, with verbose=True, display the prompt content on every turn.

user:Hi,my name is Rain.

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi,my name is Rain.
AI:

> Finished chain.
Hello Rain! It's nice to meet you. How can I assist you today?

user:what is 1+1?

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi,my name is Rain.
AI: Hello Rain! It's nice to meet you. How can I assist you today?
Human: what is 1+1?
AI:

> Finished chain.
1 + 1 equals 2. Is there anything else you would like to know?

user:what is my name?

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi,my name is Rain.
AI: Hello Rain! It's nice to meet you. How can I assist you today?
Human: what is 1+1?
AI: 1 + 1 equals 2. Is there anything else you would like to know?
Human: what is my name?
AI:

> Finished chain.
Your name is Rain.
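        To confirm that the "memory" really is just text destined for the next prompt, you can inspect the buffer directly after the turns above. A minimal sketch using ConversationBufferMemory's standard accessors:

# The raw transcript that will be injected into the next prompt.
print(memory.buffer)

# The same content, as the dict the chain reads, e.g. {'history': 'Human: ...\nAI: ...'}.
print(memory.load_memory_variables({}))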

        In addition, the memory.save_context method lets us preload conversation content into memory before the dialogue even starts:

memory.save_context({'input': 'Hi,my name is Rain'}, {'output': "Hello Rain! It's nice to meet you. How can I assist you today?"})

user: what is my name?

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi,my name is Rain
AI: Hello Rain! It's nice to meet you. How can I assist you today?
Human: what is my name?
AI:

> Finished chain.
Your name is Rain.
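        save_context is not the only way to preload: the underlying chat_memory object exposes add_user_message and add_ai_message, which append one message at a time. A minimal sketch that seeds the same exchange as above:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
# Append the preset exchange message by message.
memory.chat_memory.add_user_message("Hi,my name is Rain")
memory.chat_memory.add_ai_message("Hello Rain! It's nice to meet you. How can I assist you today?")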

II. A few other memory types

1. ConversationBufferWindowMemory

        Unlike the class above, this one takes a parameter that sets how many turns of conversation to remember:

from langchain.memory import ConversationBufferWindowMemory
memory = ConversationBufferWindowMemory(k=1)  # k sets how many turns of dialogue to remember

For example, k=1 means the current turn only retains the memory of the single previous exchange, as the sketch below illustrates.
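        A quick way to see the window at work is to save two exchanges and then look at what the memory returns; with k=1 only the most recent exchange survives. A minimal sketch with made-up dialogue:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)
memory.save_context({'input': 'Hi,my name is Rain'}, {'output': 'Nice to meet you, Rain!'})
memory.save_context({'input': 'what is 1+1?'}, {'output': '1 + 1 equals 2.'})

# Only the last exchange remains in memory:
print(memory.load_memory_variables({}))
# {'history': 'Human: what is 1+1?\nAI: 1 + 1 equals 2.'}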

2. ConversationTokenBufferMemory

        This class lets you set a maximum token limit, which is another way of bounding the memory. Since LLM usage is billed by token count, once a conversation runs long, the tokens spent replaying memory on each turn add up quickly. The llm parameter is required because each model counts tokens differently.

from langchain.memory import ConversationTokenBufferMemory
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)  # cap the remembered history at 30 tokens
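        The effect resembles the window memory, except that pruning happens by token count rather than by turns: messages are dropped from the front of the buffer until what remains fits under max_token_limit. A minimal sketch reusing the llm defined earlier (the exact cut-off depends on the model's tokenizer, which is precisely why llm must be passed in):

from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)
memory.save_context({'input': 'Hi,my name is Rain'}, {'output': 'Nice to meet you, Rain!'})
memory.save_context({'input': 'what is 1+1?'}, {'output': '1 + 1 equals 2.'})

# Only the most recent messages that fit in the 30-token budget remain:
print(memory.load_memory_variables({}))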

3. ConversationSummaryBufferMemory

        This class summarizes all earlier turns down to the token limit you set, guaranteeing that the next prompt stays within the limit while preserving as much of the conversation history as possible.

from langchain.memory import ConversationSummaryBufferMemory
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=30)
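        Once the buffer exceeds the limit, the older turns are condensed by the LLM into a running summary that is injected as a System message, so a sketch like the one below actually makes a model call inside save_context. Again this reuses the llm defined earlier, and the example output is illustrative, not verbatim:

from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=30)
memory.save_context({'input': 'Hi,my name is Rain'}, {'output': 'Nice to meet you, Rain!'})
memory.save_context({'input': 'what is 1+1?'}, {'output': '1 + 1 equals 2.'})

# Pruned turns reappear as a generated summary, e.g.
# {'history': 'System: The human introduces themselves as Rain...\nHuman: what is 1+1?\nAI: 1 + 1 equals 2.'}
print(memory.load_memory_variables({}))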

...

III. Summary

        By studying how langchain implements conversational memory, I've learned how large models come by their apparent memory, one more small step in understanding them.
