
Building a ChatGPT-style Application with SSE (LangChain)

ChatGPT has been popular lately, and I was curious how its streaming text is transmitted, so I looked into it and reproduced the effect.

The backend uses LangChain + FastAPI, together with sse_starlette (an extension package for Starlette) to return the streaming response.

First, define LangChain's CallbackHandler:

```python
import queue
import sys
from typing import Any, Dict

from langchain.callbacks.base import BaseCallbackHandler


class StreamingCallbackHandler(BaseCallbackHandler):
    """Collects streamed tokens into a thread-safe queue."""

    def __init__(self):
        self.tokens = queue.Queue()
        super().__init__()

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Called once per generated token when streaming=True.
        self.tokens.put(token)
        sys.stdout.write(token)
        sys.stdout.flush()

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        # Sentinel telling the consumer that the stream is finished.
        self.tokens.put(StopIteration)
```
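The queue plus StopIteration sentinel is a classic producer/consumer pattern. A tiny standalone demo (hypothetical, no LLM involved) of how a consumer drains such a queue:

```python
import queue
import threading

q = queue.Queue()

def producer():
    # Stands in for on_llm_new_token firing once per token.
    for tok in ["Hello", ", ", "world"]:
        q.put(tok)
    q.put(StopIteration)  # same sentinel as on_chain_end

threading.Thread(target=producer).start()

out = []
while True:
    tk = q.get()  # blocks until the producer puts something
    if tk is StopIteration:
        break
    out.append(tk)
print("".join(out))  # Hello, world
```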
Then the FastAPI endpoint, which runs the chain in the background and streams the queued tokens back as SSE events:

```python
import asyncio
from typing import Annotated

from fastapi import Body, FastAPI
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from sse_starlette.sse import EventSourceResponse

from stream_callback import StreamingCallbackHandler

app = FastAPI()


@app.post('/simpleChat', response_class=EventSourceResponse)
async def simple_chat(data: Annotated[dict, Body()]):
    app_input = data.get('appInput')
    callback_handler = StreamingCallbackHandler()
    chat_prompt = PromptTemplate(
        input_variables=['human_input'],
        template='''{human_input}''',
    )
    chain = LLMChain(
        llm=OpenAI(
            temperature=0.8,
            request_timeout=setting.REQUEST_TIMEOUT,  # project settings module, not shown here
            max_retries=1,
            max_tokens=2048,
            streaming=True,  # required so the callback receives tokens one by one
        ),
        prompt=chat_prompt,
    )
    # Schedule the chain on the running event loop; the generator below
    # drains the token queue while the chain is still producing.
    asyncio.create_task(
        chain.aapply([{'human_input': app_input}], callbacks=[callback_handler])
    )

    def resp():
        while True:
            tk = callback_handler.tokens.get()
            if tk is StopIteration:
                # Sentinel from on_chain_end: end the SSE stream.
                # (Raising StopIteration inside a generator would be a
                # RuntimeError under PEP 479, so we simply return.)
                return
            yield tk

    return EventSourceResponse(resp())
```
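On the wire, the response is just a sequence of `data:` frames. A hypothetical sketch of how a client reassembles the tokens (roughly what fetch-event-source does for us on the frontend):

```python
def parse_sse(body: str) -> list[str]:
    """Return the data payload of each event in a raw SSE stream."""
    events = []
    for block in body.split("\n\n"):       # frames are blank-line separated
        for line in block.split("\n"):
            if line.startswith("data: "):
                events.append(line[len("data: "):])
    return events

# Two streamed token frames, as the endpoint above would emit them.
sample = "data: Hel\n\ndata: lo!\n\n"
print("".join(parse_sse(sample)))  # Hello!
```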

The frontend uses Vue. Since the browser's native EventSource only supports GET requests, we use the fetch-event-source package to make the POST request.

npm install @microsoft/fetch-event-source  # install with npm
```vue
<template>
  <div>
    <span>{{ content }}</span>
  </div>
  <div>
    <el-form :model="form">
      <el-form-item>
        <el-input v-model="form.appInput" />
        <el-button type="primary" @click="submitChat" />
      </el-form-item>
    </el-form>
  </div>
</template>

<script setup lang='ts'>
import { reactive, ref } from "vue"
import { fetchEventSource } from "@microsoft/fetch-event-source"

const form = reactive({
  appInput: ''
});
const content = ref<string>('')

const submitChat = () => {
  if (form.appInput !== '') {
    content.value = ''
    fetchEventSource('/api/v1/simpleChat', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      // Key must match what the backend reads: data.get('appInput')
      body: JSON.stringify({
        appInput: form.appInput,
      }),
      onmessage(ev) {
        // Each SSE event carries one token; append it to the display.
        content.value += ev.data
      }
    })
  }
}
</script>
```
