
LangChain 42: Understanding LCEL, part 6: calling different LLMs at runtime (LangChain Expression Language)


LangChain series articles

  1. LangChain 29: Debugging with verbose output
  2. LangChain 30: ChatGPT LLMs take a string as input and return a string; Chat Models take a list of messages and return a message
  3. LangChain 31: Reusable Prompt templates
  4. LangChain 32: Output parsers
  5. LangChain 33: LangChain Expression Language (LCEL)
  6. LangChain 34: One-stop LLM deployment with LangServe
  7. LangChain 35: Security best practices and defense in depth
  8. LangChain 36: Understanding LCEL, part 1: advantages of LangChain Expression Language (LCEL)
  9. LangChain 37: Understanding LCEL, part 2: prompt + model + output parser with LangChain Expression Language (LCEL)
  10. LangChain 38: Understanding LCEL, part 3: retrieval-augmented generation (RAG) with LangChain Expression Language (LCEL)
  11. LangChain 39: Understanding LCEL, part 4: why use LangChain Expression Language (LCEL)
  12. LangChain 40: A workaround for "Account deactivated" when calling the OpenAI ChatGPT API from LangChain: going through a relay API
  13. LangChain 41: Understanding LCEL, part 5: why use LCEL to call LLMs (LangChain Expression Language)


1. Runtime configurability

Suppose we want to make the choice of chat model or LLM configurable at runtime.

1.1 The LCEL implementation

We select a model by passing a `model` value in the `configurable` section of the `config` argument of `configurable_chain`'s methods. The example below calls two OpenAI models: the instruct model and the chat model.

from langchain_core.runnables import ConfigurableField
from langchain_core.runnables import RunnablePassthrough
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain_core.output_parsers import StrOutputParser
from dotenv import load_dotenv

load_dotenv()

from langchain.globals import set_debug

set_debug(True)

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-3.5-turbo")
llm = OpenAI(model="gpt-3.5-turbo-instruct")

# "chat_openai" (the default_key) keeps the ChatOpenAI model above;
# the "openai" alternative swaps in the instruct-style OpenAI LLM.
configurable_model = model.configurable_alternatives(
    ConfigurableField(id="model"), 
    default_key="chat_openai", 
    openai=llm,
)
configurable_chain = (
    {"topic": RunnablePassthrough()} 
    | prompt 
    | configurable_model 
    | output_parser
)

# Configurable fields are selected under the "configurable" key of the
# config; a bare config={"model": "openai"} may be silently ignored by
# some langchain-core versions.
configurable_chain.invoke(
    "ice cream", 
    config={"configurable": {"model": "openai"}},  # instruct model
)
stream = configurable_chain.stream(
    "spaghetti", 
    config={"configurable": {"model": "chat_openai"}},  # default chat model
)
for chunk in stream:
    print(chunk, end="", flush=True)

# response = configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])
response = configurable_chain.batch(["ice cream", "spaghetti"])
print('ice cream, spaghetti, response >> ', response)

# The async variants come for free as well, e.g. from a script:
# import asyncio
# asyncio.run(configurable_chain.ainvoke("ice cream"))
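
To pin an alternative ahead of time instead of per call, the selection can also be bound to the chain with with_config, which returns a new runnable carrying that configuration. A minimal sketch reusing the chain above:

# Bind the "openai" (instruct) alternative once; every call on this
# runnable now uses it without repeating the config argument.
instruct_chain = configurable_chain.with_config(
    configurable={"model": "openai"}
)
print(instruct_chain.invoke("ice cream"))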

Output

⚡ python LCEL/runtime_lcel.py
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] [3ms] Exiting Chain run with output:
{
  "output": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel] [12ms] Exiting Chain run with output:
{
  "topic": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "topic": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [5ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Tell me a short joke about ice cream",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Tell me a short joke about ice cream"
  ]
}
[llm/end] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] [1.98s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Why did the ice cream go to school?\n\nBecause it wanted to get a little \"sundae\" education!",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Why did the ice cream go to school?\n\nBecause it wanted to get a little \"sundae\" education!",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 23,
      "prompt_tokens": 15,
      "total_tokens": 38
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 6:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 6:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "Why did the ice cream go to school?\n\nBecause it wanted to get a little \"sundae\" education!"
}
[chain/end] [1:chain:RunnableSequence] [2.01s] Exiting Chain run with output:
{
  "output": "Why did the ice cream go to school?\n\nBecause it wanted to get a little \"sundae\" education!"
}
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": ""
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] [8ms] Exiting Chain run with output:
{
  "output": "spaghetti"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel] [10ms] Exiting Chain run with output:
{
  "topic": "spaghetti"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "topic": "spaghetti"
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [5ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Tell me a short joke about spaghetti",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Tell me a short joke about spaghetti"
  ]
}
[chain/start] [1:chain:RunnableSequence > 6:parser:StrOutputParser] Entering Parser run with input:
{
  "input": ""
}
Why did the spaghetti go to the party?

Because it heard it was pasta-tively awesome![llm/end] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] [1.47s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Why did the spaghetti go to the party?\n\nBecause it heard it was pasta-tively awesome!",
        "generation_info": {
          "finish_reason": "stop"
        },
        "type": "ChatGenerationChunk",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "example": false,
            "content": "Why did the spaghetti go to the party?\n\nBecause it heard it was pasta-tively awesome!",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null
}
[chain/end] [1:chain:RunnableSequence > 6:parser:StrOutputParser] [575ms] Exiting Parser run with output:
{
  "output": "Why did the spaghetti go to the party?\n\nBecause it heard it was pasta-tively awesome!"
}
[chain/end] [1:chain:RunnableSequence] [1.56s] Exiting Chain run with output:
{
  "output": "Why did the spaghetti go to the party?\n\nBecause it heard it was pasta-tively awesome!"
}
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "spaghetti"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel] Entering Chain run with input:
{
  "input": "spaghetti"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": "spaghetti"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] [5ms] Exiting Chain run with output:
{
  "output": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] [4ms] Exiting Chain run with output:
{
  "output": "spaghetti"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel] [10ms] Exiting Chain run with output:
{
  "topic": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel] [14ms] Exiting Chain run with output:
{
  "topic": "spaghetti"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "topic": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "topic": "spaghetti"
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Tell me a short joke about ice cream",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Tell me a short joke about spaghetti",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Tell me a short joke about ice cream"
  ]
}
[llm/start] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Tell me a short joke about spaghetti"
  ]
}
[llm/end] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] [1.06s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Why did the tomato turn red?\n\nBecause it saw the spaghetti sauce!",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Why did the tomato turn red?\n\nBecause it saw the spaghetti sauce!",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 14,
      "prompt_tokens": 14,
      "total_tokens": 28
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
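
Note the difference between the two runs in the trace above: the blocking invoke call yields a ChatGeneration wrapping an AIMessage, while the streaming run yields a ChatGenerationChunk wrapping an AIMessageChunk. Chunk types are additive, so the pieces emitted during streaming can be concatenated back into a full message. A minimal sketch (the example strings are made up):

from langchain_core.messages import AIMessageChunk

# AIMessageChunk implements "+", so streamed chunks merge into a single
# chunk whose content is the concatenation of the pieces.
merged = AIMessageChunk(content="Why did the spaghetti ") + AIMessageChunk(
    content="go to the party?"
)
print(merged.content)  # Why did the spaghetti go to the party?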

1.2 Without LCEL

The traditional approach represents the model choice as a string and dispatches to the different chains with if/else branches.

from typing import Iterator, List

# invoke_chain, invoke_llm_chain, invoke_anthropic_chain, stream_chain,
# etc. are the hand-written chains from earlier articles in this series.

def invoke_configurable_chain(
    topic: str, 
    *, 
    model: str = "chat_openai"
) -> str:
    if model == "chat_openai":
        return invoke_chain(topic)
    elif model == "openai":
        return invoke_llm_chain(topic)
    elif model == "anthropic":
        return invoke_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def stream_configurable_chain(
    topic: str, 
    *, 
    model: str = "chat_openai"
) -> Iterator[str]:
    if model == "chat_openai":
        return stream_chain(topic)
    elif model == "openai":
        # Note we haven't implemented this yet.
        return stream_llm_chain(topic)
    elif model == "anthropic":
        # Note we haven't implemented this yet
        return stream_anthropic_chain(topic)
    else:
        raise ValueError(
            f"Received invalid model '{model}'."
            " Expected one of chat_openai, openai, anthropic"
        )

def batch_configurable_chain(
    topics: List[str], 
    *, 
    model: str = "chat_openai"
) -> List[str]:
    # You get the idea
    ...

async def abatch_configurable_chain(
    topics: List[str], 
    *, 
    model: str = "chat_openai"
) -> List[str]:
    ...

invoke_configurable_chain("ice cream", model="openai")
stream = stream_configurable_chain(
    "ice cream", 
    model="anthropic"
)
for chunk in stream:
    print(chunk, end="", flush=True)

# batch_configurable_chain(["ice cream", "spaghetti", "dumplings"])
# await ainvoke_configurable_chain("ice cream")
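
With LCEL, supporting the anthropic option from the if/else version above needs no new dispatch code; it is one more keyword argument to configurable_alternatives. A sketch, assuming the Anthropic integration is installed and ANTHROPIC_API_KEY is set:

from langchain.chat_models import ChatAnthropic

anthropic = ChatAnthropic(model="claude-2")
configurable_model = model.configurable_alternatives(
    ConfigurableField(id="model"),
    default_key="chat_openai",
    openai=llm,
    # selected via config={"configurable": {"model": "anthropic"}}
    anthropic=anthropic,
)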

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/why
