
Series Index
LangChain Tutorial - Article Series

LangChain provides a flexible and powerful expression language, the LangChain Expression Language (LCEL), for building complex logic chains. By composing different runnable objects, LCEL supports advanced features such as sequential chains, nested chains, parallel chains, routing, and dynamic construction, covering the needs of a wide range of scenarios. This article walks through each of these features and how to implement them.

Sequential Chains

The core of LCEL is composing runnables in sequence, where the output of one runnable is automatically passed as input to the next. A sequential chain can be built with the pipe operator (|) or the explicit .pipe() method.

Here is a simple example:

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

Output:

Here's a bear joke for you:

Why did the bear dissolve in water?
Because it was a polar bear!

In this example, the prompt template formats the input for the chat model, the model generates the joke, and the output parser converts the result into a string.
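
The pipe operator is shorthand for the .pipe() method mentioned above, so the same chain can also be written explicitly. A minimal equivalent sketch:

chain = prompt.pipe(model).pipe(StrOutputParser())

result = chain.invoke({"topic": "bears"})
print(result)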

Nested Chains

Nested chains let us combine multiple chains into more complex logic. For example, the joke-generating chain can be composed with a second chain that analyzes how funny the joke is.

analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")

composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

Output:

Haha, that's a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing.
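
A note on the dict literal: {"joke": chain} is not a Runnable by itself; LCEL coerces it into a RunnableParallel when it is composed with another runnable. If you prefer explicit composition, the same nested chain can be written with RunnableParallel and .pipe() — a sketch of the equivalent form:

from langchain_core.runnables import RunnableParallel

composed_chain = (
    RunnableParallel({"joke": chain})
    .pipe(analysis_prompt)
    .pipe(model)
    .pipe(StrOutputParser())
)

result = composed_chain.invoke({"topic": "bears"})
print(result)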

Parallel Chains

RunnableParallel runs multiple chains in parallel and combines their results into a single dictionary. It is useful when several tasks need to be processed at the same time.

from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

Output:

{'joke': "Why don't bears like fast food? Because they can't catch it!",'poem': "In the quiet of the forest, the bear roams free\nMajestic and wild, a sight to see."
}
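
Since RunnableParallel is itself a Runnable, it exposes the standard invoke/batch/ainvoke interface. As a small usage sketch (the second topic is just an illustrative input), batch runs the whole parallel chain over several inputs:

results = parallel_chain.batch([{"topic": "bear"}, {"topic": "cat"}])
for r in results:
    print(r["joke"])
    print(r["poem"])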

Routing

Routing dynamically selects which sub-chain to execute based on the input. LCEL offers two ways to implement it:

Using a Custom Function

Dynamic routing with RunnableLambda:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")


def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain


full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

Using RunnableBranch

RunnableBranch selects a branch by matching conditions:

from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)
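
If no condition matches, RunnableBranch falls back to its last argument, the default branch. For instance, a question about neither LangChain nor Anthropic should be classified as Other and routed to general_chain (the classification comes from the model, so the route is not guaranteed):

result = full_chain.invoke({"question": "what is 2 + 2?"})
print(result)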

Dynamic Construction

Dynamic construction generates parts of a chain at runtime based on the input. Through RunnableLambda's return-value mechanism, a function can return a new Runnable, which is then executed in its place:

from operator import itemgetter

from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()


@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")


@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"


qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

Output:

According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.
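
The @chain decorator wraps the function in a RunnableLambda, and when the function returns a Runnable, that Runnable is invoked in its place. The two branches of contextualize_if_needed can be observed by invoking it on its own — a small sketch:

# Without chat_history, the passthrough branch returns the question unchanged.
print(contextualize_if_needed.invoke({"question": "what about egypt"}))

# With chat_history, the contextualization chain rewrites the question into a
# standalone one (the exact wording depends on the model).
print(contextualize_if_needed.invoke({
    "question": "what about egypt",
    "chat_history": [
        ("human", "what's the population of indonesia"),
        ("ai", "about 276 million"),
    ],
}))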

Complete Code Example

from operator import itemgetter

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

print("\n-----------------------------------\n")

# Simple demo
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Compose demo
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Parallel demo
from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

print("\n-----------------------------------\n")

# Route demo
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")


def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain


full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

print("\n-----------------------------------\n")

# Branch demo
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

print("\n-----------------------------------\n")

# Dynamic demo
from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()


@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")


@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"


qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

print("\n-----------------------------------\n")

J-LangChain Implementation of the Examples Above

J-LangChain - Intelligent Chain Construction

Summary

Through sequential chains, nested chains, parallel chains, routing, and dynamic construction, LCEL gives developers a powerful toolkit for building complex language tasks. Whether the need is a simple linear flow or complex runtime decision-making, LCEL handles it efficiently. Used well, these features let developers quickly assemble efficient, flexible chains to support applications across a wide range of scenarios.

