LangChain Core Components: Agent

Interview-notes posts are on hold; LLM content comes first. The material here is mainly sourced from the official LangChain documentation.

When integrating DeepSeek with LangChain, remember to install langchain-deepseek.

Models

Static model

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-5.4",
    temperature=0.1,
    max_tokens=1000,
    timeout=30
    # ... (other params)
)
agent = create_agent(model, tools=tools)

Dynamic model

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from langchain_openai import ChatOpenAI

# 1. Set the API key (preferably via an environment variable or passed to the instance)
DEEPSEEK_API_KEY = "sk-xxx"  # replace with your real DeepSeek API key
basic_model = ChatOpenAI(model="deepseek-v4-flash")
pro_model = ChatOpenAI(model="deepseek-v4-pro")

@wrap_model_call
def dynamic_select_model(request: ModelRequest, handler) -> ModelResponse:
    message_count = len(request.state["messages"])
    if message_count < 10:
        model = basic_model
    else:
        model = pro_model
    return handler(request.override(model=model))

agent = create_agent(
    model=basic_model,
    middleware=[dynamic_select_model]
)

The @wrap_model_call decorator can be understood as wrapping the LLM call in a protective shell, or interceptor. Through it, you get full control over the entire lifecycle of the model call.

A function decorated with @wrap_model_call usually takes two core parameters:

  1. request: carries all the information for this model call (prompt, model instance, available tools, and so on).
  2. handler: a callback. The model is only actually invoked when you call handler(request); if you never call it, the model call is skipped entirely.
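The handler contract can be illustrated without LangChain at all. Below is a pure-Python sketch (all names such as `cache` and `fake_model_handler` are hypothetical stand-ins, not LangChain APIs) showing the key point: if the wrapper never calls `handler(request)`, the model is never invoked.

```python
# Pure-Python sketch of the interception idea: skipping `handler` skips the model.
cache = {"hello": "cached answer"}
calls = {"model": 0}

def fake_model_handler(request):
    # stands in for the real model call
    calls["model"] += 1
    return f"model answer for {request}"

def cached_call(request, handler):
    if request in cache:
        return cache[request]  # handler never runs: the model call is skipped
    return handler(request)

print(cached_call("hello", fake_model_handler))  # served from cache, model untouched
print(cached_call("bye", fake_model_handler))    # cache miss, falls through to the model
```

This is exactly the shape a caching middleware would take inside @wrap_model_call.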

Examples: 1. Basic retry logic

@wrap_model_call
def retry_on_error(request, handler):
    max_retries = 3  # retry at most 3 times
    for attempt in range(max_retries):
        try:
            return handler(request)  # attempt the model call
        except Exception:
            # if the final attempt still fails, re-raise
            if attempt == max_retries - 1:
                raise
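The retry pattern can be exercised standalone. A minimal demo, assuming a hypothetical `fake_flaky_handler` that fails twice before succeeding (not a LangChain API):

```python
# Standalone demo of the retry wrapper: the handler fails twice, then succeeds.
attempts = {"n": 0}

def fake_flaky_handler(request):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

def retry(request, handler, max_retries=3):
    for attempt in range(max_retries):
        try:
            return handler(request)
        except Exception:
            if attempt == max_retries - 1:
                raise

print(retry("req", fake_flaky_handler))  # prints "ok" on the third attempt
```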

2. Automatic fallback to a backup model

@wrap_model_call
def fallback_model(request, handler):
    # try the primary model first
    try:
        return handler(request)
    except Exception:
        pass  # if the primary model fails, swallow the error and fall through

    # switch to the backup model and try again
    request = request.override(model=fallback_model_instance)
    return handler(request)

request.override() is LangChain's canonical method for modifying a model-call request safely and consistently.

3. Rewriting the response (returning a full ModelResponse)

@wrap_model_call
def uppercase_responses(request, handler):
    response = handler(request)
    ai_msg = response.result[0]
    # uppercase the AI's reply and re-wrap it before returning
    return ModelResponse(
        result=[AIMessage(content=ai_msg.content.upper())],
        structured_response=response.structured_response,
    )

4. Returning a plain AIMessage (the framework converts it automatically)

@wrap_model_call
def simple_response(request, handler):
    # returning an AIMessage directly also works; the framework converts it into a ModelResponse
    return AIMessage(content="Simple response")

Agent middleware (wrap_model_call being the most important)

  1. before_model (before the model call)
    The request has been issued but has not yet reached the model. Here you can intercept and modify the system prompt or the conversation history sent to the model.
  2. wrap_model_call (wrapping the model call)
    Acts like a shell around the real model call. Implement retries, model fallback, cache checks, and other hard control logic here.
  3. after_model (after the model call)
    The model has returned a result, but it has not yet been handed to the next step. Check the output for compliance or reformat it here.
  4. before_tool / after_tool (around tool execution)
    If the model decides to call a tool, these two lifecycle hooks fire before and after the tool actually runs, for pre-execution safety checks or post-execution result logging.
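The ordering of these hooks around a single model call can be sketched in plain Python. The function names below merely mirror the hook names; nothing here is a real LangChain API:

```python
# Plain-Python sketch of the hook firing order around one model call.
events = []

def before_model():
    events.append("before_model")

def wrap_model_call(request, handler):
    events.append("wrap:enter")
    result = handler(request)  # the actual model call happens inside the wrap
    events.append("wrap:exit")
    return result

def after_model():
    events.append("after_model")

def fake_model(request):
    events.append("model")
    return "answer"

before_model()
wrap_model_call("question", fake_model)
after_model()
print(events)  # ['before_model', 'wrap:enter', 'model', 'wrap:exit', 'after_model']
```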

Tools

Tools give the agent the ability to take action. They are usually APIs, functions, and so on that we implement in code.

Static tools

Static tools are defined when the agent is created and stay unchanged for the entire run. This is the most common and most direct approach.

To define an agent with static tools, pass the list of tools to the agent.

from langchain.tools import tool
from langchain.agents import create_agent


@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"Weather in {location}: Sunny, 72°F"

agent = create_agent(model=model, tools=[search, get_weather])

Dynamic tools ⭐

With dynamic tools, the agent's available tool set is modified at runtime instead of being fully defined up front.

Filtering pre-registered tools

By the time create_agent runs, the tools are already created; the tools themselves are static, but their availability is dynamic.

Based on State: State → temporary per-session data (conversation history, current context), written during the agent run

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable

@wrap_model_call
def state_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on conversation State."""
    # Read from State: check if user has authenticated
    state = request.state
    is_authenticated = state.get("authenticated", False)
    message_count = len(state["messages"])

    # Only enable sensitive tools after authentication
    if not is_authenticated:
        tools = [t for t in request.tools if t.name.startswith("public_")]
        request = request.override(tools=tools)
    elif message_count < 5:
        # Limit tools early in conversation
        tools = [t for t in request.tools if t.name != "advanced_search"]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[public_search, private_search, advanced_search],
    middleware=[state_based_tools]
)

Based on Store: Store → persistent cross-session data (user preferences, permission configs, feature flags), written externally ahead of time

@dataclass is a Python decorator that auto-generates class boilerplate, much like Lombok's @Data: __init__ (constructor), __repr__ (for printing, like toString), and __eq__ (for value comparison).
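A quick stdlib demonstration of what the decorator generates (unrelated to LangChain; `User` is an illustrative class):

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    name: str = "anonymous"

u = User("u1")                       # __init__ generated for us
print(u)                             # __repr__: User(user_id='u1', name='anonymous')
print(u == User("u1", "anonymous"))  # __eq__ compares by value: True
```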

from dataclasses import dataclass
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable
from langgraph.store.memory import InMemoryStore

@dataclass
class Context:
    user_id: str

@wrap_model_call
def store_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on Store preferences."""
    user_id = request.runtime.context.user_id

    # Read from Store: get user's enabled features
    store = request.runtime.store
    feature_flags = store.get(("features",), user_id)

    if feature_flags:
        enabled_features = feature_flags.value.get("enabled_tools", [])
        # Only include tools that are enabled for this user
        tools = [t for t in request.tools if t.name in enabled_features]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[search_tool, analysis_tool, export_tool],
    middleware=[store_based_tools],
    context_schema=Context,
    store=InMemoryStore()
)

Based on Runtime Context: injected per request (user role, request-level permissions), passed by the caller at invoke time

from dataclasses import dataclass
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable

@dataclass
class Context:
    user_role: str

@wrap_model_call
def context_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on Runtime Context permissions."""
    # Read from Runtime Context: get user role
    if request.runtime is None or request.runtime.context is None:
        # If no context provided, default to viewer (most restrictive)
        user_role = "viewer"
    else:
        user_role = request.runtime.context.user_role

    if user_role == "admin":
        # Admins get all tools
        pass
    elif user_role == "editor":
        # Editors can't delete
        tools = [t for t in request.tools if t.name != "delete_data"]
        request = request.override(tools=tools)
    else:
        # Viewers get read-only tools
        tools = [t for t in request.tools if t.name.startswith("read_")]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[read_data, write_data, delete_data],
    middleware=[context_based_tools],
    context_schema=Context
)

Differences between State, Store, and Runtime Context

When all three are present:

A user message arrives
        ↓
Runtime Context: are they a VIP?                  ← known as soon as the request arrives
        ↓
Store: which premium features have they enabled?  ← looked up from the database
        ↓
State: have they verified in this conversation?   ← produced during this conversation
        ↓
All three checks pass → expose the checkout tool
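The flow above can be sketched as a simple gate function. This is pure Python with illustrative field names (`is_vip`, `authenticated`, etc. are hypothetical), not a LangChain API:

```python
# Each parameter mirrors one of the three data sources.
def allow_checkout(runtime_context: dict, store_features: set, state: dict) -> bool:
    # Runtime Context: known the moment the request arrives
    if not runtime_context.get("is_vip", False):
        return False
    # Store: persistent per-user configuration, looked up externally
    if "checkout" not in store_features:
        return False
    # State: produced during this conversation
    return state.get("authenticated", False)

print(allow_checkout({"is_vip": True}, {"checkout"}, {"authenticated": True}))  # True
print(allow_checkout({"is_vip": True}, {"checkout"}, {}))                       # False
```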

Runtime tool registration

At create_agent time the tool pool is empty or incomplete; the tools themselves are produced or discovered at runtime. This is typically used in MCP scenarios.

wrap_model_call - adds the dynamic tools to the model request

wrap_tool_call - handles execution of the dynamically added tools

MCP_server.py:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def calculate_tip(bill_amount: float, tip_percentage: float = 20.0) -> str:
    """Calculate tip for a bill."""
    tip = bill_amount * (tip_percentage / 100)
    return f"Tip: ${tip:.2f}, Total: ${bill_amount + tip:.2f}"

if __name__ == "__main__":
    mcp.run(transport="stdio")
The agent side:

from langchain.tools import tool
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"{city} weather is good"


class DynamicToolMiddleware(AgentMiddleware):

    def __init__(self, mcp_tools: list):
        self.mcp_tools = mcp_tools  # tools fetched from the MCP server at runtime

    def wrap_model_call(self, request, handler):
        # dynamically append the MCP tools
        updated = request.override(tools=[*request.tools, *self.mcp_tools])
        return handler(updated)

    def wrap_tool_call(self, request, handler):
        # find the matching MCP tool and execute it
        mcp_tool = next(
            (t for t in self.mcp_tools if t.name == request.tool_call["name"]),
            None
        )
        if mcp_tool:
            return handler(request.override(tool=mcp_tool))
        return handler(request)

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],                            # static tool
    middleware=[DynamicToolMiddleware(mcp_tools)],  # MCP tools injected dynamically
)

Why are two hooks needed?

wrap_model_call → tells the model "this tool exists"

wrap_tool_call → tells the framework "this is how the tool runs"
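The split between advertising a tool and executing it can be sketched in plain Python (the registry and function names below are hypothetical, not LangChain APIs): the model only ever sees names and schemas, while execution needs the actual callable.

```python
# Tools discovered at runtime: name → callable.
runtime_tools = {"calculate_tip": lambda bill, pct=20.0: bill * pct / 100}

def advertise(static_names):
    # the wrap_model_call side: extend the tool list the model is told about
    return static_names + list(runtime_tools)

def execute(name, *args):
    # the wrap_tool_call side: route the call to the dynamically registered function
    fn = runtime_tools.get(name)
    if fn is None:
        raise KeyError(f"unknown tool: {name}")
    return fn(*args)

print(advertise(["get_weather"]))       # ['get_weather', 'calculate_tip']
print(execute("calculate_tip", 100.0))  # 20.0
```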

Tool error handling

Create the middleware with @wrap_tool_call

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_tool_call
from langchain.messages import ToolMessage


@wrap_tool_call
def handle_tool_errors(request, handler):
    """Handle tool execution errors with custom messages."""
    try:
        return handler(request)
    except Exception as e:
        # Return a custom error message to the model
        return ToolMessage(
            content=f"Tool error: Please check your input and try again. ({str(e)})",
            tool_call_id=request.tool_call["id"]
        )

agent = create_agent(
    model="gpt-5.4",
    tools=[search, get_weather],
    middleware=[handle_tool_errors]
)

System prompt

The system prompt is the background instruction given to the model before the conversation starts, telling it "who you are and how you should behave."

agent = create_agent(
    model,
    tools,
    system_prompt="You are a helpful assistant. Be concise and accurate."
)

When no system_prompt is provided, the agent infers its task directly from the messages.

The system_prompt parameter accepts either a str or a SystemMessage. A SystemMessage gives finer-grained control over the prompt structure, which is useful for provider-specific features such as Anthropic's prompt caching.

from langchain.agents import create_agent
from langchain.messages import SystemMessage, HumanMessage

literary_agent = create_agent(
    model="anthropic:claude-sonnet-4-5",  # cache_control below is an Anthropic-specific feature
    system_prompt=SystemMessage(
        content=[
            {
                "type": "text",
                "text": "You are an AI assistant tasked with analyzing literary works.",
            },
            {
                "type": "text",
                "text": "<the entire contents of 'Pride and Prejudice'>",
                "cache_control": {"type": "ephemeral"}
            }
        ]
    )
)

result = literary_agent.invoke(
    {"messages": [HumanMessage("Analyze the major themes in 'Pride and Prejudice'.")]}
)

The {"type": "ephemeral"} value in the cache_control field tells Anthropic to cache that content block, reducing latency and cost on repeated requests that share the same system prompt.

Dynamic prompt

The @dynamic_prompt decorator creates middleware that generates the system prompt from the model request.

from typing import TypedDict

from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest

class Context(TypedDict):
    user_role: str

@dynamic_prompt
def user_role_prompt(request: ModelRequest) -> str:
    """Generate system prompt based on user role."""
    user_role = request.runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        return f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        return f"{base_prompt} Explain concepts simply and avoid jargon."

    return base_prompt

agent = create_agent(
    model="gpt-5.4",
    tools=[web_search],
    middleware=[user_role_prompt],
    context_schema=Context
)

# The system prompt will be set dynamically based on context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain machine learning"}]},
    context={"user_role": "expert"}
)

Name

Give the agent an optional name. When the agent joins a multi-agent system as a subgraph, this name is used as the node identifier:

agent = create_agent(
    model,
    tools,
    name="research_assistant"
)

Invocation

Run the agent: pass in the user message and wait for the final result.

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
)

invoke: wait for the final result

result = agent.invoke({
    "messages": [HumanMessage(content="What's the weather in San Francisco?")]
})
print(result["messages"][-1].content)  # grab the last message directly

stream: return intermediate steps as they happen

for chunk in agent.stream({
    "messages": [HumanMessage(content="Search for AI news and summarize")]
}, stream_mode="values"):
    latest = chunk["messages"][-1]
    if latest.content:
        print(latest.content)
    elif latest.tool_calls:
        print(f"Calling tools: {[tc['name'] for tc in latest.tool_calls]}")

Advanced concepts

Structured output (returning a fixed JSON shape, etc.)

Structured output with ToolStrategy

import asyncio
from pydantic import BaseModel
from langchain.tools import tool
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from langchain.messages import HumanMessage

@tool
def search(query: str) -> str:
    """Search for information."""
    return "John Doe, john@example.com, 123-4567"

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str

agent = create_agent(
    model="gpt-4o",
    tools=[search],
    response_format=ToolStrategy(ContactInfo)
)

async def main():
    result = await agent.ainvoke({
        "messages": [HumanMessage(content="Search for John's contact info")]
    })
    contact = result["structured_response"]
    print(contact)          # ContactInfo(name='John Doe', ...)
    print(contact.name)     # John Doe
    print(contact.email)    # john@example.com

asyncio.run(main())

Structured output with ProviderStrategy

In some cases you want the agent to return output in a specific format. structured_response is then a Pydantic object, so you can access fields as .name, .email, and so on.

from langchain.agents.structured_output import ProviderStrategy

agent = create_agent(
    model="gpt-4o",
    tools=[search],
    response_format=ProviderStrategy(ContactInfo)
)
result = await agent.ainvoke({...})
result["messages"][-1].content  # the free-text answer
result["structured_response"]   # the structured data ← here
|               | ToolStrategy                                                   | ProviderStrategy                                  |
| ------------- | -------------------------------------------------------------- | ------------------------------------------------- |
| Mechanism     | a tool call is made and its parsed arguments become the output | the model emits native structured JSON (JSON mode) directly |
| Compatibility | any model that supports tool calling                           | only models with native structured-output support |
| Reliability   | stable                                                         | more reliable, but depends on model support       |
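Conceptually, ToolStrategy treats the output schema as a "tool": the model emits a tool call, and the call's arguments are validated into the schema object. A stdlib sketch of that parsing step, using a dataclass in place of the real Pydantic model purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ContactInfo:
    name: str
    email: str
    phone: str

# pretend the model emitted a tool call with these arguments
tool_call_args = {"name": "John Doe", "email": "john@example.com", "phone": "123-4567"}
contact = ContactInfo(**tool_call_args)
print(contact.name)   # John Doe
print(contact.email)  # john@example.com
```

In the real framework, Pydantic additionally validates field types and reports errors the model can retry on.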

TODO: read the ProviderStrategy source code

Memory

Short-term memory

Maintained automatically by the agent by default; it is simply the messages list, appended to as the conversation proceeds.

Implemented via middleware

import asyncio
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import AgentMiddleware
from langchain.tools import tool
from langchain.messages import HumanMessage
from typing import Any

class CustomState(AgentState):
    user_preferences: dict

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"{city} weather is good"

class PreferencesMiddleware(AgentMiddleware):
    state_schema = CustomState  # ← the State is defined on the Middleware

    def before_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        # state can be read or modified before the model is called
        prefs = state.get("user_preferences", {})
        print(f"Current user preferences: {prefs}")

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    middleware=[PreferencesMiddleware()]  # ← the State is registered together with the Middleware
)

async def main():
    result = await agent.ainvoke({
        "messages": [HumanMessage("What's the weather in Tokyo?")],
        "user_preferences": {"style": "technical"}
    })
    print(result["messages"][-1].content)

asyncio.run(main())

Via state_schema (the shorthand)

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    state_schema=CustomState  # ← passed directly, without going through a Middleware
)

The difference:

Middleware approach → a drawer: everything related stays together

state_schema approach → things scattered across the desk, but all still usable

Long-term memory

Long-term memory is implemented together with a Store.

import asyncio
from langgraph.store.memory import InMemoryStore
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import AgentMiddleware
from langchain.tools import tool
from langchain.messages import HumanMessage
from dataclasses import dataclass
from typing import Any

@dataclass
class Context:
    user_id: str

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"{city} weather is good"

class LongTermMemoryMiddleware(AgentMiddleware):

    def before_model(self, state, runtime) -> dict[str, Any] | None:
        user_id = runtime.context.user_id
        store = runtime.store

        # read past memories from the Store
        memory = store.get(("memory",), user_id)
        if memory:
            print(f"Past memory: {memory.value}")

    def after_model(self, state, runtime) -> dict[str, Any] | None:
        user_id = runtime.context.user_id
        store = runtime.store

        # after the turn, write the important bits back to the Store
        last_message = state["messages"][-1]
        store.put(("memory",), user_id, {"last_topic": last_message.content})

store = InMemoryStore()

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    middleware=[LongTermMemoryMiddleware()],
    context_schema=Context,
    store=store
)

async def main():
    # first conversation
    await agent.ainvoke(
        {"messages": [HumanMessage("My favorite city is Tokyo")]},
        context={"user_id": "user_001"}
    )

    # second conversation (a new session, but it remembers the previous one)
    result = await agent.ainvoke(
        {"messages": [HumanMessage("What city did I mention before?")]},
        context={"user_id": "user_001"}
    )
    print(result["messages"][-1].content)

asyncio.run(main())
Published today at 17:13, from Henan
