A single callback handler covers LangChain and every framework built on top of it, including Langflow, LangGraph, and LangServe. Since v0.4, the callback has two modes:
| Mode | When to use | How it works |
|---|---|---|
| Auto-detect | Multi-agent LangGraph workflows | Omit server_name — agents, parent links, and prompts are detected automatically |
| Fixed | Single-agent LangChain tools | Pass server_name and agent_name explicitly |
Installation
langchain-core is sufficient: it ships BaseCallbackHandler and is installed automatically with langgraph. The full langchain package is only needed if you use langchain directly.

Auto-detect mode (recommended for LangGraph)
Omit server_name to enable auto-detection. The callback then automatically:
- Detects agent names from LangGraph graph names via on_chain_start
- Builds the parent-child span tree across agent handoffs
- Captures the first human message as the session prompt
- Links sub-agent spans to their parent tool call via a thread-local stack
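The first bullet can be pictured in plain Python. This is a minimal sketch of how agent names could be derived from on_chain_start events, not the real handler's internals; the class name and the AGENT_NODES set are assumptions for illustration:

```python
# Sketch only: deriving the current agent name from on_chain_start,
# where LangGraph reports node names in the serialized chain payload.
class AutoDetectSketch:
    AGENT_NODES = {"supervisor", "analyst"}  # hypothetical graph node names

    def __init__(self):
        self.current_agent = None

    def on_chain_start(self, serialized, inputs, **kwargs):
        # Treat a chain run whose name matches a known graph node as an agent.
        name = (serialized or {}).get("name")
        if name in self.AGENT_NODES:
            self.current_agent = name

sketch = AutoDetectSketch()
sketch.on_chain_start({"name": "supervisor"}, {})
```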
What auto-detect captures
For a multi-agent LangGraph workflow like supervisor -> call_analyst -> analyst -> read_query, auto-detect produces a span tree in which each agent becomes its own span (span_type="agent"). Tool calls are nested under their enclosing agent, and cross-ainvoke calls are linked via a thread-local tool stack that works across separate callback instances.
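As an illustration, the span tree for that workflow could be pictured as the nested structure below. The field names follow this page; the exact nesting shape and serialization are assumptions, not the wire format:

```python
# Illustrative only: a span tree for
# supervisor -> call_analyst -> analyst -> read_query.
span_tree = {
    "name": "supervisor", "span_type": "agent", "children": [
        {"name": "call_analyst", "span_type": "tool_call", "children": [
            {"name": "analyst", "span_type": "agent", "children": [
                {"name": "read_query", "span_type": "tool_call", "children": []},
            ]},
        ]},
    ],
}
```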
Fixed mode (backward-compatible)
Pass server_name to use fixed mode, the v0.3 behavior in which every tool span gets the same server and agent name.
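Fixed mode can be sketched with a stand-in class that stamps every tool span with the constructor values, as described above. This is an illustration of the behavior, not the real handler:

```python
# Stand-in sketching fixed mode: every tool span carries the same
# server_name and agent_name passed to the constructor.
class FixedModeSketch:
    def __init__(self, server_name, agent_name):
        self.server_name = server_name
        self.agent_name = agent_name
        self.spans = []

    def on_tool_start(self, tool_name, input_str):
        self.spans.append({
            "tool_name": tool_name,
            "server_name": self.server_name,  # always the constructor value
            "agent_name": self.agent_name,    # always the constructor value
            "span_type": "tool_call",
            "input_str": input_str,
        })

cb = FixedModeSketch(server_name="sql-server", agent_name="analyst")
cb.on_tool_start("read_query", '{"q": "SELECT 1"}')
```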
Sending data to a project
Every span must be tagged with a project_id so it appears in the right project dashboard. Get your project ID from Settings -> Projects (click the ID pill to copy), then pass it to the client:
.env:
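A minimal .env sketch; the variable names here are assumptions, so verify the exact keys against your project settings:

```shell
# Hypothetical variable names -- check your own setup.
LANGSIGHT_API_KEY=...
LANGSIGHT_PROJECT_ID=...
```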
The API key authenticates you globally. The project_id is what tags every span and routes it to the right project. Without it, spans are stored with no project and won't appear in any project dashboard.

Prompt capture
The callback captures the user's prompt and the agent's final answer, both shown on the session detail page.

Auto-capture (default in auto-detect mode)
on_chat_model_start automatically captures the first human message in the conversation as the session prompt. No code changes needed.
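The auto-capture step can be sketched with a stand-in. Real handlers receive LangChain message objects in on_chat_model_start; plain dicts stand in here so the example is self-contained:

```python
# Sketch of first-human-message capture in on_chat_model_start.
class PromptCaptureSketch:
    def __init__(self):
        self.session_input = None

    def on_chat_model_start(self, serialized, messages, **kwargs):
        if self.session_input is not None:
            return  # only the first human message becomes the session prompt
        for batch in messages:          # messages is a list of message batches
            for msg in batch:
                if msg.get("type") == "human":
                    self.session_input = msg.get("content")
                    return

cb = PromptCaptureSketch()
cb.on_chat_model_start({}, [[
    {"type": "system", "content": "You are helpful."},
    {"type": "human", "content": "Summarize Q2 churn."},
]])
```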
Explicit capture
Override auto-capture, or capture explicitly in fixed mode: set_input() overrides any auto-captured prompt, and set_output() stores the agent's final answer. Both are displayed in the session detail view in the dashboard.
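A runnable stand-in for the explicit-capture calls; the real callback exposes set_input() and set_output() as documented here, and this class only mimics that surface:

```python
# Stand-in sketching the documented set_input()/set_output() surface.
class ExplicitCaptureSketch:
    def __init__(self):
        self.session_input = None   # auto-captured or explicit prompt
        self.session_output = None  # agent's final answer

    def set_input(self, prompt):
        self.session_input = prompt  # overrides any auto-captured prompt

    def set_output(self, answer):
        self.session_output = answer

cb = ExplicitCaptureSketch()
cb.session_input = "auto-captured prompt"            # simulate auto-capture
cb.set_input("Which customers churned last month?")  # explicit override wins
cb.set_output("12 customers churned in June.")
```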
LangChain agents
LangGraph
Langflow
In Langflow, add the callback to any component that uses tools:
- Open your flow in the Langflow UI
- Click on the Agent or Tool component
- Under Advanced Settings, add to the callbacks list:
Cross-ainvoke parent linking
When a supervisor agent calls a tool (e.g., call_analyst) that internally invokes a sub-agent via ainvoke(), the callback automatically links the sub-agent's spans to the parent tool call. This works via a module-level thread-local stack shared across all callback instances; no manual parent_span_id wiring is needed.
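The linking mechanism can be sketched with a module-level threading.local holding a stack of open tool spans. This is a plausible mechanism matching the description above, not the actual source:

```python
import threading

# Module-level thread-local shared by all callback instances (sketch).
_tool_stack = threading.local()

def _stack():
    if not hasattr(_tool_stack, "spans"):
        _tool_stack.spans = []
    return _tool_stack.spans

class LinkingSketch:
    def on_tool_start(self, span_id):
        _stack().append(span_id)  # open tool span becomes the parent candidate

    def on_tool_end(self):
        _stack().pop()

    def parent_span_id(self):
        # A sub-agent invoked inside a tool sees that tool at the stack top,
        # even when its callback is a different instance on the same thread.
        return _stack()[-1] if _stack() else None

supervisor_cb = LinkingSketch()
sub_agent_cb = LinkingSketch()  # separate instance, same thread
supervisor_cb.on_tool_start("call_analyst-span-1")
parent = sub_agent_cb.parent_span_id()
```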
LangSightLangGraphCallback alias
LangSightLangGraphCallback is now an alias for LangSightLangChainCallback. Both imports work:
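The alias relationship can be demonstrated with a stand-in class; the real import path is not shown here and is left as an assumption:

```python
# Stand-in demonstrating the alias: both names refer to one class.
class LangSightLangChainCallback:  # stand-in for the real handler
    pass

# In v0.4 the LangGraph name is a plain alias, not a subclass:
LangSightLangGraphCallback = LangSightLangChainCallback
```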
What gets traced
| Field | Auto-detect mode | Fixed mode |
|---|---|---|
| tool_name | Tool function name | Tool function name |
| server_name | Agent name (auto-detected) | Value from constructor |
| agent_name | Auto-detected from graph names | Value from constructor |
| span_type | "agent" for agents, "tool_call" for tools | "tool_call" |
| parent_span_id | Auto-linked via chain hierarchy + tool stack | Manual or None |
| input_str | Tool input arguments | Tool input arguments |
| output | Tool result | Tool result |
| session_input | First human message (auto) or set_input() | set_input() only |
| session_output | set_output() | set_output() |
| latency_ms | Auto-computed | Auto-computed |
| status | success, error, or timeout | success, error, or timeout |
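Putting the fields together, a single traced tool span might look like the record below. The values are illustrative and the exact serialization is an assumption:

```python
# Illustrative span record using the fields listed above.
span = {
    "tool_name": "read_query",
    "server_name": "analyst",       # auto-detect: the enclosing agent's name
    "agent_name": "analyst",
    "span_type": "tool_call",
    "parent_span_id": "span-0042",  # linked to the enclosing tool span
    "input_str": '{"query": "SELECT count(*) FROM churned"}',
    "output": "[(12,)]",
    "session_input": "Which customers churned last month?",
    "session_output": "12 customers churned in June.",
    "latency_ms": 87,
    "status": "success",
}
```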
View traces
MCP servers in LangChain
If your LangChain agent uses MCP servers as tools, LangSight gives you action-layer visibility:
- Tool call traces via this callback
- MCP server health via langsight mcp-health
- Security scanning via langsight security-scan