Since v0.4, LangSightLangGraphCallback is an alias for LangSightLangChainCallback. The unified callback handles both LangChain and LangGraph automatically with auto-detect mode. This page covers the LangGraph-specific setup. For the full reference, see the LangChain integration.

Install

pip install langsight langgraph
langchain-core ships with langgraph — you don’t need the full langchain package. LangSight’s callback will find BaseCallbackHandler in langchain-core automatically.
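If you want to verify that langchain-core is present before wiring up the callback, a quick standard-library check works without importing anything heavy (this is a generic verification sketch, not part of the LangSight API):

```python
import importlib.util

def has_langchain_core() -> bool:
    """Return True if the langchain-core package is importable."""
    return importlib.util.find_spec("langchain_core") is not None

if has_langchain_core():
    print("langchain-core found; BaseCallbackHandler is available")
else:
    print("langchain-core missing; install it with: pip install langgraph")
```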

Setup (auto-detect mode)

The recommended setup for LangGraph omits server_name, enabling auto-detect mode. Agent names, parent links, and prompts are detected automatically from graph names.
import os
import langsight
from langsight.sdk import LangSightClient
from langsight.integrations.langgraph import LangSightLangGraphCallback

client = LangSightClient(
    url=os.getenv("LANGSIGHT_URL", "http://localhost:8000"),
    api_key=os.getenv("LANGSIGHT_API_KEY"),
    project_id=os.getenv("LANGSIGHT_PROJECT_ID"),
)

# session_id is generated and managed by langsight.session()
async with langsight.session(agent_name="my-graph") as session_id:
    callback = LangSightLangGraphCallback(
        client=client,
        session_id=session_id,
    )
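The session_id yielded by langsight.session() is just an opaque identifier scoped to the with-block. Conceptually, the pattern is an async context manager that mints an ID on entry and cleans up on exit. A hypothetical, simplified sketch of that pattern (illustrative only, not LangSight's actual implementation):

```python
import asyncio
import uuid
from contextlib import asynccontextmanager

@asynccontextmanager
async def session(agent_name: str):
    """Sketch: mint a session id on entry, clean up on exit."""
    session_id = f"sess-{uuid.uuid4().hex[:8]}"
    try:
        yield session_id  # everything inside the block shares this id
    finally:
        pass  # a real SDK would flush buffered spans here

async def main() -> str:
    async with session(agent_name="my-graph") as sid:
        return sid

sid = asyncio.run(main())
print(sid)  # e.g. sess-3f9a1c2e
```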

Setup (fixed mode)

If you don’t need auto-detection, pass server_name for v0.3-compatible behavior:
callback = LangSightLangGraphCallback(
    client=client,
    agent_name="my-graph",
    server_name="my-tools",
    session_id="sess-001",
    trace_id="trace-abc",
)

Usage

Pass the callback in the config of your LangGraph invocation:
result = await graph.ainvoke(
    {"input": "Analyze sales data for Q4"},
    config={"callbacks": [callback]},
)
Works with synchronous graph.invoke() too.

Multi-agent workflows

In auto-detect mode, each named graph in your LangGraph workflow becomes an agent span. Cross-ainvoke parent linking happens automatically via a thread-local tool stack:
# Supervisor graph calls "call_analyst" tool, which invokes analyst graph
result = await supervisor.ainvoke(
    {"input": "Analyze Q4 sales"},
    config={"callbacks": [callback]},
)
Result in the dashboard:
Session: sess-001
├── supervisor (agent)
│   ├── call_analyst                 120ms  success
│   │   └── analyst (agent)
│   │       └── read_query            42ms  success
│   └── summarize                     15ms  success
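The parent linking shown above can be illustrated with a minimal thread-local stack: when a tool starts, its span id is pushed; any agent span started while the stack is non-empty adopts the top entry as its parent. A simplified sketch of the idea (assumed mechanics, not LangSight's internal code):

```python
import threading

_local = threading.local()

def _stack() -> list:
    if not hasattr(_local, "tool_stack"):
        _local.tool_stack = []
    return _local.tool_stack

def on_tool_start(span_id: str) -> None:
    _stack().append(span_id)  # this tool is now the parent candidate

def on_tool_end() -> None:
    _stack().pop()

def current_parent():
    """Parent span id for a new agent span, or None at the top level."""
    stack = _stack()
    return stack[-1] if stack else None

# The supervisor's "call_analyst" tool wraps the nested analyst invocation:
assert current_parent() is None          # supervisor starts at the top level
on_tool_start("span-call-analyst")
parent = current_parent()                # analyst agent links to the tool span
on_tool_end()
print(parent)  # span-call-analyst
```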

Migration from v0.3

The import path is unchanged — no code changes needed:
# This still works (backward-compatible alias)
from langsight.integrations.langgraph import LangSightLangGraphCallback

# Or use the canonical import
from langsight.integrations.langchain import LangSightLangChainCallback
The only behavioral change: if you omit server_name, the callback now enters auto-detect mode instead of defaulting to "langgraph". To preserve v0.3 behavior exactly, pass server_name explicitly.
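The change boils down to a sentinel-default pattern: server_name=None now selects auto-detect instead of a fixed default. A hypothetical sketch of the dispatch logic (illustrative names, not the SDK's source):

```python
from typing import Optional

AUTO_DETECT = "auto-detect"

def resolve_mode(server_name: Optional[str] = None) -> str:
    """v0.4: omitting server_name enables auto-detect mode."""
    if server_name is None:
        return AUTO_DETECT          # new default behavior
    return f"fixed:{server_name}"   # v0.3-compatible behavior

print(resolve_mode())               # auto-detect
print(resolve_mode("my-tools"))     # fixed:my-tools
```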

What gets traced

| Field          | Auto-detect mode                            | Fixed mode                    |
|----------------|---------------------------------------------|-------------------------------|
| tool_name      | Tool function name                          | Tool function name            |
| server_name    | Agent name (auto-detected)                  | Value from constructor        |
| agent_name     | Auto-detected from graph names              | Value from constructor        |
| span_type      | "agent" for agents, "tool_call" for tools   | "tool_call"                   |
| parent_span_id | Auto-linked                                 | Manual or None                |
| latency_ms     | Auto-computed                               | Auto-computed                 |
| status         | success, error, or timeout                  | success, error, or timeout    |
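latency_ms and status are derived the same way in both modes; the general pattern is a monotonic timer around the call plus exception mapping. A minimal stdlib sketch under those assumptions (TimeoutError maps to timeout, any other exception to error):

```python
import time

def traced_call(fn, *args, **kwargs):
    """Run fn and return (result, latency_ms, status)."""
    start = time.monotonic()
    try:
        result = fn(*args, **kwargs)
        status = "success"
    except TimeoutError:
        result, status = None, "timeout"
    except Exception:
        result, status = None, "error"
    latency_ms = (time.monotonic() - start) * 1000
    return result, latency_ms, status

result, ms, status = traced_call(lambda: 2 + 2)
print(result, status)  # 4 success
```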

View traces

langsight sessions --id sess-001
Or open http://localhost:3003/sessions/sess-001.