
Documentation Index

Fetch the complete documentation index at: https://docs.langsight.dev/llms.txt

Use this file to discover all available pages before exploring further.

A single callback handler covers LangChain and every framework built on top of it — including Langflow, LangGraph, and LangServe. Since v0.4, the callback has two modes:
| Mode | When to use | How it works |
|------|-------------|--------------|
| Auto-detect | Multi-agent LangGraph workflows | Omit `server_name` — agents, parent links, and prompts are detected automatically |
| Fixed | Single-agent LangChain tools | Pass `server_name` and `agent_name` explicitly |

Installation

# LangGraph users (lightweight — langchain-core comes with langgraph)
pip install langsight langgraph

# LangChain users (full package)
pip install langsight langchain
langchain-core is sufficient — it ships BaseCallbackHandler and is installed automatically with langgraph. The full langchain package is only needed if you use langchain directly.

Auto-detect mode

Omit server_name to enable auto-detection. The callback automatically:
  • Detects agent names from LangGraph graph names via on_chain_start
  • Builds the parent-child span tree across agent handoffs
  • Captures the first human message as the session prompt
  • Links sub-agent spans to their parent tool call via a thread-local stack
import langsight
from langsight.sdk import LangSightClient
from langsight.integrations.langchain import LangSightLangChainCallback

client = LangSightClient(
    url="http://localhost:8000",
    project_id="my-project",
)

# No server_name → auto-detect mode
# session_id is generated and managed by langsight.session()
async with langsight.session(agent_name="supervisor") as session_id:
    cb = LangSightLangChainCallback(
        client=client,
        session_id=session_id,
    )

    # Pass to ANY agent — works across supervisor + sub-agents
    result = await supervisor.ainvoke(
        {"input": "Analyze Q4 sales"},
        config={"callbacks": [cb]},
    )

What auto-detect captures

For a multi-agent LangGraph workflow like supervisor -> call_analyst -> analyst -> read_query, auto-detect produces this span tree:
Session: sess-abc123
└── supervisor (agent span)
    ├── call_analyst                      120ms  success
    │   └── analyst (agent span)
    │       └── read_query                 42ms  success
    └── summarize                          15ms  success
Each named graph in your LangGraph workflow becomes an agent span (span_type="agent"). Tool calls are nested under their enclosing agent. Cross-ainvoke calls are linked via a thread-local tool stack that works across separate callback instances.
Auto-detect filters out framework-internal names like RunnableSequence, ChannelWrite, ChatOpenAI, etc. Only user-defined graph/agent names become agent spans.
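The filter can be pictured as a simple deny-set check. A sketch, assuming a small set of runnable class names (the real exclusion list lives inside the callback; these names are assumptions):

```python
# Hypothetical sketch of the name filter applied in auto-detect mode.
# The actual exclusion set is internal to the callback.
FRAMEWORK_INTERNAL = {
    "RunnableSequence", "RunnableParallel", "ChannelWrite",
    "ChannelRead", "ChatOpenAI", "ChatPromptTemplate",
}

def is_agent_span(chain_name: str) -> bool:
    """Treat a chain name as a user-defined agent only if it is
    non-empty and not a known framework-internal runnable."""
    return bool(chain_name) and chain_name not in FRAMEWORK_INTERNAL
```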

Fixed mode (backward-compatible)

Pass server_name to use fixed mode — the v0.3 behavior where every tool span gets the same server and agent name.
callback = LangSightLangChainCallback(
    client=client,
    server_name="my-tools",        # shown in langsight sessions
    agent_name="my-agent",         # optional
    session_id="sess-abc123",      # optional — groups calls into a session
)

Sending data to a project

Every span must be tagged with a project_id so it appears in the right project dashboard. Get your project ID from Settings -> Projects (click the ID pill to copy), then pass it to the client:
import os
from langsight.sdk import LangSightClient

client = LangSightClient(
    url=os.getenv("LANGSIGHT_URL", "http://localhost:8000"),
    api_key=os.getenv("LANGSIGHT_API_KEY"),
    project_id=os.getenv("LANGSIGHT_PROJECT_ID"),  # copy from Settings -> Projects
)
Or fetch your project ID via the API:
curl http://localhost:8000/api/projects \
  -H "X-API-Key: your-api-key" | python3 -m json.tool
Add to your .env:
LANGSIGHT_URL=http://localhost:8000
LANGSIGHT_API_KEY=ls_your_key_here
LANGSIGHT_PROJECT_ID=abc123def456...
The API key authenticates you globally. The project_id is what tags every span and routes it to the right project. Without it, spans are stored with no project and won’t appear in any project dashboard.
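If you prefer not to depend on a loader like python-dotenv, a minimal stdlib sketch that reads simple KEY=VALUE lines from a .env-style file into the process environment (no quoting or variable expansion, an assumption for brevity):

```python
import os

def load_env_file(path: str) -> dict:
    """Read simple KEY=VALUE lines from a .env-style file into os.environ."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values
```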

Prompt capture

The callback captures the user’s prompt and the agent’s final answer, shown in the session detail page.

Auto-capture (default in auto-detect mode)

on_chat_model_start automatically captures the first human message in the conversation as the session prompt. No code changes needed.
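Conceptually, auto-capture behaves like this sketch; the real hook is LangChain's on_chat_model_start, and the (role, content) message shape here is a simplified stand-in, not an actual LangChain message type:

```python
class PromptCaptureSketch:
    """Remember the first human message seen in a session (illustration only)."""

    def __init__(self):
        self.session_input = None

    def on_chat_model_start(self, messages):
        # messages: list of (role, content) pairs in this simplified model
        if self.session_input is not None:
            return  # only the FIRST human message becomes the session prompt
        for role, content in messages:
            if role == "human":
                self.session_input = content
                break
```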

Explicit capture

Override auto-capture or use in fixed mode:
cb.set_input("What were Q4 sales for EMEA?")

result = await agent.ainvoke(input, config={"callbacks": [cb]})

cb.set_output(result["output"])
set_input() overrides any auto-captured prompt. set_output() stores the agent’s final answer. Both are displayed in the session detail view in the dashboard.

LangChain agents

from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=[callback],          # one line
)

agent.run("What is the weather in Berlin?")

LangGraph

from langgraph.graph import StateGraph

graph = StateGraph(...)
# ... define your graph ...

app = graph.compile()   # compile the graph before invoking it

# Pass callback in config
app.invoke(
    {"input": "your task"},
    config={"callbacks": [callback]},   # one line
)

Langflow

In Langflow, add the callback to any component that uses tools:
  1. Open your flow in the Langflow UI
  2. Click on the Agent or Tool component
  3. Under Advanced Settings, add to the callbacks list:
import os

from langsight.integrations.langchain import LangSightLangChainCallback
from langsight.sdk import LangSightClient

callback = LangSightLangChainCallback(
    client=LangSightClient(
        url=os.getenv("LANGSIGHT_URL", "http://localhost:8000"),
        project_id=os.getenv("LANGSIGHT_PROJECT_ID"),
        redact_payloads=True,
    ),
    agent_name="langflow-agent",
)

Cross-ainvoke parent linking

When a supervisor agent calls a tool (e.g., call_analyst) that internally invokes a sub-agent via ainvoke(), the callback automatically links the sub-agent’s spans to the parent tool call. This works via a module-level thread-local stack shared across all callback instances.
# In supervisor graph — "call_analyst" tool defined as:
async def call_analyst(query: str) -> str:
    result = await analyst_graph.ainvoke(
        {"input": query},
        config={"callbacks": [cb]},  # same callback instance
    )
    return result["output"]
Result in the dashboard:
supervisor
  └── call_analyst          120ms
        └── analyst (agent)
              └── read_query  42ms
No manual parent_span_id wiring needed. The thread-local stack handles it.
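The linking mechanism can be sketched with a module-level threading.local() stack. The helper names below are illustrative, not the callback's actual internals:

```python
import threading
from typing import List, Optional

# One stack per thread, shared by every callback instance in the process
# (a sketch of the module-level thread-local described above).
_tool_stack = threading.local()

def _stack() -> List[str]:
    if not hasattr(_tool_stack, "spans"):
        _tool_stack.spans = []
    return _tool_stack.spans

def push_tool_span(span_id: str) -> None:
    """Called when a tool starts: its span becomes the current parent."""
    _stack().append(span_id)

def pop_tool_span() -> None:
    """Called when the tool finishes."""
    _stack().pop()

def current_parent_span() -> Optional[str]:
    """Sub-agent spans adopt this as parent_span_id when it is set."""
    return _stack()[-1] if _stack() else None
```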

LangSightLangGraphCallback alias

LangSightLangGraphCallback is now an alias for LangSightLangChainCallback. Both imports work:
# Canonical import (recommended)
from langsight.integrations.langchain import LangSightLangChainCallback

# Backward-compatible import (still works)
from langsight.integrations.langgraph import LangSightLangGraphCallback
Both point to the same unified callback class. The separate LangGraph callback from v0.3 has been merged.

What gets traced

| Field | Auto-detect mode | Fixed mode |
|-------|------------------|------------|
| tool_name | Tool function name | Tool function name |
| server_name | Agent name (auto-detected) | Value from constructor |
| agent_name | Auto-detected from graph names | Value from constructor |
| span_type | "agent" for agents, "tool_call" for tools | "tool_call" |
| parent_span_id | Auto-linked via chain hierarchy + tool stack | Manual or None |
| input_str | Tool input arguments | Tool input arguments |
| output | Tool result | Tool result |
| session_input | First human message (auto) or set_input() | set_input() only |
| session_output | set_output() | set_output() |
| latency_ms | Auto-computed | Auto-computed |
| status | success, error, or timeout | success, error, or timeout |
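To make the table concrete, here is a sketch of assembling one span record with those field names; the exact wire format LangSight uses is an assumption:

```python
# Illustrative span record using the field names from the table above.
# The real serialization format is internal to the SDK.
ALLOWED_STATUSES = {"success", "error", "timeout"}

def make_span(tool_name, span_type="tool_call", *, server_name=None,
              agent_name=None, parent_span_id=None, input_str=None,
              output=None, latency_ms=0.0, status="success"):
    """Assemble one span dict; status is validated against the allowed set."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"status must be one of {sorted(ALLOWED_STATUSES)}")
    return {
        "tool_name": tool_name,
        "server_name": server_name,
        "agent_name": agent_name,
        "span_type": span_type,
        "parent_span_id": parent_span_id,
        "input_str": input_str,
        "output": output,
        "latency_ms": latency_ms,
        "status": status,
    }
```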

View traces

langsight sessions

Agent Sessions  (last 24h)
────────────────────────────────────────────────────────────
Session          Agent           Calls   Failed   Duration
sess-abc123      supervisor      8       1        3.4s
langsight sessions --id sess-abc123

Trace: sess-abc123  (supervisor)
└── supervisor (agent)
    ├── call_analyst                      120ms  success
    │   └── analyst (agent)
    │       └── read_query                 42ms  success
    └── summarize                          15ms  success

MCP servers in LangChain

If your LangChain agent uses MCP servers as tools, LangSight gives you action-layer visibility:
  • Tool call traces via this callback
  • MCP server health via langsight mcp-health
  • Security scanning via langsight security-scan
No extra setup needed — the callback + CLI work independently and complement each other.