By the end of this page you’ll have LangSight running locally and capturing
every tool call your AI agent makes — with full traces visible in both the
CLI and the dashboard.
Your Postgres volume was initialised with a different password than your current .env.
This happens when .env was deleted or regenerated while volumes still existed.
```shell
./scripts/quickstart.sh --reset
```
--reset wipes both the volumes and .env so they’re always in sync.
Script times out after ~4 minutes
On a slow machine or first run, the cold-start chain (postgres → clickhouse → api → dashboard)
can take up to 3–4 minutes. If it times out, the script shows which service is still starting
and its last 20 log lines. Check logs directly:

```shell
docker compose logs api --tail=50
docker compose logs clickhouse --tail=50
```
Port already in use
If ports 3003, 8000, 5432, or 8123 are taken by another service, the script warns you before starting. Find what’s using a port:
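If you’d rather check programmatically before launching, here’s a minimal standalone sketch (not part of the quickstart script) that probes each of LangSight’s default ports; on most systems `lsof -i :<port>` or `ss -ltnp` will then tell you which process owns a busy one:

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, i.e. the port is taken
        return s.connect_ex((host, port)) != 0

for port in (3003, 8000, 5432, 8123):
    print(f"{port}: {'free' if port_free(port) else 'in use'}")
```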
Add 2 lines to your existing agent code. LangSight traces every LLM call,
every MCP tool call, and every agent handoff — automatically.
```python
import langsight

langsight.auto_patch()  # LLM + MCP + handoffs — all automatic

question = "How many orders were placed today?"

# Wrap your agent logic in a session context
async with langsight.session(
    agent_name="my-agent",
    input=question,  # records the initial human prompt
) as sess:
    # MCP calls: auto-traced — no wrap() needed
    result = await mcp_session.call_tool("query", {"sql": "SELECT 1"})
    # ↑ traced automatically
    sess.set_output(result)  # records the final agent response
```
Set environment variables before running (all three values are in your .env):
```shell
export LANGSIGHT_URL=http://localhost:8000
export LANGSIGHT_API_KEY=ls_your_key         # grep LANGSIGHT_API_KEYS .env
export LANGSIGHT_PROJECT_ID=your-project-id  # copy from dashboard Settings > Project ID
```
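A quick sanity check before launching your agent can save a confusing debugging session. This is a standalone sketch, not a LangSight helper:

```python
import os

REQUIRED = ("LANGSIGHT_URL", "LANGSIGHT_API_KEY", "LANGSIGHT_PROJECT_ID")

def missing_langsight_vars(env=os.environ):
    """Return the names of required LangSight variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_langsight_vars()
    if missing:
        raise SystemExit("Missing: " + ", ".join(missing))
    print("All LangSight environment variables are set")
```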
auto_patch() patches mcp.ClientSession.call_tool, all OpenAI/Anthropic/
Gemini SDK classes, and enables handoff auto-detection — all at once.
If LangSight is unreachable, your agent works normally — nothing breaks.
If you need explicit control over which MCP sessions are traced (e.g. different
redact_payloads per server), use wrap() directly:
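A sketch of what that might look like. The exact `wrap()` signature isn’t shown on this page, so treat the parameter placement and the session names below as assumptions:

```python
import langsight

langsight.auto_patch()

# Assumed usage: wrap() returns a traced handle for one MCP session,
# letting you set redact_payloads per server (signature not verified here)
analytics = langsight.wrap(analytics_session, redact_payloads=True)   # hide SQL payloads
search = langsight.wrap(search_session, redact_payloads=False)

result = await analytics.call_tool("query", {"sql": "SELECT 1"})
```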
```python
import langsight

langsight.auto_patch()  # zero config — patches LangGraph automatically

# No callbacks needed — auto_patch() injects tracing into
# graph.stream(), graph.invoke(), and all LLM calls.
graph = builder.compile()
result = graph.invoke({"task": "Write a report on EU inflation"})
```
See LangGraph Integration for topology capture,
loop detection, and budget enforcement.
Open http://localhost:3003 and navigate to Sessions. You’ll see a full call tree with timing, status, payloads, and cost attribution for every tool call your agent made.
Prevent runaway loops and budget overruns with a few extra parameters:
```python
client = LangSightClient(
    url="http://localhost:8000",
    loop_detection=True,   # stop if agent loops the same call 3x
    max_steps=25,          # hard stop at 25 tool calls per session
    circuit_breaker=True,  # auto-disable servers after 5 consecutive failures
)
```