Documentation Index
Fetch the complete documentation index at: https://docs.langsight.dev/llms.txt
Use this file to discover all available pages before exploring further.
Requirements
| Requirement | Version |
|---|---|
| Python | 3.11+ |
| CrewAI | 1.5+ (event bus integration; older versions use a fallback) |
| LangSight server | Running at LANGSIGHT_URL (default http://localhost:8000) |
Installation
Using uv (recommended)
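Assuming the distribution is published as langsight (the same name used in the pyproject.toml snippet below):

```bash
uv add "crewai>=1.5" langsight
```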
Or in your pyproject.toml:
```toml
[project]
dependencies = [
    "crewai>=1.5",
    "langsight",
]
```
Then:
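```bash
# Assuming a uv-managed project: install/refresh the declared dependencies.
uv sync
```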
Using pip
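Assuming the same package name on PyPI:

```bash
pip install "crewai>=1.5" langsight
```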
Quick start — full working example
One function call. No callbacks, no decorators, no config files.
```python
#!/usr/bin/env python
import langsight

# Patches CrewAI event bus + Anthropic/OpenAI SDK automatically.
# Reads LANGSIGHT_URL, LANGSIGHT_API_KEY, LANGSIGHT_PROJECT_ID from env.
langsight.auto_patch()

from crewai import Agent, Task, Crew

analyst = Agent(
    role="SQL Analyst",
    goal="Write accurate SQL queries",
    backstory="You are an expert SQL analyst.",
)

task = Task(
    description="Count the number of orders in Q4 2024",
    expected_output="A SQL query and the result",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff()

# Done. Session grouping, agent attribution, and flush happen automatically.
```
Environment variables
Create a .env file in your project root:
```
LANGSIGHT_URL=http://localhost:8000
LANGSIGHT_API_KEY=ls_your_key_here
LANGSIGHT_PROJECT_ID=your_project_id_here
```
auto_patch() loads .env automatically (via python-dotenv if installed).
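If python-dotenv is not installed, you can set the variables yourself before calling auto_patch(). A minimal sketch (values are placeholders):

```python
# Minimal sketch: set the settings in-process when python-dotenv is not available.
import os

os.environ.setdefault("LANGSIGHT_URL", "http://localhost:8000")
os.environ.setdefault("LANGSIGHT_API_KEY", "ls_your_key_here")
os.environ.setdefault("LANGSIGHT_PROJECT_ID", "your_project_id_here")

import langsight
langsight.auto_patch()  # reads the variables set above
```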
LANGSIGHT_PROJECT_ID explained:
- Required for dashboard visibility. Without it, spans are ingested but not scoped to any project — they won’t appear when you select a project in the dashboard.
- Where to find it: Dashboard → Settings → Projects → click the ID pill to copy.
- For SDK/API key users: If your API key is project-scoped, the key’s project takes precedence and you can omit this variable.
Verifying it works
After running your crew, you should see:
1. Startup log
```
auto_patch.crewai.event_bus status=active
auto_patch.complete patched=['anthropic', 'crewai']
```
If you see event_bus_failed instead, check the error — usually a CrewAI version issue.
2. Dashboard — Sessions page
Navigate to your project in the dashboard. Within a few seconds of the crew starting:
- A new session appears in the Sessions page with status `running`
- The Live page shows spans arriving in real time
- Agent names, tool calls, and latencies populate as the crew executes
After crew.kickoff() returns:
- The session status changes to `success` (or `tool_failure` / `loop_detected` if issues occurred)
- Token counts and cost appear (from the Anthropic/OpenAI SDK patches)
3. CLI verification
```bash
langsight sessions --hours 1
```
```
Session      Agent  Calls  Failed  Duration
bced92bc…    crew   24     0       2m 40s
```
When spans appear
- During execution: Spans are buffered in-process and flushed periodically (every ~1 second when an async event loop is running). You’ll see them in the Live view within seconds.
- After completion: A final flush runs when crew.kickoff() returns. All remaining buffered spans are delivered.
- On process exit: An atexit handler ensures any un-flushed spans are sent even if kickoff raised an exception.
If the Live page stays empty for more than 10 seconds after kickoff starts, check the Troubleshooting section.
What gets captured
LangSight subscribes to 19 events on CrewAI’s native event bus — the same mechanism CrewAI uses for its own telemetry. Every span carries agent_role, task_name, and session_id automatically.
Crew lifecycle
| Event | Span | Data captured |
|---|---|---|
| CrewKickoffStarted | `crew:<name>` | Crew name, inputs, auto-generated session ID |
| CrewKickoffCompleted | `crew:<name>` | Output, total tokens, latency |
| CrewKickoffFailed | `crew:<name>` | Error message |
Task lifecycle
| Event | Span | Data captured |
|---|---|---|
| TaskStarted | `task:<description>` | Task name/description, assigned agent |
| TaskCompleted | `task:<description>` | Task output, agent role, latency |
| TaskFailed | `task:<description>` | Error message, agent role |
Agent execution
| Event | Span | Data captured |
|---|---|---|
| AgentExecutionStarted | `agent:<role>` | Agent role, task, tools, task prompt |
| AgentExecutionCompleted | `agent:<role>` | Output, latency |
| AgentExecutionError | `agent:<role>` | Error message |
Tool usage
| Event | Span | Data captured |
|---|---|---|
| ToolUsageStarted | Tool name | Tool name, agent role, tool args |
| ToolUsageFinished | Tool name | Output, started_at/finished_at, cache status, agent role |
| ToolUsageError | Tool name | Error, tool args, agent role |
MCP tools (`mcp__server__tool`) are automatically parsed into `server_name` + `tool_name`.
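For illustration only, here is roughly what that naming convention implies (this is not LangSight's internal parser):

```python
# Illustrative only: how an mcp__<server>__<tool> name splits into its parts.
def parse_mcp_tool_name(name: str) -> tuple[str | None, str]:
    if name.startswith("mcp__"):
        _, server_name, tool_name = name.split("__", 2)
        return server_name, tool_name
    return None, name  # not an MCP tool; keep the name as-is

print(parse_mcp_tool_name("mcp__github__create_issue"))  # ('github', 'create_issue')
```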
LLM calls
| Event | Span | Data captured |
|---|---|---|
| LLMCallStarted | `llm:<model>` | Model, messages, tools |
| LLMCallCompleted | `llm:<model>` | Response, model, agent role, task name |
| LLMCallFailed | `llm:<model>` | Error, model, agent role |
Token counts and cost come from the Anthropic/OpenAI SDK patches which remain active alongside the event bus.
Agent-to-Agent delegation (handoffs)
| Event | Span type | Data captured |
|---|---|---|
| A2ADelegationStarted/Completed | handoff | From agent, target agent, endpoint, task description, result |
| A2AConversationStarted/Completed | agent | A2A agent name, total turns, final result |
Internal delegation (via allow_delegation=True) is captured automatically as tool calls (DelegateWorkTool, AskQuestionTool).
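For reference, enabling internal delegation is plain CrewAI configuration; a minimal sketch:

```python
from crewai import Agent

# Standard CrewAI usage: with allow_delegation=True, this agent's delegations
# surface in LangSight as DelegateWorkTool / AskQuestionTool tool calls.
manager = Agent(
    role="Research Manager",
    goal="Coordinate research and delegate detailed work",
    backstory="You oversee a small research team.",
    allow_delegation=True,
)
```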
How it works
```
langsight.auto_patch()
│
├── Detects CrewAI >= 1.5
│   └── Registers LangSightCrewAIEventListener on crewai_event_bus
│       └── 19 event handlers (crew + task + agent + tool + LLM + A2A)
│
├── Patches Agent.kickoff (all versions)
│   └── Sets _agent_ctx for Anthropic/OpenAI SDK LLM attribution
│
├── Patches Crew.kickoff (all versions)
│   └── Auto-generates session_id + flushes spans after kickoff
│
└── Fallback: patches BaseTool.run (CrewAI < 1.5 only)
```
On older CrewAI (before the event bus existed), LangSight falls back to
monkey-patching BaseTool.run, Agent.kickoff, and Crew.kickoff.
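For intuition, the fallback follows the usual wrap-and-delegate pattern. A rough sketch only, not LangSight's actual code:

```python
# Rough illustration of the fallback pattern, not the real implementation.
import functools
import uuid
from crewai import Crew

_original_kickoff = Crew.kickoff

@functools.wraps(_original_kickoff)
def _traced_kickoff(self, *args, **kwargs):
    session_id = str(uuid.uuid4())  # one session per kickoff
    try:
        return _original_kickoff(self, *args, **kwargs)
    finally:
        # a real integration would record a span and flush buffered spans here
        pass

Crew.kickoff = _traced_kickoff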
auto_patch() vs LangSightCrewAICallback
There are two integration methods. Use auto_patch() unless you have a specific reason not to.
| | auto_patch() | LangSightCrewAICallback |
|---|---|---|
| Setup | 1 line | ~10 lines |
| What it captures | All 19 events (crew, task, agent, tool, LLM, A2A) | Tool calls only |
| Session grouping | Automatic | Manual (you manage session_id) |
| LLM token/cost | Automatic (patches Anthropic/OpenAI SDK) | Not included |
| Agent attribution | Automatic | Fixed (single agent_name) |
| Use case | All new projects | Legacy setups, per-agent server_name override |
When to use LangSightCrewAICallback
Only when you need per-agent configuration that auto_patch() doesn’t support — for example, routing different agents’ tool calls to different server_name values:
```python
from crewai import Agent
from langsight.sdk import LangSightClient
from langsight.integrations.crewai import LangSightCrewAICallback

client = LangSightClient(
    url="http://localhost:8000",
    project_id="my-project",
    redact_payloads=True,
)

callback = LangSightCrewAICallback(
    client=client,
    server_name="my-mcp-server",
    agent_name="support-agent",
)

agent = Agent(
    role="Support Agent",
    tools=[my_mcp_tool],
    callbacks=[callback],
)
```
Do not use both auto_patch() and LangSightCrewAICallback together — you’ll get duplicate spans.
Competitor comparison
How does LangSight’s CrewAI support compare to other observability platforms?
Feature matrix
| Capability | LangSight | Langfuse | LangSmith | Arize Phoenix | AgentOps |
|---|---|---|---|---|---|
| Setup | 1 line | ~8 lines | ~10 lines | ~8 lines | ~2 lines |
| Integration method | Native event bus | OpenInference (OTel) | OTel + SpanProcessor | OpenInference (OTel) | Native event bus |
| Crew lifecycle | Yes | Yes | Yes | Yes | Yes |
| Task lifecycle | Yes | Yes | Likely | Yes | Yes |
| Agent execution | Yes | Yes | Likely | Yes | Yes |
| Tool calls | Yes (with args, output, cache) | Yes | Likely | Yes | Yes |
| LLM calls | Yes (with tokens, cost) | Yes | Yes (separate instrumentor) | Yes | Yes |
| Agent handoffs (A2A) | Yes (native) | No | No | No | No |
| MCP tool attribution | Yes (auto-parsed) | No | No | No | No |
| Token/cost per call | Yes (via SDK patch) | Partial | Partial | Partial | Yes |
| Memory operations | Planned | Yes | No | Yes | Unknown |
| Self-hosted | Yes (Apache 2.0) | Yes (MIT) | No (SaaS) | Yes (Apache 2.0) | No (SaaS) |
| Forward-compatible | Yes (event bus) | Fragile (patches 12+ methods) | Fragile (OTel instrumentor) | Fragile (patches 12+ methods) | Yes (event bus) |
What makes LangSight different
Native event bus integration. LangSight uses the same crewai_event_bus that CrewAI’s own telemetry uses. This is forward-compatible — internal method refactors won’t break tracing. Only LangSight and AgentOps use this approach. Langfuse, LangSmith, and Arize Phoenix all use the openinference-instrumentation-crewai package which monkey-patches 12+ internal methods.
A2A delegation tracking. LangSight is the only observability tool that captures CrewAI’s Agent-to-Agent protocol events. When one agent delegates to another via A2A, LangSight records a handoff span with the source agent, target agent, endpoint, and result.
MCP tool attribution. CrewAI tools following the mcp__server__tool naming convention are automatically parsed into separate server_name and tool_name fields. No other tool does this.
Zero-code, truly. One import + one function call. No decorators on agents, no callback injection, no wrapper classes. Langfuse requires wrapping crew.kickoff() in a start_as_current_observation() context. LangSmith needs two separate instrumentors.
What competitors capture that we don’t (yet)
| Feature | Who has it | LangSight status |
|---|---|---|
| Memory operations (save/search) | Langfuse, Arize Phoenix | Planned (Phase 2) |
| Knowledge base queries | Arize Phoenix | Planned (Phase 2) |
| Flow lifecycle (CrewAI Flows) | Arize Phoenix | Planned (Phase 2) |
| Agent reasoning steps | Arize Phoenix | Planned (Phase 2) |
| Per-agent/per-task cost rollup | AgentOps | Planned |
Optional: explicit session name
Auto-generated UUID sessions work for most cases. Use langsight.session() when
you want a named, queryable session:
```python
import langsight
langsight.auto_patch()

from crewai import Agent, Task, Crew

crew = Crew(agents=[...], tasks=[...])

with langsight.session(session_id="q1-revenue-analysis"):
    result = crew.kickoff()
```
Troubleshooting
No spans in the dashboard
- LangSight server running? `curl http://localhost:8000/api/liveness`
- Env vars set? Check LANGSIGHT_URL and LANGSIGHT_API_KEY. Check LANGSIGHT_PROJECT_ID — without it, spans are ingested but not visible in any project.
- Check startup logs: Look for auto_patch.crewai.event_bus status=active. If you see event_bus_failed, the error will explain why.
- tracing=False in Crew constructor? This disables CrewAI’s event bus entirely. Remove it — LangSight needs the event bus to capture crew/task/agent/tool events. Without it, only raw LLM calls (via SDK patches) are captured, with no session grouping.
- CrewAI version: `python -c "import crewai; print(crewai.__version__)"` — event bus requires 1.5+.
- Python version: `python --version` — LangSight requires Python 3.11+.
Spans appear only after process exits
This is normal if the crew runs for less than 1 second (the flush loop fires every ~1s). For longer crews, spans appear during execution. If they consistently only appear at exit, the async flush loop may not be starting — check that python-dotenv is installed (uv add python-dotenv) so env vars load before CrewAI initialises its event loop.
Duplicate spans
Using both auto_patch() AND a manual LangSightCrewAICallback? Pick one — auto_patch() is recommended.
LLM calls missing agent name
The Anthropic/OpenAI SDK patches use contextvars, which don’t propagate across threads, so LLM spans created from a worker thread can lose the agent name. The event bus LLM events always carry agent_role — those spans will have correct attribution regardless.
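If you spawn your own worker threads around agent code, you can carry the ambient context across explicitly. A generic standard-library sketch:

```python
# Generic pattern: run a function in a thread with the caller's contextvars preserved.
import contextvars
import threading

def run_in_thread_with_context(fn, *args, **kwargs):
    ctx = contextvars.copy_context()  # snapshot the current context
    thread = threading.Thread(target=lambda: ctx.run(fn, *args, **kwargs))
    thread.start()
    return thread
```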
Session shows wrong duration
If session duration looks much longer than the actual run (e.g., 2 hours when the crew ran for 2 minutes), check that your system clock and timezone settings are correct. LangSight converts CrewAI’s naive timestamps to UTC — a misconfigured system timezone can cause offset issues.
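A quick way to check is to compare what the system reports for local and UTC time against a reliable clock:

```python
# Quick diagnostic: print what the system thinks local and UTC time are.
from datetime import datetime, timezone

print("local time:", datetime.now())              # should match your wall clock
print("UTC time:  ", datetime.now(timezone.utc))  # should match actual UTC
```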