

If your agent framework already emits OpenTelemetry spans, point its exporter at LangSight’s OTLP endpoint. No code changes required.

Setup

# Required
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8000/api/traces/otlp

# Optional — scope spans to a project
export LANGSIGHT_PROJECT_ID=my-project

Or, via the OTEL Collector (production):

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

Framework examples

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans to LangSight over OTLP/HTTP
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://localhost:8000/api/traces/otlp")
    )
)

# Register the provider so your framework's spans are routed through it
trace.set_tracer_provider(tracer_provider)

Span detection

LangSight extracts MCP tool calls from OTLP spans by looking for:
  1. Span name matching mcp.* (e.g. mcp.postgres-mcp.query)
  2. Attribute gen_ai.tool.name (OpenAI Agents SDK / GenAI semantic conventions)
  3. Attribute mcp.server.name + mcp.tool.name
Non-MCP spans are silently ignored.
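The three rules above can be sketched as a small predicate. This is an illustrative helper, not LangSight's actual implementation; the function name and the prefix-match interpretation of mcp.* are assumptions:

```python
def is_mcp_span(name: str, attributes: dict) -> bool:
    """Hypothetical sketch of LangSight's MCP span detection rules."""
    # 1. Span name matching mcp.* (e.g. mcp.postgres-mcp.query)
    if name.startswith("mcp."):
        return True
    # 2. GenAI semantic-convention tool attribute
    if "gen_ai.tool.name" in attributes:
        return True
    # 3. Explicit MCP server + tool attributes
    if "mcp.server.name" in attributes and "mcp.tool.name" in attributes:
        return True
    # Non-MCP spans are ignored
    return False
```

A span named `mcp.postgres-mcp.query` matches rule 1 even without attributes, while a generically named span still matches if it carries `gen_ai.tool.name` or the `mcp.server.name` / `mcp.tool.name` pair.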

OTEL Collector (production)

For production, use the included OTEL Collector config:

docker compose up -d otel-collector

The collector receives on ports 4317 (gRPC) and 4318 (HTTP), filters to MCP spans, and forwards to LangSight.
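A collector config of this shape would implement that pipeline. This is a sketch, not the shipped file: the filter condition, pipeline names, and exporter endpoint are assumptions based on the receive/filter/forward behavior described above:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  filter/mcp-only:
    error_mode: ignore
    traces:
      span:
        # Drop everything that does not look like an MCP span (assumed condition)
        - not IsMatch(name, "mcp\\..*")

exporters:
  otlphttp/langsight:
    endpoint: http://localhost:8000/api/traces/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter/mcp-only]
      exporters: [otlphttp/langsight]
```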

Token cost attributes

LangSight extracts token counts from OTEL spans automatically if your framework emits them:
| OTLP attribute | Maps to | Used for |
| --- | --- | --- |
| gen_ai.usage.input_tokens | input_tokens | LLM cost calculation |
| gen_ai.usage.output_tokens | output_tokens | LLM cost calculation |
| gen_ai.request.model | model_id | Matches against model pricing table |
If these are present, the cost breakdown will use token-based pricing ($/1M tokens) instead of flat per-call pricing.
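The token-based pricing arithmetic can be sketched as below. The helper and the per-million rates are illustrative only; real rates come from LangSight's model pricing table, keyed by gen_ai.request.model:

```python
def token_cost(input_tokens: int, output_tokens: int,
               input_per_1m: float, output_per_1m: float) -> float:
    """Token-based pricing in $/1M tokens (hypothetical helper).

    input_tokens / output_tokens correspond to the
    gen_ai.usage.* span attributes described above.
    """
    return (input_tokens * input_per_1m
            + output_tokens * output_per_1m) / 1_000_000

# e.g. 1,000 input and 500 output tokens at illustrative
# rates of $3 / $15 per 1M tokens → $0.0105 for the call
cost = token_cost(1_000, 500, input_per_1m=3.0, output_per_1m=15.0)
```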