# Documentation Index

Fetch the complete documentation index at: https://docs.langsight.dev/llms.txt

Use this file to discover all available pages before exploring further.
## Usage
### Options
| Option | Default | Description |
|---|---|---|
| `--config` | auto-discovered | Path to `.langsight.yaml` |
| `--window`, `-w` | `24h` | Look-back window (`24h`, `7d`, `30d`) |
| `--project` | all | Filter to a specific project ID |
| `--json` | `false` | Output as JSON |
### Example
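A hypothetical invocation, shown only as a sketch: the `langsight` command name and the project ID are assumptions; the flags are the ones documented in the Options table above.

```shell
# Sketch only: "langsight" as the command name and "proj_123" are assumptions;
# the flags come from the Options table.
langsight --window 7d --project proj_123 --json
```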
## How costs are calculated
### LLM spans
Token-based pricing:

`(input_tokens / 1,000,000 × input_price) + (output_tokens / 1,000,000 × output_price)`
Prices are stored in the database and pre-seeded for 21 models. Update them in the dashboard under Settings → Model Pricing.
Supported models (pre-seeded):
- Anthropic: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
- OpenAI: gpt-4o, gpt-4o-mini, o3, o3-mini, o1
- Google: gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash, gemini-2.5-pro
- AWS Bedrock: nova-pro, nova-lite, nova-micro
- Meta: llama-3.1-70b, llama-3.3-70b, and others
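The pricing formula above can be sketched in a few lines. The prices used here are illustrative placeholders (USD per million tokens), not LangSight's pre-seeded values:

```python
def llm_span_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Cost = tokens / 1,000,000 × per-million price, summed over input and output."""
    return (input_tokens / 1_000_000 * input_price
            + output_tokens / 1_000_000 * output_price)

# e.g. 250k input + 12k output tokens at $3.00 / $15.00 per million tokens:
print(round(llm_span_cost(250_000, 12_000, 3.00, 15.00), 4))  # → 0.93
```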
### Tool call spans
Fixed `cost_per_call` rules from `.langsight.yaml` apply to non-LLM MCP tool calls:
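The exact rule schema isn't shown in this section, so the fragment below is only a plausible sketch; apart from the file name and the `cost_per_call` key, every key name and value is an assumption:

```yaml
# Hypothetical .langsight.yaml fragment; "tool_costs" and "match" are assumed keys.
tool_costs:
  - match: "web_search"    # MCP tool name the rule applies to (assumed)
    cost_per_call: 0.005   # fixed cost charged per invocation
```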
### OTLP token extraction
If you send OTLP traces, LangSight extracts token counts and the model ID automatically from these span attributes:

- `gen_ai.usage.input_tokens`
- `gen_ai.usage.output_tokens`
- `gen_ai.request.model`
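As a rough sketch of the lookup, using the attribute keys listed above. A plain dict stands in for real span attributes here; the full OTLP payload structure is more deeply nested, and the token and model values are made up:

```python
# Simplified stand-in for OTLP span attributes (real payloads are nested).
span_attributes = {
    "gen_ai.usage.input_tokens": 250_000,
    "gen_ai.usage.output_tokens": 12_000,
    "gen_ai.request.model": "claude-sonnet-4-6",
}

# Pull out the three attributes LangSight reads, with safe fallbacks.
input_tokens = span_attributes.get("gen_ai.usage.input_tokens", 0)
output_tokens = span_attributes.get("gen_ai.usage.output_tokens", 0)
model = span_attributes.get("gen_ai.request.model", "unknown")

print(model, input_tokens, output_tokens)
```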