Usage

langsight costs [OPTIONS]

Options

Option          Default           Description
--config        auto-discovered   Path to .langsight.yaml
--window, -w    24h               Look-back window (24h, 7d, 30d)
--project       all               Filter to a specific project ID
--json          false             Output as JSON
langsight costs requires the ClickHouse backend and a span source (the LangSight SDK or the OTLP integration). The default mode: dual includes ClickHouse; start it with docker compose up -d. With mode: postgres, the command reports "No span data found".

Example

langsight costs --window 7d
Cost Breakdown  (last 7d — project: default)
──────────────────────────────────────────────────────────────────────
Total: $58.42   LLM: $55.10   Tool calls: $3.32
Tokens: 11,200,000 input  /  2,840,000 output

By model:
Model                    Input tokens   Output tokens   Cost
claude-sonnet-4-6        4,200,000       980,000       $24.18
gpt-4o                   3,800,000       720,000       $19.80
gemini-1.5-pro           2,100,000       640,000        $8.61
claude-haiku-4-5           800,000       290,000        $2.51
gpt-4o-mini              <other>                        —

By tool call (non-LLM):
Server          Tool               Calls     $/call   Total
jira-mcp        get_issue          2,100    $0.0010   $2.10
slack-mcp       post_message         850    $0.0010   $0.85
postgres-mcp    query                150    $0.0000   $0.00

How costs are calculated

LLM spans

Token-based pricing:

(input_tokens / 1,000,000 × input_price) + (output_tokens / 1,000,000 × output_price)

Prices are stored in the database and pre-seeded for 21 models. Update them in the dashboard under Settings → Model Pricing.

Supported models (pre-seeded):
  • Anthropic: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
  • OpenAI: gpt-4o, gpt-4o-mini, o3, o3-mini, o1
  • Google: gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash, gemini-2.5-pro
  • AWS Bedrock: nova-pro, nova-lite, nova-micro
  • Meta: llama-3.1-70b, llama-3.3-70b, and others
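The pricing formula above can be sketched as a small function. The per-million-token prices used here are illustrative placeholders, not LangSight's seeded values; check Settings → Model Pricing for the real numbers.

```python
def llm_span_cost(input_tokens, output_tokens, input_price, output_price):
    """Cost in dollars for one LLM span; prices are per 1M tokens."""
    return (input_tokens / 1_000_000 * input_price
            + output_tokens / 1_000_000 * output_price)

# e.g. 4.2M input / 0.98M output tokens at an assumed $3 / $15 per 1M tokens:
cost = llm_span_cost(4_200_000, 980_000, 3.00, 15.00)
print(f"${cost:.2f}")  # $27.30
```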

Tool call spans

Fixed cost_per_call rules from .langsight.yaml (applied to non-LLM MCP tool calls):
costs:
  rules:
    - server: "*"
      tool: "*"
      cost_per_call: 0.001
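A sketch of how such rules could be matched against a tool-call span, using shell-style wildcard matching. First-matching-rule-wins semantics and the extra postgres-mcp rule are assumptions for illustration; the docs above only show a single catch-all rule.

```python
from fnmatch import fnmatch

# Hypothetical rule list: a server-specific rule before the catch-all.
RULES = [
    {"server": "postgres-mcp", "tool": "*", "cost_per_call": 0.0},
    {"server": "*", "tool": "*", "cost_per_call": 0.001},
]

def tool_call_cost(server, tool, rules=RULES):
    """Return the cost_per_call of the first rule matching this span."""
    for rule in rules:
        if fnmatch(server, rule["server"]) and fnmatch(tool, rule["tool"]):
            return rule["cost_per_call"]
    return 0.0  # no matching rule: treat the call as free

print(tool_call_cost("jira-mcp", "get_issue"))   # 0.001
print(tool_call_cost("postgres-mcp", "query"))   # 0.0
```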

OTLP token extraction

If you send OTLP traces, LangSight extracts token counts and model ID automatically from span attributes:
  • gen_ai.usage.input_tokens
  • gen_ai.usage.output_tokens
  • gen_ai.request.model
No SDK changes are needed if your framework already emits these attributes (LangChain, LlamaIndex, etc. all do).
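The extraction step amounts to reading those attributes off each span. A minimal sketch, using a plain dict as a stand-in for the real OTLP span attribute map:

```python
def extract_usage(span_attributes):
    """Pull the gen_ai.* fields LangSight reads from an OTLP span.

    span_attributes is a flat dict here; the real OTLP payload nests
    these inside the protobuf span structure.
    """
    return {
        "model": span_attributes.get("gen_ai.request.model"),
        "input_tokens": int(span_attributes.get("gen_ai.usage.input_tokens", 0)),
        "output_tokens": int(span_attributes.get("gen_ai.usage.output_tokens", 0)),
    }

attrs = {
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 1200,
    "gen_ai.usage.output_tokens": 340,
}
print(extract_usage(attrs))
```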

API endpoint

GET /api/costs/breakdown?project_id=my-project&window=7d
See API Reference for the full response schema.
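Calling the endpoint from Python can be sketched as below; the base URL is a placeholder for wherever your LangSight dashboard is hosted.

```python
from urllib.parse import urlencode

base = "http://localhost:3000"  # assumed local dashboard address
params = {"project_id": "my-project", "window": "7d"}
url = f"{base}/api/costs/breakdown?{urlencode(params)}"
print(url)
# http://localhost:3000/api/costs/breakdown?project_id=my-project&window=7d
```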