Wrap an OpenAI or AsyncOpenAI client with wrap_llm() to auto-trace every chat.completions.create() call and every tool-use block in the response.
Using the OpenAI Agents SDK? See the dedicated OpenAI Agents integration instead — its lifecycle hooks cover the full agent runner.
## Installation
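The tracing SDK's own package name is not shown in this excerpt, so it is left as a placeholder here; at a minimum the OpenAI Python client library is required:

```shell
pip install openai   # OpenAI client library (v1+)
# plus the tracing SDK that provides wrap_llm() — install per its own docs
```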
## Quick start
Both the sync OpenAI client and the async AsyncOpenAI client are traced; each exposes chat.completions.create() (awaited on the async client — the v1 SDK has no separate acreate() method).
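The wrapping pattern can be sketched as below. This is an illustration only, not the library's implementation: the real wrap_llm() also records tokens, model_id, and async calls, and FakeClient, wrap_llm_sketch, and the SPANS list are hypothetical stand-ins.

```python
import functools
import time

# Stand-in for a real OpenAI client; the attribute path mirrors
# client.chat.completions.create so the patching logic is realistic.
class _Completions:
    def create(self, **kwargs):
        return {"model": kwargs.get("model"),
                "choices": [{"message": {"content": "ok"}}]}

class _Chat:
    def __init__(self):
        self.completions = _Completions()

class FakeClient:
    def __init__(self):
        self.chat = _Chat()

SPANS = []  # collected trace spans

def wrap_llm_sketch(client):
    """Patch chat.completions.create so every call emits a span dict."""
    original = client.chat.completions.create

    @functools.wraps(original)
    def traced(**kwargs):
        start = time.time()
        response = original(**kwargs)
        # Field names follow the span table in this doc.
        SPANS.append({
            "server_name": "openai",
            "tool_name": f"generate/{kwargs.get('model')}",
            "span_type": "agent",
            "duration_s": time.time() - start,
        })
        return response

    client.chat.completions.create = traced
    return client

client = wrap_llm_sketch(FakeClient())
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)
print(SPANS[0]["tool_name"])  # → generate/gpt-4o
```

With the real SDK you would pass your actual OpenAI client to wrap_llm() once at startup and keep using it unchanged everywhere else.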
## What gets traced
| Span | Fields |
|---|---|
| LLM generation | server_name="openai", tool_name="generate/gpt-4o", span_type="agent", tokens, model_id |
| Tool call (per tool) | tool_name from response, span_type="tool_call", parent_span_id, input_args |
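The per-tool child spans in the table can be sketched as follows — a hypothetical walk over a response's tool calls that links each child span to the generation span via parent_span_id. The response dict, uuid-based span id, and tool_call_spans helper are illustrative assumptions, not the library's API.

```python
import uuid

# Hypothetical response shape, mirroring OpenAI chat tool calls.
response = {
    "choices": [{"message": {"tool_calls": [
        {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
        {"function": {"name": "search", "arguments": '{"q": "news"}'}},
    ]}}],
}

parent_span_id = str(uuid.uuid4())  # id of the LLM generation span

def tool_call_spans(response, parent_span_id):
    """Emit one tool_call span per tool call, linked to the parent span."""
    spans = []
    for call in response["choices"][0]["message"].get("tool_calls", []):
        spans.append({
            "tool_name": call["function"]["name"],     # tool_name from response
            "span_type": "tool_call",
            "parent_span_id": parent_span_id,
            "input_args": call["function"]["arguments"],
        })
    return spans

spans = tool_call_spans(response, parent_span_id)
print([s["tool_name"] for s in spans])  # → ['get_weather', 'search']
```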