PromptLayer natively supports OpenTelemetry (OTEL), the industry-standard observability framework. You can send traces from any OpenTelemetry-compatible SDK or Collector directly to PromptLayer — no PromptLayer SDK required. This is ideal when:
- Your framework isn’t listed on the Integrations page
- You already have an OpenTelemetry pipeline and want to add PromptLayer as a destination
- You want vendor-neutral instrumentation
If you’re using a supported framework like the Vercel AI SDK, OpenAI Agents SDK, or Claude Code, see the Integrations page for framework-specific setup — those integrations handle the OTEL configuration for you.
How It Works
PromptLayer exposes an OTLP/HTTP endpoint for trace ingestion; the exact URL is documented in the OTLP Ingest Traces API Reference linked under Next Steps.
Setup
Configure your OpenTelemetry SDK to export traces to PromptLayer using the OTLP/HTTP exporter.
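A minimal sketch with the Python OpenTelemetry SDK follows. The endpoint URL and the X-API-KEY header are assumptions for illustration; check the OTLP Ingest Traces API Reference for the exact values.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans to PromptLayer over OTLP/HTTP.
exporter = OTLPSpanExporter(
    endpoint="https://api.promptlayer.com/otel/v1/traces",  # hypothetical URL; see the API reference
    headers={"X-API-KEY": "pl_..."},  # assumed auth header; use your PromptLayer API key
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))  # batch spans before sending
trace.set_tracer_provider(provider)
```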
GenAI Semantic Conventions
Spans that use GenAI semantic conventions are automatically parsed into PromptLayer request logs. Add these attributes to your LLM call spans (a sketch follows the table):

| Attribute | Description |
|---|---|
| gen_ai.request.model | Model name (e.g. gpt-4, claude-sonnet-4-20250514) |
| gen_ai.provider.name | Provider (e.g. openai, anthropic) |
| gen_ai.operation.name | Operation type (chat, text_completion, embeddings) |
| gen_ai.usage.input_tokens | Input token count |
| gen_ai.usage.output_tokens | Output token count |
| gen_ai.input.messages | Request messages |
| gen_ai.output.messages | Response messages |
| gen_ai.request.temperature | Temperature parameter |
| gen_ai.request.max_tokens | Max tokens parameter |
| gen_ai.response.finish_reasons | Finish reasons |
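As an illustration, here is how these attributes might be set by hand on a span; the message payloads are serialized as JSON strings, which is an assumption about the expected encoding.

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("my-app")  # hypothetical instrumentation name

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.temperature", 0.7)
    span.set_attribute("gen_ai.input.messages", json.dumps(
        [{"role": "user", "content": "Hello!"}]
    ))
    # ... make the actual LLM call here, then record the response ...
    span.set_attribute("gen_ai.output.messages", json.dumps(
        [{"role": "assistant", "content": "Hi there!"}]
    ))
    span.set_attribute("gen_ai.usage.input_tokens", 9)
    span.set_attribute("gen_ai.usage.output_tokens", 12)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
```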
Event-Based Conventions
PromptLayer also supports the newer event-based GenAI semantic conventions, where message content is sent as span events rather than span attributes. This format is used by frameworks like LiveKit and newer versions of the OpenTelemetry GenAI instrumentation. The following event types are recognized:

| Event Name | Description |
|---|---|
| gen_ai.system.message | System message |
| gen_ai.user.message | User message |
| gen_ai.assistant.message | Assistant message (including tool calls) |
| gen_ai.tool.message | Tool/function result message |
| gen_ai.choice | Model response/choice |
gen_ai.system.message.content, gen_ai.user.message.content, and tool call data are automatically extracted and mapped to PromptLayer request logs.
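A sketch of emitting event-based messages from Python follows; the attribute keys on each event (content, finish_reason, message) are assumptions based on the OTel GenAI event conventions, not confirmed field names.

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("my-app")  # hypothetical instrumentation name

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4")
    # Message content rides on span events instead of span attributes.
    span.add_event("gen_ai.user.message", {"content": "Hello!"})
    span.add_event("gen_ai.choice", {
        "finish_reason": "stop",
        "message": json.dumps({"role": "assistant", "content": "Hi there!"}),
    })
```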
When both attribute-based messages (gen_ai.input.messages) and event-based messages are present on the same span, the attribute-based messages take priority.
Linking to Prompt Templates
You can associate OTEL spans with prompt templates in your PromptLayer workspace by setting custom span attributes (a sketch follows the table):

| Attribute | Type | Description |
|---|---|---|
| promptlayer.prompt.name | string | Name of the prompt template |
| promptlayer.prompt.id | integer | ID of the prompt template (alternative to name) |
| promptlayer.prompt.version | integer | Specific version number (optional) |
| promptlayer.prompt.label | string | Label to resolve the version (e.g. production) |
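For example, tagging a span with a template name and a release label; the template name welcome-email is hypothetical.

```python
with tracer.start_as_current_span("chat gpt-4") as span:
    # Resolve the template by name and the version carrying the "production" label.
    span.set_attribute("promptlayer.prompt.name", "welcome-email")  # hypothetical template name
    span.set_attribute("promptlayer.prompt.label", "production")
```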
Using an OpenTelemetry Collector
If you’re already running an OpenTelemetry Collector, you can add PromptLayer as an additional exporter in your Collector config, as sketched below.
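A hedged sketch of the Collector's YAML config using the standard otlphttp exporter; the endpoint and header are assumptions, so confirm them against the OTLP Ingest Traces API Reference. Note that the otlphttp exporter appends /v1/traces to the base endpoint itself.

```yaml
exporters:
  otlphttp/promptlayer:
    # Hypothetical base URL; the exporter appends /v1/traces automatically.
    endpoint: https://api.promptlayer.com/otel
    headers:
      X-API-KEY: ${env:PROMPTLAYER_API_KEY}  # assumed auth header

service:
  pipelines:
    traces:
      receivers: [otlp]                  # your existing receivers
      exporters: [otlphttp/promptlayer]  # add alongside your existing exporters
```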
Content Types
The endpoint accepts both binary protobuf (application/x-protobuf, recommended) and JSON (application/json) encodings. Both support Content-Encoding: gzip.
Next Steps
- OTLP Ingest Traces API Reference — full endpoint documentation
- Integrations — framework-specific setups (Vercel AI SDK, OpenAI Agents, Claude Code)
- Traces — PromptLayer SDK native tracing with @traceable and wrapWithSpan

