PromptLayer natively supports OpenTelemetry (OTEL), the industry-standard observability framework. You can send traces from any OpenTelemetry-compatible SDK or Collector directly to PromptLayer — no PromptLayer SDK required.
This is ideal when:
- Your framework isn’t listed on the Integrations page
- You already have an OpenTelemetry pipeline and want to add PromptLayer as a destination
- You want vendor-neutral instrumentation
How It Works
PromptLayer exposes an OTLP/HTTP endpoint at:
https://api.promptlayer.com/v1/traces
Any OpenTelemetry SDK or Collector can export traces to this endpoint. Spans that include GenAI semantic convention attributes are automatically converted into PromptLayer request logs.
Setup
Configure your OpenTelemetry SDK to export traces to PromptLayer using the OTLP/HTTP exporter.
```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.promptlayer.com/v1/traces",
    headers={"X-API-KEY": "your-promptlayer-api-key"},
)

provider = TracerProvider(
    resource=Resource.create({"service.name": "my-llm-app"})
)
provider.add_span_processor(BatchSpanProcessor(exporter))

# Use the tracer to create spans
tracer = provider.get_tracer("my-llm-app")
```
GenAI Semantic Conventions
Spans that use GenAI semantic conventions are automatically parsed into PromptLayer request logs. Add these attributes to your LLM call spans:
| Attribute | Description |
|---|---|
| gen_ai.request.model | Model name (e.g. gpt-4, claude-sonnet-4-20250514) |
| gen_ai.provider.name | Provider (e.g. openai, anthropic) |
| gen_ai.operation.name | Operation type (chat, text_completion, embeddings) |
| gen_ai.usage.input_tokens | Input token count |
| gen_ai.usage.output_tokens | Output token count |
| gen_ai.input.messages | Request messages |
| gen_ai.output.messages | Response messages |
| gen_ai.request.temperature | Temperature parameter |
| gen_ai.request.max_tokens | Max tokens parameter |
| gen_ai.response.finish_reasons | Finish reasons |
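As a sketch, the attributes above can be assembled into a plain dict and attached in one call with `span.set_attributes()`. The `genai_attributes` helper below is hypothetical, not part of any SDK; note that message lists are JSON-serialized, since OTEL span attributes only allow primitives and flat arrays of primitives.

```python
import json

# Hypothetical helper: collect GenAI semantic-convention attributes for a
# single chat completion so they can be attached via span.set_attributes().
def genai_attributes(model, provider, input_tokens, output_tokens,
                     messages, responses, temperature=None):
    attrs = {
        "gen_ai.request.model": model,
        "gen_ai.provider.name": provider,
        "gen_ai.operation.name": "chat",
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        # Message lists are serialized to JSON strings: OTEL span attributes
        # cannot hold nested objects.
        "gen_ai.input.messages": json.dumps(messages),
        "gen_ai.output.messages": json.dumps(responses),
    }
    if temperature is not None:
        attrs["gen_ai.request.temperature"] = temperature
    return attrs

attrs = genai_attributes(
    "gpt-4", "openai", 12, 5,
    [{"role": "user", "content": "Hello"}],
    [{"role": "assistant", "content": "Hi there!"}],
    temperature=0.7,
)
# Inside an active span:
# span.set_attributes(attrs)
```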
Event-Based Conventions
PromptLayer also supports the newer event-based GenAI semantic conventions where message content is sent as span events rather than span attributes. This format is used by frameworks like LiveKit and newer versions of OpenTelemetry GenAI instrumentation.
The following event types are recognized:
| Event Name | Description |
|---|---|
| gen_ai.system.message | System message |
| gen_ai.user.message | User message |
| gen_ai.assistant.message | Assistant message (including tool calls) |
| gen_ai.tool.message | Tool/function result message |
| gen_ai.choice | Model response/choice |
Event attributes like gen_ai.system.message.content, gen_ai.user.message.content, and tool call data are automatically extracted and mapped to PromptLayer request logs.
When both attribute-based messages (gen_ai.input.messages) and event-based messages are present on the same span, attribute-based messages take priority.
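As an illustrative sketch, a single chat turn in event form might look like the following. The `conversation_events` helper and the attribute layout of the `gen_ai.choice` event are assumptions for illustration; with a real OTEL span, each entry would be passed to `span.add_event(name, attributes=...)`.

```python
import json

# Illustrative sketch: one chat turn expressed as event-based GenAI
# conventions. Each (name, attributes) pair corresponds to one call to
# span.add_event(name, attributes=attrs) on a real OTEL span.
def conversation_events(system, user, reply):
    return [
        ("gen_ai.system.message",
         {"gen_ai.system.message.content": system}),
        ("gen_ai.user.message",
         {"gen_ai.user.message.content": user}),
        # The model's response is emitted as a gen_ai.choice event; the exact
        # attribute layout varies by instrumentation, so this shape is assumed.
        ("gen_ai.choice",
         {"body": json.dumps({"role": "assistant", "content": reply})}),
    ]

events = conversation_events(
    "You are a helpful assistant.",
    "What does OTLP stand for?",
    "OpenTelemetry Protocol.",
)
# Inside an active span:
# for name, attrs in events:
#     span.add_event(name, attributes=attrs)
```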
Linking to Prompt Templates
You can associate OTEL spans with prompt templates in your PromptLayer workspace by setting custom span attributes:
| Attribute | Type | Description |
|---|---|---|
| promptlayer.prompt.name | string | Name of the prompt template |
| promptlayer.prompt.id | integer | ID of the prompt template (alternative to name) |
| promptlayer.prompt.version | integer | Specific version number (optional) |
| promptlayer.prompt.label | string | Label to resolve version (e.g. production) |
```python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("llm-call") as span:
    # Link this span to a prompt template
    span.set_attribute("promptlayer.prompt.name", "my-prompt")
    span.set_attribute("promptlayer.prompt.label", "production")

    # Add GenAI attributes
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

    # ... make your LLM call ...
```
Using an OpenTelemetry Collector
If you’re already running an OpenTelemetry Collector, you can add PromptLayer as an additional exporter in your Collector config:
```yaml
exporters:
  otlphttp/promptlayer:
    endpoint: "https://api.promptlayer.com"
    headers:
      X-API-Key: "${PROMPTLAYER_API_KEY}"

service:
  pipelines:
    traces:
      exporters: [otlphttp/promptlayer]
```
This lets you fan out traces to PromptLayer alongside your existing observability backends (Datadog, New Relic, Jaeger, etc.) without changing your application code.
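For example, a Collector that already exports to another backend can list PromptLayer alongside it in the same traces pipeline. The `otlp/other` exporter name and endpoint below are illustrative placeholders for whatever backend you already run:

```yaml
exporters:
  otlp/other:                      # your existing backend (name illustrative)
    endpoint: "otel-backend.example.com:4317"
  otlphttp/promptlayer:
    endpoint: "https://api.promptlayer.com"
    headers:
      X-API-Key: "${PROMPTLAYER_API_KEY}"

service:
  pipelines:
    traces:
      exporters: [otlp/other, otlphttp/promptlayer]
```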
Content Types
The endpoint accepts both binary protobuf (application/x-protobuf, recommended) and JSON (application/json) encodings. Both support Content-Encoding: gzip.
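As a minimal sketch of the wire format, the snippet below builds a gzip-compressed OTLP/JSON request by hand using only the standard library. In practice you would let an OTLP exporter handle this; the empty `resourceSpans` payload is just a placeholder for a real `ExportTraceServiceRequest`.

```python
import gzip
import json
import urllib.request

# Placeholder body: a real export would contain resourceSpans with spans.
payload = {"resourceSpans": []}
body = gzip.compress(json.dumps(payload).encode("utf-8"))

req = urllib.request.Request(
    "https://api.promptlayer.com/v1/traces",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
        "X-API-KEY": "your-promptlayer-api-key",
    },
)
# urllib.request.urlopen(req)  # not executed here; requires a valid API key
```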
Next Steps