March 05, 2026
New Features
OpenTelemetry Trace Ingestion
Native support for industry-standard OpenTelemetry Protocol (OTLP) trace ingestion, enabling seamless integration with existing observability tooling.
- Ingest traces via the standard OTLP/HTTP endpoint at /v1/traces
- Automatic extraction of GenAI semantic conventions for OpenAI and Anthropic providers
- Convert OTLP spans into PromptLayer request logs with proper error mapping and metadata preservation
- Import and replay conversations with parallel tool calls from request logs
- Maintain correct message ordering when tools are invoked across conversation turns
- Proper hydration of chat history with multiple tool responses per assistant turn
- Reset and re-seed chat from any logged request with one click
- Automatically diff request messages against current template to extract conversation context
- Per-variable-set chat history support for testing multiple scenarios simultaneously
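The OTLP ingestion described above can be sketched as a plain OTLP/JSON trace payload posted to the /v1/traces endpoint. This is a minimal standard-library sketch; the service name, trace/span IDs, timestamps, and token values are illustrative, and the exact set of GenAI attributes PromptLayer extracts may differ:

```python
import json

def otlp_genai_payload(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    """Build an OTLP ExportTraceServiceRequest (JSON encoding) for one LLM span."""
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "my-llm-app"}},
            ]},
            "scopeSpans": [{
                "scope": {"name": "my.instrumentation"},
                "spans": [{
                    "traceId": "5b8efff798038103d269b633813fc60c",
                    "spanId": "eee19b7ec3c1b174",
                    "name": "chat openai",
                    "kind": 3,  # SPAN_KIND_CLIENT
                    "startTimeUnixNano": "1700000000000000000",
                    "endTimeUnixNano": "1700000001000000000",
                    "attributes": [
                        # GenAI semantic-convention attributes used for extraction
                        {"key": "gen_ai.system", "value": {"stringValue": "openai"}},
                        {"key": "gen_ai.request.model", "value": {"stringValue": model}},
                        {"key": "gen_ai.usage.input_tokens", "value": {"intValue": str(prompt_tokens)}},
                        {"key": "gen_ai.usage.output_tokens", "value": {"intValue": str(completion_tokens)}},
                    ],
                }],
            }],
        }]
    }

payload = otlp_genai_payload("gpt-4o", 120, 48)
# POST this body to <your-promptlayer-base-url>/v1/traces
# with Content-Type: application/json
body = json.dumps(payload)
```

Any OTLP/HTTP exporter (for example the standard OpenTelemetry SDK exporters) produces this same wire shape, so existing instrumentation should work unchanged once pointed at the endpoint.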
Improvements
- Fixed playground chat crashes when trace metadata contains non-string values during URL sharing
- Resolved 500 errors when reading prompts that use legacy LangChain message format
- Fixed “No response” display issue for template render errors in request logs
- Improved image evaluation algorithm accuracy for visual content comparison
- Enhanced workspace member invitation dialog with better field validation
- Fixed chat message ordering when importing request logs with tool calls
March 04, 2026
New Features
Google File Search Tool Support
Native integration with Google’s File Search tool for Gemini models, enabling document-based context retrieval.
- Create and manage file search stores directly in the PromptLayer UI
- Upload documents to stores and associate them with prompts in the playground
- Documents are automatically indexed for semantic search during conversations
- Grounding metadata shows which documents were referenced in responses
- Configure MCP servers and tools through the built-in tools dialog
- Available for OpenAI models that support function calling
- Tool responses appear inline in conversation history
- Author information displayed for prompts, datasets, evaluations, and notifications
- Filter resources by creator in the unified registry
- “Open Original Session” button on run requests links back to the source playground session
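The File Search integration above amounts to attaching a store to a Gemini request. This is a hypothetical sketch: the field names (`file_search`, `file_search_store_names`) follow Google's GenAI tool configuration, but they are assumptions here, not PromptLayer's exact schema, and the store name is illustrative:

```python
def gemini_file_search_config(store_name: str, model: str = "gemini-2.5-flash") -> dict:
    """Sketch a request config that grounds a Gemini model on a file search store.

    Field names are assumptions based on Google's GenAI API, not PromptLayer's schema.
    """
    return {
        "model": model,
        "tools": [
            # The store holds the uploaded, semantically indexed documents
            {"file_search": {"file_search_store_names": [store_name]}}
        ],
    }

cfg = gemini_file_search_config("fileSearchStores/my-docs-store")
```

Responses grounded this way carry grounding metadata identifying which stored documents were referenced, which is what the UI surfaces.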
Improvements
- Added support for Claude Sonnet 4.5 on Amazon Bedrock
- Added support for Gemini 3.1 Flash Lite model
- Debounced playground input variable parsing to reduce API calls during typing
- Fixed issue where deleted file stores could still be selected in the UI
- Improved search indexing with deduplication to prevent duplicate results
- Redesigned settings navigation with clearer organization and visual hierarchy
- Enhanced vector store management with delete store capability
- Improved file preview URLs for local storage backends with HMAC-signed streaming
March 03, 2026
New Features
Anthropic Structured Output Support
Added JSON Schema support for Anthropic models to enforce structured responses.
- Configure response_format with a JSON Schema in prompt templates for Claude models
- Automatically converts to Anthropic’s output_config format
- Also supported for Claude models running on AWS Bedrock
- View all workspaces and roles for each organization member in a detailed side panel
- Filter members by workspace, role, or search by name/email
- Members can now remove themselves from organizations without owner permissions
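The response_format-to-output_config conversion for Anthropic models can be sketched as below. The input mirrors the OpenAI-style json_schema response_format; the exact layout of Anthropic's output config is an assumption here:

```python
def to_anthropic_output_config(response_format: dict) -> dict:
    """Convert an OpenAI-style json_schema response_format to an Anthropic-style
    output config. Field layout is illustrative, not Anthropic's exact schema."""
    if response_format.get("type") != "json_schema":
        raise ValueError("only json_schema response formats are converted")
    schema = response_format["json_schema"]
    return {
        "format": {
            "type": "json_schema",
            "schema": schema["schema"],  # the JSON Schema the model must satisfy
        }
    }

rf = {
    "type": "json_schema",
    "json_schema": {
        "name": "sentiment",
        "schema": {
            "type": "object",
            "properties": {"label": {"type": "string"}},
            "required": ["label"],
        },
    },
}
cfg = to_anthropic_output_config(rf)
```

A single response_format in the prompt template can then drive both OpenAI and Claude (including Bedrock-hosted Claude) without per-provider duplication.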
Improvements
- Fixed score slider to properly handle integer-only scores
- Added workspace search by name in workspace listing
- Improved autocomplete components with better keyboard navigation and multi-select support
- Enhanced request display to show error_type and error_message fields when present
- Added validation for the error_type field in the /track-request endpoint to match /log-request behavior
- Fixed memory leak in scheduled job processing
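The error_type validation for /track-request can be sketched as a simple allow-list check. The accepted values below are illustrative assumptions, not PromptLayer's actual enum:

```python
# Hypothetical allow-list; the real accepted values may differ.
ALLOWED_ERROR_TYPES = {"timeout", "rate_limit", "provider_error", "validation_error"}

def validate_track_request(payload: dict) -> dict:
    """Reject payloads whose error_type is outside the allowed set."""
    error_type = payload.get("error_type")
    if error_type is not None and error_type not in ALLOWED_ERROR_TYPES:
        raise ValueError(f"unsupported error_type: {error_type!r}")
    return payload

ok = validate_track_request({"request_id": 123, "error_type": "timeout"})
```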
March 01, 2026
Improvements
- Conversation simulator now surfaces errors from follow-up turns instead of silently ending conversations, making it easier to diagnose multi-turn evaluation failures
- Request logs with warning status now display partial responses when available, providing visibility into requests that partially succeeded
- Fixed display logic to correctly identify the final assistant response in multi-turn conversations, ensuring request context and actual output are properly distinguished
- Reduced backend test parallelization to improve test stability and reliability
February 28, 2026
New Features
Public API Request Payload Endpoint
The new /api/public/v2/request-payload endpoint lets you retrieve complete request details, including prompt blueprints, token usage, and latency metrics.
- Returns full prompt blueprint structure for easy reproduction
- Includes comprehensive metadata: provider, model, tokens, pricing, and timing
- Supports both JWT and API key authentication
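A call to the endpoint above can be sketched with the standard library. The base URL, query-parameter name, and X-API-KEY header are assumptions; check your PromptLayer credentials setup for the exact scheme:

```python
import urllib.request

def build_request_payload_call(request_id: str, api_key: str) -> urllib.request.Request:
    """Prepare (but do not send) an API-key-authenticated GET for one request's payload."""
    url = f"https://api.promptlayer.com/api/public/v2/request-payload?request_id={request_id}"
    return urllib.request.Request(url, headers={"X-API-KEY": api_key})

req = build_request_payload_call("abc123", "pl_test_key")
# urllib.request.urlopen(req) would return JSON containing the prompt
# blueprint plus provider, model, token counts, pricing, and timing.
```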
Improvements
- Improved Playground reliability on slow network connections by buffering early messages to prevent UI stalls
- Enhanced error handling for WebSocket token refresh failures with better logging for troubleshooting
- Fixed race condition in report cell generation that could cause false failures under high concurrency
- Improved WebSocket connection stability by returning cached tokens when refresh attempts fail
- Enhanced error reporting for messaging service failures with clearer error messages and categorization
February 27, 2026
New Features
OpenAI Images API Support
Full support for OpenAI’s image generation models, including gpt-image-1, gpt-image-1-mini, gpt-image-1.5, dall-e-3, and dall-e-2.
- Configure quality, size, background, output format, and moderation settings directly in the Playground
- Generate multiple images in a single request with the n parameter
- View generated images with revised prompts in dedicated accordion sections
Support for the gemini-3.1-flash-image-preview model for AI-generated images via Google/Vertex AI.
- Customize image size (0.5K to 4K) and aspect ratio (1:1, 16:9, 21:9, and more)
- Includes standard Gemini safety settings and generation parameters
- Extract and analyze content from web pages during conversations
- Matches existing functionality available for OpenAI models
- Automatically recalculates report scores when evaluation criteria are updated
- Prevents score updates on incomplete evaluations
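The image-generation settings listed above map onto an Images API request roughly as follows. Parameter names follow OpenAI's images.generate API for gpt-image-1; the prompt and values are illustrative, and the exact fields exposed per model may differ:

```python
image_request = {
    "model": "gpt-image-1",
    "prompt": "a watercolor lighthouse at dawn",   # illustrative prompt
    "n": 2,                       # generate two images in one request
    "size": "1024x1024",
    "quality": "high",            # low / medium / high for gpt-image-1
    "background": "transparent",
    "output_format": "png",
    "moderation": "auto",
}
```

Older models accept a narrower set (dall-e-3, for example, only supports n=1), which is why these controls are configured per model in the Playground.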
Improvements
- Fixed WebSocket connection timing to establish only after authentication token is available
- Increased message history buffer to 400 messages for improved chat continuity
- Resolved dynamic resolution stack errors in evaluation workflows
- Enhanced Playground sidebar layout with better widget spacing and control bar positioning
- Improved clipboard handling for content copy operations in the editor
- Fixed cost calculations for the nano-banana-2 model
- Streamlined prompt template retrieval logic for better reliability
February 26, 2026
New Features
OpenAI Images API Support
PromptLayer now supports OpenAI’s image generation models, including gpt-image-1, gpt-image-1-mini, gpt-image-1.5, dall-e-3, and dall-e-2.
- Track and log all image generation requests with full parameter support (quality, size, format, moderation)
- View generated images directly in the request logs with revised prompt accordion
- Monitor token-based pricing for new GPT image models
- Added URL context tool support for fetching and processing web content
- Added code execution tool support for running code within model interactions
- Preserved thinking blocks for extended reasoning visibility in responses
- Richer formatting support in chat messages and outputs
- Improved code block rendering with syntax highlighting
- Better handling of complex markdown structures in evaluations and logs
Improvements
- Added human-readable status descriptions in the UI for better request monitoring
- Fixed refresh button behavior in sidebar navigation for consistent state management
- Improved error handling for team member invitations with clearer error messages
- Enhanced clipboard support for copying content from rich text editors
- Fixed prompt analytics page to correctly display evaluations without scores
- Improved evaluation table columns to show more detailed metrics
- Enhanced streaming performance for playground outputs with better state management