POST /log-request

Log Request
curl --request POST \
  --url https://api.promptlayer.com/log-request \
  --header 'Content-Type: application/json' \
  --header 'X-API-KEY: <x-api-key>' \
  --data '
{
  "provider": "<string>",
  "model": "<string>",
  "input": {
    "content": [
      {
        "text": "<string>",
        "type": "text"
      }
    ],
    "input_variables": [],
    "template_format": "f-string",
    "type": "completion"
  },
  "output": {
    "content": [
      {
        "text": "<string>",
        "type": "text"
      }
    ],
    "input_variables": [],
    "template_format": "f-string",
    "type": "completion"
  },
  "request_start_time": "2023-11-07T05:31:56Z",
  "request_end_time": "2023-11-07T05:31:56Z",
  "parameters": {},
  "tags": [],
  "metadata": {},
  "prompt_name": "<string>",
  "prompt_id": 123,
  "prompt_version_number": 1,
  "prompt_input_variables": {},
  "input_tokens": 0,
  "output_tokens": 0,
  "price": 0,
  "function_name": "",
  "score": 0,
  "api_type": "<string>",
  "status": "SUCCESS",
  "error_type": "PROVIDER_TIMEOUT",
  "error_message": "<string>"
}
'
{
  "id": 123,
  "prompt_version": {
    "prompt_template": {
      "content": [
        {
          "text": "<string>",
          "type": "text"
        }
      ],
      "input_variables": [],
      "template_format": "f-string",
      "type": "completion"
    },
    "commit_message": "<string>",
    "metadata": {
      "model": {
        "provider": "<string>",
        "name": "<string>",
        "parameters": {}
      },
      "customField": "<string>"
    }
  },
  "status": "SUCCESS",
  "error_type": "<string>",
  "error_message": "<string>"
}
Log a request to the system. This is useful for logging requests from custom LLM providers.
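The request above can be sketched in Python. This is a minimal example using only the standard library; the helper names (`build_payload`, `log_request`) are illustrative, not part of any PromptLayer SDK.

```python
import json
import urllib.request
from datetime import datetime, timezone

API_URL = "https://api.promptlayer.com/log-request"

def build_payload(provider, model, prompt_text, response_text):
    """Assemble the minimal required body for /log-request."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "provider": provider,
        "model": model,
        "input": {
            "type": "completion",
            "content": [{"type": "text", "text": prompt_text}],
            "input_variables": [],
            "template_format": "f-string",
        },
        "output": {
            "type": "completion",
            "content": [{"type": "text", "text": response_text}],
            "input_variables": [],
            "template_format": "f-string",
        },
        "request_start_time": now,
        "request_end_time": now,
    }

def log_request(payload, api_key):
    """POST the payload with the X-API-KEY header; returns the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-KEY": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("openai", "gpt-4", "Hello", "Hi there!")
```

In practice you would record `request_start_time` before calling your provider and `request_end_time` after, rather than stamping both at once.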

Using Structured Outputs

When logging requests that use structured outputs (JSON schemas), include the schema configuration in the parameters field using the response_format.json_schema structure. Example:
{
  "provider": "openai",
  "model": "gpt-4",
  "api_type": "chat-completions",
  "parameters": {
    "temperature": 0.7,
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "YourSchemaName",
        "schema": {
          "type": "object",
          "properties": {
            "field1": {"type": "string"}
          },
          "required": ["field1"]
        }
      }
    }
  }
}
For complete examples with OpenAI, Anthropic, Google Gemini, and detailed implementation guidance, see: Logging Structured Outputs Guide →
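Building the `parameters` field for a structured-output log can be sketched as follows. The helper function and the schema fields are illustrative; only the `response_format.json_schema` nesting mirrors what the endpoint expects.

```python
def structured_output_parameters(schema_name, schema):
    """Wrap a JSON schema in the response_format shape /log-request expects."""
    return {
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": schema_name, "schema": schema},
        }
    }

# Example schema: a single required string field.
schema = {
    "type": "object",
    "properties": {"field1": {"type": "string"}},
    "required": ["field1"],
}

# Merge structured-output config with ordinary model parameters.
params = {"temperature": 0.7, **structured_output_parameters("YourSchemaName", schema)}
```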

Logging Tools (Function Definitions)

When logging requests that include tool/function definitions, include the tools array directly in the input field. This allows request replay to use the exact tools from the original request. Example:
{
  "provider": "openai",
  "model": "gpt-4o",
  "input": {
    "type": "chat",
    "messages": [
      {
        "role": "user",
        "content": [{"type": "text", "text": "What's the weather in NYC?"}]
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "City name"
              }
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  },
  "output": {
    "type": "chat",
    "messages": [
      {
        "role": "assistant",
        "content": [],
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\": \"NYC\"}"
            }
          }
        ]
      }
    ]
  },
  "request_start_time": "2024-04-03T20:57:25+00:00",
  "request_end_time": "2024-04-03T20:57:26+00:00"
}

Tool Choice Options

You can control which tool the model should use with tool_choice:
  • "auto" - Model decides whether to use a tool
  • "none" - Model will not call any tools
  • "required" - Model must call at least one tool
  • {"type": "function", "function": {"name": "get_weather"}} - Force a specific tool
For logging tool call responses, see the Custom Logging Guide.
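Assembling the `input` field with tool definitions can be sketched like this. The `tool_definition` helper is illustrative; the resulting structure matches the example above.

```python
def tool_definition(name, description, json_schema):
    """OpenAI-style function tool entry for the input.tools array."""
    return {
        "type": "function",
        "function": {"name": name, "description": description, "parameters": json_schema},
    }

weather_tool = tool_definition(
    "get_weather",
    "Get the current weather for a location",
    {
        "type": "object",
        "properties": {"location": {"type": "string", "description": "City name"}},
        "required": ["location"],
    },
)

input_body = {
    "type": "chat",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "What's the weather in NYC?"}]}
    ],
    "tools": [weather_tool],
    "tool_choice": "auto",  # or "none", "required", or a forced-function object
}
```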

Using Extended Thinking / Reasoning

When logging requests that use extended thinking (Anthropic), thinking mode (Google), or reasoning (OpenAI), the configuration must be passed inside the parameters field using provider-specific formats:
Provider | Parameter | Example
Anthropic | thinking | {"thinking": {"type": "enabled", "budget_tokens": 10000}}
Google | thinking_config | {"thinking_config": {"include_thoughts": true, "thinking_budget": 8000}}
OpenAI | reasoning_effort | {"reasoning_effort": "high"}
For complete examples with thinking content blocks and full code samples, see: Logging Extended Thinking Guide →
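The provider-specific shapes in the table above can be captured in a small helper. The budget values are illustrative; only the key names and nesting mirror what each provider expects inside `parameters`.

```python
def thinking_parameters(provider):
    """Return the reasoning/thinking config for /log-request's parameters field."""
    if provider == "anthropic":
        return {"thinking": {"type": "enabled", "budget_tokens": 10000}}
    if provider == "google":
        return {"thinking_config": {"include_thoughts": True, "thinking_budget": 8000}}
    if provider == "openai":
        return {"reasoning_effort": "high"}
    raise ValueError(f"no reasoning parameters known for provider {provider!r}")
```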

Error Tracking

You can log failed or problematic requests using the status, error_type, and error_message fields. This is useful for monitoring error rates, debugging issues, and tracking provider reliability.

Example: Logging a Failed Request

{
  "provider": "openai",
  "model": "gpt-4",
  "api_type": "chat-completions",
  "input": {
    "type": "chat",
    "messages": [{"role": "user", "content": "Hello"}]
  },
  "output": {
    "type": "chat",
    "messages": []
  },
  "request_start_time": "2024-01-15T10:30:00Z",
  "request_end_time": "2024-01-15T10:30:30Z",
  "status": "ERROR",
  "error_type": "PROVIDER_TIMEOUT",
  "error_message": "Request timed out after 30 seconds"
}

Example: Logging a Warning

Use WARNING status for requests that succeeded but had issues (e.g., retries, degraded responses):
{
  "provider": "anthropic",
  "model": "claude-3-sonnet",
  "api_type": "chat-completions",
  "input": {
    "type": "chat",
    "messages": [{"role": "user", "content": "Summarize this"}]
  },
  "output": {
    "type": "chat",
    "messages": [{"role": "assistant", "content": "Summary..."}]
  },
  "request_start_time": "2024-01-15T10:30:00Z",
  "request_end_time": "2024-01-15T10:30:05Z",
  "status": "WARNING",
  "error_type": "PROVIDER_RATE_LIMIT",
  "error_message": "Succeeded after 2 retries due to rate limiting"
}
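Mapping client-side failures onto the `status`, `error_type`, and `error_message` fields could look like the sketch below. The mapping rules are illustrative (your application decides which exceptions map to which error types); the field names and the 1024-character limit come from the schema.

```python
def error_fields(exc=None, retries=0):
    """Build the status/error fields for a /log-request body."""
    if isinstance(exc, TimeoutError):
        return {
            "status": "ERROR",
            "error_type": "PROVIDER_TIMEOUT",
            "error_message": str(exc)[:1024],  # error_message max length is 1024 chars
        }
    if retries > 0:
        # Succeeded eventually, but flag the retries as a WARNING.
        return {
            "status": "WARNING",
            "error_type": "PROVIDER_RATE_LIMIT",
            "error_message": f"Succeeded after {retries} retries due to rate limiting",
        }
    return {"status": "SUCCESS", "error_type": None, "error_message": None}
```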

Headers

X-API-KEY
string
required

API key to authorize the operation.

Body

application/json
provider
string
required
model
string
required
input
Completion Template · object
required
output
Completion Template · object
required
request_start_time
string<date-time>
required
request_end_time
string<date-time>
required
parameters
Parameters · object

Model parameters including temperature, max_tokens, etc. Can also include structured output configuration via response_format.json_schema. See documentation for structured output examples.

tags
string[]
Maximum string length: 512
metadata
Metadata · object

Custom key-value pairs for tracking additional request information. Keys are limited to 1024 characters.

prompt_name
string | null
prompt_id
integer | null

The ID of the prompt template used for this request. This is useful for tracking which prompt was used in the request.

prompt_version_number
integer | null
Required range: x > 0
prompt_input_variables
Prompt Input Variables · object
input_tokens
integer
default:0
Required range: x >= 0
output_tokens
integer
default:0
Required range: x >= 0
price
number
default:0
Required range: x >= 0
function_name
string
default:""
score
integer
default:0
Required range: 0 <= x <= 100
api_type
string | null
status
enum<string>
default:SUCCESS

Request status.

Value | Description
SUCCESS | Request completed successfully (default)
WARNING | Request succeeded but had issues (e.g., retries, degraded response)
ERROR | Request failed
Available options:
SUCCESS,
WARNING,
ERROR
error_type
enum<string> | null

Categorized error type.

Value | Description | Allowed Statuses
PROVIDER_RATE_LIMIT | Rate limit hit on provider API | WARNING, ERROR
PROVIDER_QUOTA_LIMIT | Account quota or spending limit exceeded | WARNING, ERROR
VARIABLE_MISSING_OR_EMPTY | Required template variable was missing or empty | WARNING
PROVIDER_TIMEOUT | Request timed out | ERROR
PROVIDER_AUTH_ERROR | Authentication failed with provider | ERROR
PROVIDER_ERROR | General provider-side error | ERROR
TEMPLATE_RENDER_ERROR | Failed to render prompt template | ERROR
UNKNOWN_ERROR | Uncategorized error | WARNING, ERROR
Available options:
PROVIDER_TIMEOUT,
PROVIDER_QUOTA_LIMIT,
PROVIDER_RATE_LIMIT,
PROVIDER_AUTH_ERROR,
PROVIDER_ERROR,
TEMPLATE_RENDER_ERROR,
VARIABLE_MISSING_OR_EMPTY,
UNKNOWN_ERROR
error_message
string | null

Detailed error message describing what went wrong. Maximum 1024 characters.

Maximum string length: 1024

Response

Successful Response

id
integer
required
prompt_version
PromptVersion · object
required
status
enum<string>

Request status indicating success, warning, or error.

Available options:
SUCCESS,
WARNING,
ERROR
error_type
string | null

Categorized error type if status is WARNING or ERROR.

error_message
string | null

Detailed error message if status is WARNING or ERROR.