Use the `log_request` method when you are not executing prompts through `pl_client.run()`. For full details of the `log_request` API, see the Log Request API Reference.
- `provider` (required): The LLM provider name (e.g., "openai", "anthropic")
- `model` (required): The specific model used (e.g., "gpt-4o", "claude-3-7-sonnet-20250219")
- `input` (required): The input prompt in Prompt Blueprint format
- `output` (required): The model response in Prompt Blueprint format
- `request_start_time`: Timestamp when the request started
- `request_end_time`: Timestamp when the response was received
- `prompt_name`: Name of the prompt template, if using one from PromptLayer
- `prompt_id`: Unique identifier for the prompt template
- `prompt_version_number`: Version number of the prompt template
- `prompt_input_variables`: Variables used in the prompt template
- `input_tokens`: Number of input tokens used
- `output_tokens`: Number of output tokens generated
- `tags`: Array of strings for categorizing requests
- `metadata`: Custom JSON object for searching and filtering requests later

`input` and `output` must be in Prompt Blueprint format:
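The parameters above can be assembled into a single call. The sketch below builds an illustrative payload; the exact Prompt Blueprint message shape and the SDK setup (`PromptLayer`, `pl_client.log_request`) are assumptions based on the parameter list, not a verified schema:

```python
from datetime import datetime, timezone

# Record timing around the (omitted) provider call.
request_start_time = datetime.now(timezone.utc).timestamp()
# ... call your LLM provider here ...
request_end_time = datetime.now(timezone.utc).timestamp()

# Illustrative Prompt Blueprint-shaped dicts (assumed structure).
input_blueprint = {
    "type": "chat",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "What is 2 + 2?"}]}
    ],
}
output_blueprint = {
    "type": "chat",
    "messages": [
        {"role": "assistant", "content": [{"type": "text", "text": "4"}]}
    ],
}

payload = {
    "provider": "openai",            # required
    "model": "gpt-4o",               # required
    "input": input_blueprint,        # required, Prompt Blueprint format
    "output": output_blueprint,      # required, Prompt Blueprint format
    "request_start_time": request_start_time,
    "request_end_time": request_end_time,
    "tags": ["docs-example"],
    "metadata": {"user_id": "123"},  # searchable/filterable later
    "input_tokens": 12,
    "output_tokens": 1,
}

# With the SDK configured, the payload would be passed as keyword arguments:
# from promptlayer import PromptLayer
# pl_client = PromptLayer(api_key="pl_...")
# pl_client.log_request(**payload)
```

The required fields (`provider`, `model`, `input`, `output`) must always be present; the timing, token, and tagging fields are optional but make requests easier to search and analyze later.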