POST https://api.promptlayer.com/prompt-templates/{identifier}
curl --request POST \
  --url https://api.promptlayer.com/prompt-templates/{identifier} \
  --header 'Content-Type: application/json' \
  --header 'X-API-KEY: <x-api-key>' \
  --data '{
  "version": 1,
  "workspace_id": 123,
  "label": "<string>",
  "provider": "openai",
  "input_variables": {},
  "metadata_filters": {}
}'
{
  "id": 123,
  "prompt_name": "<string>",
  "prompt_template": {
    "content": [
      {
        "type": "text",
        "text": "<string>"
      }
    ],
    "input_variables": [],
    "template_format": "f-string",
    "type": "completion"
  },
  "metadata": {
    "model": {
      "provider": "<string>",
      "name": "<string>",
      "parameters": {}
    },
    "customField": "<string>"
  },
  "commit_message": "<string>",
  "llm_kwargs": {},
  "version": 123
}

Retrieve a prompt template using either the prompt_name or the prompt_id. Optionally, specify version (a version number) or label (a release label like “prod”) to retrieve a specific version. If neither is specified, the latest version is returned.
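
As a minimal sketch, here is the same request made from Python with the requests library; the prompt name "my_template", the label value, and the API key are placeholders:

import requests

# Placeholder prompt name and API key; substitute your own.
response = requests.post(
    "https://api.promptlayer.com/prompt-templates/my_template",
    headers={"Content-Type": "application/json", "X-API-KEY": "<x-api-key>"},
    json={"label": "prod"},  # or {"version": 3} to pin a specific version number
)
template = response.json()
print(template["version"], template["prompt_name"])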

PromptLayer will try to read the model provider from the parameters you attached to the prompt template. You can optionally pass in a provider to override the one set in the Prompt Registry; the llm_kwargs field of the response then contains LLM-specific arguments that can be passed directly into your LLM client. To format the template with input variables, use input_variables.
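
A sketch of that flow in Python, assuming the resolved provider is OpenAI so that llm_kwargs lines up with the chat completions signature; "my_template" and the "user_name" variable are hypothetical placeholders:

import requests
from openai import OpenAI

resp = requests.post(
    "https://api.promptlayer.com/prompt-templates/my_template",
    headers={"Content-Type": "application/json", "X-API-KEY": "<x-api-key>"},
    json={
        "provider": "openai",  # override the provider set in the Prompt Registry
        "input_variables": {"user_name": "Ada"},  # hypothetical variable, for illustration
    },
)
llm_kwargs = resp.json()["llm_kwargs"]

# Per the description above, these provider-specific arguments can be passed
# directly into the LLM client.
client = OpenAI()
completion = client.chat.completions.create(**llm_kwargs)
print(completion.choices[0].message.content)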

Headers

X-API-KEY (string, required)

Path Parameters

identifier (string, required): The identifier can be either the prompt name or the prompt id.

Body (application/json)

version (integer): Version number of the template to retrieve.
workspace_id (integer)
label (string): Release label of the version to retrieve.
provider (string): Overrides the provider set in the Prompt Registry.
input_variables (object): Variables used to format the template.
metadata_filters (object)

Response

200 (application/json): Successful Response. The response is of type object.