Prompt Blueprints are a core concept in PromptLayer that provide a standardized, model-agnostic representation of prompts. They serve as an abstraction layer between your application and each provider's native request and response formats.
Think of Prompt Blueprints as a universal language for LLM interactions that shields your application from provider-specific implementation details.
Instead of accessing the raw LLM response via `response["raw_response"]`, it's recommended to use the standardized `response["prompt_blueprint"]`. This ensures consistency across different providers. With this approach, you can switch from one provider to another (e.g., OpenAI to Anthropic) without any code changes.
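As a sketch of what this looks like in practice, the dictionary below mimics the shape of a PromptLayer run response (the exact nesting may vary by SDK version); the assistant's reply is read through `prompt_blueprint` rather than `raw_response`:

```python
# Illustrative response shape; in real code this dictionary comes back from a
# PromptLayer run call, and "raw_response" holds the provider-specific payload.
response = {
    "raw_response": {"provider_specific": "..."},  # avoid coupling to this
    "prompt_blueprint": {
        "prompt_template": {
            "messages": [
                {"role": "assistant", "content": [{"type": "text", "text": "Hello!"}]}
            ]
        }
    },
}

# Provider-agnostic access: read the last message from the standardized blueprint.
last_message = response["prompt_blueprint"]["prompt_template"]["messages"][-1]
assistant_text = last_message["content"][0]["text"]
```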
Placeholder Messages are a powerful feature that allows you to inject messages into a prompt template at runtime. By using the `placeholder` role, you can define placeholders within your prompt template that are replaced with full messages when the prompt is executed.
For more detailed information on Placeholder Messages, including how to create and use them, please refer to our dedicated Placeholder Messages Documentation page.
When running a prompt that includes placeholders, you need to supply the messages that will replace the placeholders in the input variables.
Note: The messages provided must conform to the Prompt Blueprint format.
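For example, here is a minimal sketch of the input variables you might pass when a template contains a placeholder named `history` (a hypothetical name); each injected message follows the Prompt Blueprint format:

```python
# Hypothetical placeholder name "history"; its value is a list of full
# Prompt Blueprint messages that replace the placeholder at runtime.
input_variables = {
    "history": [
        {"role": "user", "content": [{"type": "text", "text": "What is PromptLayer?"}]},
        {"role": "assistant", "content": [{"type": "text", "text": "A prompt management platform."}]},
    ]
}

# These variables would then be passed to the run call, e.g.:
# promptlayer_client.run(prompt_name="my-template", input_variables=input_variables)
```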
Each message in a Prompt Blueprint should be a dictionary with the following structure:

- `role`: The role of the message sender (`user`, `assistant`, etc.).
- `content`: A list of content items, where each item has:
  - `type`: The type of content (`text`, `thinking`, `media`, etc.).
  - `text`: The text content (if `type` is `text`).
  - `thinking`: The thinking content (if `type` is `thinking`).
  - `signature`: The signature content (if `type` is `thinking`).
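Put together, a minimal user message in this format looks like the following (the text itself is just an example):

```python
# A user message with a single text content item, following the structure above.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize the attached report."},
    ],
}
```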
The Prompt Blueprint supports tool and function calling capabilities. This section demonstrates how to define available tools, handle assistant tool calls, and provide tool responses.
When creating a prompt template, you can specify available tools under the `tools` field. Each tool definition follows this structure:
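For instance, a hypothetical `get_weather` tool (the name and parameters are illustrative, not part of PromptLayer) could be defined as:

```python
# Hypothetical tool definition; "parameters" is a JSON Schema object that the
# LLM provider uses to construct valid tool-call arguments.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```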
The `parameters` field specifies the expected input parameters for the function; the LLM provider uses this information to generate the appropriate tool call. You define `parameters` using JSON Schema format. You can read more about how OpenAI uses JSON Schema for defining parameters here, and about how Anthropic uses JSON Schema for defining parameters here.
When the assistant decides to use a tool, the response will include a `tool_calls` field in the message. The format is:
- `id` is used by the assistant to track the tool call.
- `type` is always `function`.
- `function` contains the function details:
  - `name` tells us which function to call.
  - `arguments` is a JSON string containing the function's input parameters.

For more information about how PromptLayer structures tool calls, please refer to the schema definition towards the end of this page.
After executing the requested function, you can provide the result back to the assistant using a “tool” role message. The response should be structured JSON data:
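Here is a sketch of such a tool response, assuming an OpenAI-style `tool_call_id` field that echoes the call's `id` (check the schema definition at the end of this page for the exact field names); the weather values are made up:

```python
import json

# Hypothetical function result, serialized as JSON inside a text content item.
result = {"city": "Paris", "temperature": 18, "unit": "celsius"}
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # must match the assistant's tool call id
    "content": [
        {"type": "text", "text": json.dumps(result)},
    ],
}
```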
Here is an example of how to log a request with tool calls and responses using OpenAI:
PromptLayer supports any number of modalities in a single prompt. You can include text, images, videos, and other media types in your prompt templates.
The `media_variable` content type allows you to dynamically insert a list of media items into prompt template messages.
The `media_variable` is nested within the message content. The `type` and `name` fields are required, specifying the type of content and the name of the variable, respectively. The `name` identifies the input variable holding the list of media items to be dynamically inserted.
When defining a prompt template, you can specify a `media_variable` to dynamically include media items in your messages.
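As a sketch, a template message with a media variable named `product_photos` (a hypothetical name) could look like:

```python
# The media_variable item sits alongside other content items; "name" refers to
# the input variable whose media list is inserted at runtime.
template_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe these product photos:"},
        {"type": "media_variable", "name": "product_photos"},
    ],
}

# At runtime, the matching input variable would be a list of URLs or base64 strings:
# {"product_photos": ["https://example.com/shoe.jpg", "data:image/png;base64,..."]}
```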
`media` is a list of strings; each entry can be either a public URL or a base64-encoded string.

Prompt Blueprints can be configured to produce structured outputs that follow a specific format defined by JSON Schema. This ensures consistent response formats that are easier to parse and integrate with your applications.
For detailed information on creating and using structured outputs with your prompt templates, see our Structured Outputs documentation.
The schema is of type `object`.
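For example, a hypothetical output schema for sentiment classification (the field names are illustrative) would have `object` at the top level:

```python
# Hypothetical JSON Schema for a structured output; the top-level type is "object".
output_schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}
```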