Prompt Blueprints are central to PromptLayer’s architecture, enabling you to work seamlessly with multiple LLM providers in a single format. They abstract provider-specific details, allowing you to switch between LLMs without modifying your code.

A Prompt Blueprint is the model-agnostic, standardized schema that PromptLayer uses to store prompts.

Accessing the Prompt Blueprint

Instead of accessing the raw LLM response via response["raw_response"], it’s recommended to use the standardized response["prompt_blueprint"]. This ensures consistency across different providers.

from promptlayer import PromptLayer

promptlayer_client = PromptLayer()  # reads PROMPTLAYER_API_KEY from the environment if no api_key is passed

response = promptlayer_client.run(
    prompt_name="ai-poet",
    input_variables={"topic": "food"},
)

print(response["prompt_blueprint"]["prompt_template"]["messages"][-1]["content"][0]["text"])

With this approach, you can update from one provider to another (e.g., OpenAI to Anthropic) without any code changes.

Placeholder Messages

Placeholder Messages let you inject messages into a prompt template at runtime. Using the placeholder role, you define positions within your prompt template that are replaced with full messages when the prompt is executed.

For more detailed information on Placeholder Messages, including how to create and use them, please refer to our dedicated Placeholder Messages Documentation page.
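
For illustration, a placeholder inside a chat template might look like the following sketch. The exact blueprint fields for placeholders are specified on that documentation page; the variable name fill_in_message here is just an example, matching the run call below.

{
    "role": "placeholder",
    "name": "fill_in_message"
}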

Running a Template with Placeholders

When running a prompt that includes placeholders, supply the replacement messages through the input variables.

response = promptlayer_client.run(
    prompt_name="template-name",
    input_variables={
        "fill_in_message": [
            {
                "role": "user",
                "content": [{"type": "text", "text": "My age is 29"}],
            },
            {
                "role": "assistant",
                "content": [{"type": "text", "text": "What a wonderful age!"}],
            }
        ]
    },
)

Note: The messages provided must conform to the Prompt Blueprint format.

Prompt Blueprint Message Format

Each message in a Prompt Blueprint should be a dictionary with the following structure:

  • role: The role of the message sender (user, assistant, etc.).
  • content: A list of content items, where each item has:
    • type: The type of content (text, media, etc.).
    • text: The text content (if type is text).

Example Message

{
    "role": "user",
    "content": [{"type": "text", "text": "Hello, how are you?"}],
}

Tools and Function Calling

The Prompt Blueprint supports tool and function calling capabilities. This section demonstrates how to define available tools, handle assistant tool calls, and provide tool responses.

Defining Available Tools

When creating a prompt template, you can specify available tools under the tools field. Each tool definition follows this structure:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                }
            }
        }
    }
]


prompt_template = {
    "type": "chat",
    "messages": messages,
    "tools": tools
}

The parameters field is of particular interest because it specifies the expected input parameters for the function; the LLM provider uses this information to generate the appropriate tool call. Parameters are defined using JSON Schema format. You can read more about how OpenAI uses JSON Schema for defining parameters here, and about how Anthropic uses JSON Schema for defining parameters here.
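
For example, the get_weather parameters above could be fleshed out with standard JSON Schema keywords such as description, enum, and required:

parameters = {
    "type": "object",
    "properties": {
        "location": {
            "type": "string",
            "description": "City and country, e.g. 'Paris, France'",
        },
        "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],  # restrict to known values
        },
    },
    "required": ["location"],  # the model must always supply a location
}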

Assistant Tool Calls

When the assistant decides to use a tool, the response will include a tool_calls field in the message. The format is:

{
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_weather", 
                "arguments": "{\"location\": \"Paris\"}"
            }
        }
    ]
}

  • id uniquely identifies the tool call; it is echoed back in the tool response’s tool_call_id.
  • type is always function.
  • function contains the function details:
    • name tells us which function to call.
    • arguments is a JSON string containing the function’s input parameters.
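
Because arguments arrives as a JSON string, parse it before invoking your function. A minimal sketch, where assistant_message is the message shown above and get_weather is a hypothetical local function:

import json

def get_weather(location):
    # Hypothetical implementation; call a real weather service in practice.
    return {"temperature": 72, "conditions": "sunny", "humidity": 45}

tool_call = assistant_message["tool_calls"][0]
if tool_call["function"]["name"] == "get_weather":
    arguments = json.loads(tool_call["function"]["arguments"])
    result = get_weather(**arguments)  # {"temperature": 72, ...}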

For more information about how PromptLayer structures tool calls, please refer to the schema definition toward the end of this page.

Providing Tool Responses

After executing the requested function, you can provide the result back to the assistant using a “tool” role message. The response should be structured JSON data:

{
    "role": "tool",
    "content": [
        {
            "type": "text",
            "text": "{\"temperature\": 72, \"conditions\": \"sunny\", \"humidity\": 45}"
        }
    ],
    "tool_call_id": "call_abc123"
}
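
Continuing the sketch above, you can build this message by serializing the function’s structured result with json.dumps and echoing the tool call’s id:

import json

# Serialize the structured result into the text field and echo the call id.
tool_message = {
    "role": "tool",
    "content": [{"type": "text", "text": json.dumps(result)}],
    "tool_call_id": tool_call["id"],  # must match the assistant's tool call id
}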

Here is an example of how to log an OpenAI request whose response contains tool calls:

import time

from openai import OpenAI
from promptlayer import PromptLayer

client = OpenAI()
promptlayer_client = PromptLayer()  # reads PROMPTLAYER_API_KEY from the environment
model = "gpt-4o"
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
            },
        },
    }
]
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What's the weather like in Paris today?"}
        ],
    }
]
prompt_template = {
    "type": "chat",
    "messages": messages,
    "tools": tools,
}

request_start_time = time.time()
completion = client.chat.completions.create(
    model=model,
    messages=prompt_template["messages"],
    tools=prompt_template["tools"],
)
request_end_time = time.time()
print(completion.choices[0].message.tool_calls)

promptlayer_client.log_request(
    provider="openai",
    model=model,
    input=prompt_template,
    output={
        "type": "chat",
        "messages": [
            {
                "role": "assistant",
                "tool_calls": [
                    tool_call.model_dump()
                    for tool_call in completion.choices[0].message.tool_calls
                ],
            }
        ],
    },
    input_tokens=completion.usage.prompt_tokens,
    output_tokens=completion.usage.completion_tokens,
    request_start_time=request_start_time,
    request_end_time=request_end_time,
)

Multi-Modal Variables

PromptLayer supports any number of modalities in a single prompt. You can include text, images, videos, and other media types in your prompt templates.

The media_variable content type allows you to dynamically insert a list of media items into prompt template messages.

A media_variable is nested within the message content. The type and name fields are required: type marks the content item as a media variable, and name identifies the input variable holding the list of media items to insert.

{
    "role": "user",
    "content": [
        {
            "type": "media_variable",
            "name": "media"
        }
    ]
}

When defining a prompt template, you can specify a media_variable to dynamically include media in your messages.

Running with Media Variables

response = pl_client.run(
    prompt_name="image-prompt",
    input_variables={
        "media": [
            "https://example.com/image1.jpg",
            "https://example.com/image2.jpg"
        ]
    },
)

print(response)

Note that media is a list of strings; each entry can be either a public URL or a base64-encoded string.
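
If your media is not publicly hosted, you can pass base64 strings instead. Below is a minimal sketch of encoding a local image file; whether a bare base64 string or a full data URL is expected may depend on the provider, so check the API reference.

import base64

# Read a local image and encode it as a base64 string (assumption: a bare
# base64 string is accepted; some providers may require a data URL prefix).
with open("image1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = pl_client.run(
    prompt_name="image-prompt",
    input_variables={"media": [image_b64]},
)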

Prompt Blueprint Schema

  • prompt_template (object, required)
  • metadata (object | null)