Prompt Registry Overview
The Prompt Registry allows you to easily manage your prompt templates, which are customizable prompt strings with placeholders for variables.
Specifically, a prompt template is your prompt string with variables indicated in curly brackets (e.g., This is a prompt by {author_name}). Prompt templates can have tags and are uniquely named.
You can use this tool to programmatically retrieve and publish prompts (even at runtime!). That is, this registry makes it easy to start A/B testing your prompts. Viewed as a “Prompt Management System”, this registry allows your org to pull out and organize the prompts that are currently dispersed throughout your codebase.
Collaboration
The Prompt Registry is perfect for engineering teams looking to organize & track their many prompt templates.
… but the real power of the registry is collaboration. Engineering teams waste cycles deploying prompts and content teams are often blocked waiting on these deploys.
By programmatically pulling down prompt templates at runtime, product and content teams can visually update & deploy prompt templates without waiting on eng deploys.
We know quick feedback loops are important, but the Prompt Registry also makes those annoying last-minute prompt updates easy.
Comments
The Prompt Registry supports threaded comments on prompt versions, enabling collaborative discussions about edits and ideas. This feature facilitates communication between team members, allowing them to share feedback, suggest improvements, and document the reasoning behind prompt changes. By fostering collaboration through comments, teams can make more informed decisions and maintain a clear history of prompt evolution.
Getting a Template
The Prompt Registry is designed to be used at runtime to pull the latest prompt template for your request.
It’s simple.
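For example, here's a rough sketch using the Python SDK (the client setup, the template name my_template, and the prompt_template key on the response are illustrative assumptions):

```python
from promptlayer import PromptLayer

# Initialize the client with your PromptLayer API key.
promptlayer_client = PromptLayer(api_key="pl_...")

# Fetch the latest version of the template by its unique name.
template = promptlayer_client.templates.get("my_template")
print(template.get("prompt_template"))
```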
Alternatively, use the REST API endpoint /prompt-templates/{prompt_name} (read more).
By Release Label
Release labels like prod and staging can optionally be applied to template versions and used to retrieve the template.
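A hedged sketch of fetching by release label with the Python SDK (the label parameter name and the prod label are assumptions):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Retrieve whichever version currently carries the "prod" release label.
prod_template = promptlayer_client.templates.get(
    "my_template",
    {"label": "prod"},  # assumed parameter name for release-label lookup
)
```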
By Version
You can also optionally pass version to get an older version of a prompt. By default, the newest version of a prompt is returned.
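A hedged sketch of pinning to an older version (the version parameter name is an assumption):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Pin to version 2 instead of the latest version.
template_v2 = promptlayer_client.templates.get(
    "my_template",
    {"version": 2},  # assumed parameter name for version pinning
)
```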
Metadata
When fetching a prompt template, you can view your metadata using the following code snippet:
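A minimal sketch, assuming the dictionary returned by templates.get exposes a metadata key:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

template = promptlayer_client.templates.get("my_template")

# Inspect the metadata attached to this template version.
print(template.get("metadata"))
```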
Formatting
PromptLayer can format and convert your prompt to the correct LLM format. You can do this by passing the arguments provider and input_variables.
Currently, we support a provider type of either openai or anthropic.
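A hedged sketch of provider formatting (the llm_kwargs key on the response and the author_name variable are assumptions):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Render the template in OpenAI's format, substituting the input variables.
formatted = promptlayer_client.templates.get(
    "my_template",
    {
        "provider": "openai",
        "input_variables": {"author_name": "Ada"},
    },
)
print(formatted.get("llm_kwargs"))  # assumed key holding the provider-ready payload
```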
Publishing a Template
You can use the UI or API to create a template.
Templates are unique by name, which means that publishing a template with the same name will overwrite old templates.
String Formats
You may choose one of the two supported template formats (f-string or jinja2) to declare variables. f-string allows you to declare variables using curly brackets ({variable_name}), while jinja2 allows you to declare variables using double curly brackets ({{variable_name}}).
Visually
To create a template using the UI, simply navigate to the Registry and click “Create Template”. This will allow you to create the template visually. You can also edit old templates from the UI.
Rename a template by triple-clicking on its name. It’s best practice to keep template names unique and lowercase.
Programmatically
While it’s easiest to publish prompt templates visually through the dashboard, some users prefer the programmatic interface detailed below.
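A hedged sketch of publishing with the Python SDK; the chat-style prompt_template body shown here is an assumption, so check the REST reference for the authoritative schema:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

promptlayer_client.templates.publish({
    "prompt_name": "my_template",
    "prompt_template": {
        # Illustrative chat-style body; verify against the REST reference.
        "type": "chat",
        "messages": [
            {
                "role": "system",
                "content": [{"type": "text", "text": "This is a prompt by {author_name}"}],
            }
        ],
    },
    "tags": ["example"],
})
```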
Release Labels
Prompt labels are a way to put a label on your prompt template to help you organize and search for them. This enables you to get a specific version of a prompt using the label. You can add as many labels as you want to a prompt template, with one restriction: the label must be unique across all versions. This means that you cannot have a label called prod on both version 1 and version 2 of a prompt template. This restriction is in place to prevent confusion when searching for prompt templates.
You can also set release labels via the SDK.
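A hedged sketch, assuming templates.publish accepts a release_labels list (the parameter name is an assumption; the dashboard is the confirmed way to manage labels):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

promptlayer_client.templates.publish({
    "prompt_name": "my_template",
    "prompt_template": {
        "type": "chat",
        "messages": [
            {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}
        ],
    },
    "release_labels": ["staging"],  # assumed parameter name
})
```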
Commit Messages
Prompt commit messages allow you to set a brief description (up to 72 characters) on each of your prompt template versions to help you keep track of changes.
You can also retrieve the commit messages through code. You’ll see them when you list all templates.
Or set them by specifying a commit_message arg to templates.publish, like this:
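A hedged sketch (commit_message is the documented argument; the prompt_template body is illustrative):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

promptlayer_client.templates.publish({
    "prompt_name": "my_template",
    "prompt_template": {
        "type": "chat",
        "messages": [
            {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}
        ],
    },
    "commit_message": "Tighten the system prompt",  # keep it under 72 characters
})
```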
They are also available through the following REST API endpoints:
- /prompt-templates/{identifier} (read more)
- /rest/prompt-templates (read more)
Metadata
Custom metadata can be associated with individual prompt template versions. This allows you to set values such as provider, model, temperature, or any other key-value pair for each prompt template version. Please note that the model attribute is reserved for model parameters; avoid putting custom metadata there.
Custom non-model related metadata can be seen and edited through the Prompt Template Version edit page.
You can use our Python SDK to publish a prompt template with metadata. For example, here's how you can set the model metadata along with a custom category metadata:
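A hedged sketch; the nested structure under model (provider, name, parameters) is an assumption, while category stands in for any custom key:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

promptlayer_client.templates.publish({
    "prompt_name": "my_template",
    "prompt_template": {
        "type": "chat",
        "messages": [
            {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}
        ],
    },
    "metadata": {
        # "model" is reserved for model parameters.
        "model": {
            "provider": "openai",
            "name": "gpt-4o",
            "parameters": {"temperature": 0.5},
        },
        # Any other key-value pair is treated as custom metadata.
        "category": "support",
    },
})
```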
Alternatively, use the REST API endpoint /rest/prompt-templates (read more).
You can also use Langchain to create a template, either by pulling it from LangchainHub, creating a custom template, or providing a Python dictionary directly.
Tracking templates
The power of the Prompt Registry comes from associating requests with the template they used. This allows you to track average score, latency, and cost of each prompt template. You can also see the individual requests using the template.
To associate a request with a prompt template and some input variables, use the code below with the pl_request_id for the request.
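A hedged sketch, assuming the SDK's track.prompt helper with request_id, prompt_name, and prompt_input_variables arguments:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# pl_request_id is the ID returned when the original LLM request was logged.
pl_request_id = 12345  # placeholder

promptlayer_client.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",
    prompt_input_variables={"author_name": "Ada"},
)
```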
Alternatively, use the REST API endpoint /rest/track-prompt (read more).
Learn more about tracking templates here.
A/B Testing
The Prompt Registry, combined with our powerful A/B Releases feature, allows you to easily perform A/B tests on your prompts. This feature enables you to test different versions of your prompts in production, safely roll out updates, and segment users.
With A/B Releases, you can:
- Test new prompt versions with a subset of users before a full rollout
- Gradually release updates to minimize risk
- Segment users to receive specific versions (e.g., beta users, internal employees)
Here’s how it works:
- Create multiple versions of a prompt template in the Prompt Registry.
- Use Dynamic Release Labels to split traffic between different prompt versions based on percentages or user segments.
- Retrieve the appropriate prompt version at runtime using the get method with a release label.
CMS systems like the Prompt Registry are useful for A/B testing because they allow you to programmatically manage and publish different versions of a prompt template. By creating multiple versions of a prompt template, you can serve different prompts to different users and measure which version performs better.
Getting all Prompts Programmatically
To get all prompts from the Prompt Registry, you can use the following code snippet:
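The sketch below assumes a templates.all helper on the Python client and a prompt_name key on each returned item:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# List templates (first page by default).
all_templates = promptlayer_client.templates.all()
for template in all_templates:
    print(template.get("prompt_name"))
```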
By default, this returns 30 prompts. You can change this by passing in the per_page argument. For example, to get 100 prompts you can do the following:
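A sketch under the same assumptions, adding the per_page argument:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Request up to 100 templates in a single page.
all_templates = promptlayer_client.templates.all(per_page=100)
```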
You can also define the page you want to get by passing in the page argument. For example, to get the second page of prompts you can do the following:
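A sketch under the same assumptions, adding the page argument:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Fetch the second page of templates.
all_templates = promptlayer_client.templates.all(page=2)
```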
It is important to note that prompt_template represents the latest version of the prompt template.
Alternatively, use the REST API endpoint /prompt-templates (read more).
Using Images as Input Variables
You can dynamically set images into your user prompt by setting an image input variable in your prompt template. This allows you to provide a list of input images to be used in your prompt.
To define an image input variable, simply click the attach icon in the prompt registry and enter a name for the variable in the resulting modal dialog.
You can then use this input variable in your prompt template as you would any other input variable. Keep in mind we expect the image input variable to be a list of image URLs or base64-encoded images.
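A hedged sketch of formatting a prompt with an image input variable (the product_images variable name is hypothetical and would be defined via the attach icon in the UI):

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")

# Pass a list of image URLs (or base64-encoded images) for the
# hypothetical "product_images" image input variable.
formatted = promptlayer_client.templates.get(
    "my_template",
    {
        "provider": "openai",
        "input_variables": {
            "product_images": ["https://example.com/shoe.png"],
        },
    },
)
```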