In this tutorial, we will guide you through the process of data-driven prompt engineering.

PromptLayer provides a unified interface for working with different language models, making and scoring requests, and tracking your prompts and requests. It supports a variety of models from providers such as OpenAI, Anthropic, and Hugging Face.

Whether you’re a data scientist, a machine learning engineer, or a developer, PromptLayer can help you manage your language models more effectively and efficiently. Let’s get started!

Setting Up Your Environment

Before we get started, we need to load our environment variables from a .env file. This file should contain your API keys for PromptLayer and OpenAI.

Get your PromptLayer API key through our dashboard by signing up at www.promptlayer.com

.env
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
PROMPTLAYER_API_KEY=<YOUR_PROMPTLAYER_API_KEY>

We can load these variables using the dotenv package:

from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv('.env')

Once we have loaded our environment variables, we can import the PromptLayer client and set it up with our API key. Make sure to pip install promptlayer first.
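
For reference, both dependencies used so far can be installed with pip (python-dotenv is the package that provides the dotenv import):

pip install promptlayer python-dotenv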

from promptlayer import PromptLayer
import os
promptlayer_client = PromptLayer(api_key=os.environ.get("PROMPTLAYER_API_KEY"))

Making Your First Request

PromptLayer is, at its core, a REST API. Using our Python SDK is equivalent to making requests to that API directly, just a little easier.

Because latency is so important, the best way to use PromptLayer is to first make your request to OpenAI and then log the request to PromptLayer. This is how our Python SDK works under the hood.
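
As a rough sketch of that pattern, you could call OpenAI directly and then post the result to PromptLayer's track-request REST endpoint yourself. Treat the payload field names below as an assumption based on the REST docs rather than an exact spec; in practice the SDK handles these details for you:

import os
import time

import requests
from openai import OpenAI

client = OpenAI()  # plain OpenAI client, no PromptLayer wrapper

# 1) Make the OpenAI request first, timing it
request_start_time = time.time()
response = client.completions.create(model="gpt-3.5-turbo-instruct", prompt="My name is")
request_end_time = time.time()

# 2) Then log the finished request to PromptLayer after the fact
requests.post(
    "https://api.promptlayer.com/rest/track-request",
    json={
        "function_name": "openai.completions.create",
        "kwargs": {"model": "gpt-3.5-turbo-instruct", "prompt": "My name is"},
        "request_response": response.model_dump(),
        "request_start_time": request_start_time,
        "request_end_time": request_end_time,
        "api_key": os.environ.get("PROMPTLAYER_API_KEY"),
    },
)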

If you are used to working with the Python SDK, all you will need to do is swap out your import openai for openai = promptlayer_client.openai. The rest of your code stays the same!

In this step we’ll make a simple request to OpenAI’s gpt-3.5-turbo-instruct completion model to generate a response for the prompt “My name is”.

# Instead of `import openai` we will use
OpenAI = promptlayer_client.openai.OpenAI
openai = OpenAI()

# Make a completion to OpenAI
response = openai.completions.create(
  model="gpt-3.5-turbo-instruct", 
  prompt="My name is",
)
print(response.choices[0].text)

The response you’ll see should be a continuation of the prompt, such as “John. Nice to meet you, John.”

Refresh the dashboard and voilà! ✨ Request log screenshot

Enriching Requests

Enriching requests often requires a PromptLayer request ID. All PromptLayer requests have unique IDs, and these can be optionally returned when logging the request.
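
For example, passing return_pl_id=True to a call made through the wrapped client makes the SDK return the request ID alongside the response. A minimal sketch, reusing the openai client created above:

response, pl_request_id = openai.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt="My name is",
  return_pl_id=True # also return the PromptLayer request ID
)
print(pl_request_id)  # use this ID to attach scores, metadata, and more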

Tagging a Request

We can also add tags (pl_tags) to our requests to make it easier to search through and organize requests on the dashboard.

Learn more about tags.

response = openai.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt="My name is",
  pl_tags=["getting_started_example"] # 🍰 PromptLayer tags
)

print(response.choices[0].text)

Filter by tags on the dashboard as seen below. Tags filtering screenshot

Scoring a Request

Using PromptLayer, we can score a request with an integer from 0 to 100. This is most often used to understand how effective certain prompts are in production.

Users use scores in many ways (learn more). Below are some examples:

  • 100 if the generated code compiles, 0 if not
  • 100 if the user denotes a thumbs-up, 0 for thumbs-down (see the sketch after this list)
  • LLM synthetic evaluation of how much the output matched the prompt
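
As a sketch of the thumbs-up case, a feedback handler in your application might record the score like this (handle_feedback and thumbs_up are hypothetical names from your own code; only track.score is the PromptLayer call):

# Hypothetical feedback handler: thumbs_up comes from your own UI,
# pl_request_id was returned when the original request was logged
def handle_feedback(pl_request_id, thumbs_up):
    promptlayer_client.track.score(
        request_id=pl_request_id,
        score=100 if thumbs_up else 0,
    )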

Here, we ask for the capital of New York, and then score the response based on whether it contains the correct answer.

To set the score, we make a second request to the PromptLayer API with the request_id and a score.

response, pl_request_id = openai.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt="What is the capital of New York? \n\nThe capital of New York is",
  pl_tags=["getting_started_example"],
  return_pl_id=True # Make sure to set this to True
)

answer = response.choices[0].text
print(answer)
correct_answer = "albany" in answer.lower()

# Log score to 🍰 PromptLayer
promptlayer_client.track.score(
    request_id=pl_request_id,
    score=100 if correct_answer else 0,
)

Scores can also be set visually in the dashboard. Scoring screenshot

Adding Metadata

We can add metadata to a request to store additional information about it. Metadata is a map of string keys to string values.

Metadata is used to associate requests with specific users, track rollouts, and to store things like error messages (maybe from generated code). You can then filter requests & analytics on the PromptLayer dashboard using metadata keys.

Learn more about metadata.
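
At the API level, attaching metadata is a single call with the request ID and a dictionary of strings. A minimal sketch, reusing the pl_request_id returned above (the keys shown are just examples):

# Attach string-to-string metadata to a logged request
promptlayer_client.track.metadata(
    request_id=pl_request_id,
    metadata={
        "user_id": "sdf328",
        "environment": "production",
    }
)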

Here, we make a request to rate how much a person would enjoy a city based on their interests, and then add metadata such as the user’s ID and location:

prompt_template = """You are an AI assistant that helps travelers pick a city to travel to. 
You do this by rating how much a person would enjoy a city based on their interests.
Given a city and interests, you respond with an integer 1-10 where 10 is the most enjoyment and 1 is the least.

Sample city: New York City
Sample interests: food, museums, hiking
Sample answer: 8

City: {city}
Interests: {interests}
Answer: """

response, pl_request_id = openai.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt=prompt_template.format(city="Washington, D.C.", interests="resorts, museums, beaches"),
  pl_tags=["getting_started_example"],
  return_pl_id=True
)

answer = response.choices[0].text
print(answer)

# Let's convert the answer to an int
numeric_answer = None
error_message = None
try:
    numeric_answer = int(answer.strip())
except ValueError as e:
    error_message = str(e)

# Use score in 🍰 PromptLayer to track if answer was an int
promptlayer_client.track.score(
    request_id=pl_request_id,
    score=100 if numeric_answer is not None else 0,
)

print("Numeric answer:", numeric_answer)

# Log metadata for request in 🍰 PromptLayer
promptlayer_client.track.metadata(
    request_id=pl_request_id,
    metadata={
        "referrer": "getting_started.ipynb",
        "origin": "NYC, USA",
        "user_id": "sdf328",
        "error_message": "No error" if numeric_answer else error_message,
    }
)

Now that you have enriched requests with tags & metadata, you can use these features to better sort through requests in the dashboard.

The Analytics page shows high-level graphs and statistics about your usage. You can use metadata keys or tags to filter analytics.

You can also take advantage of our advanced search by using metadata to search in the sidebar.

Prompt Templates

Creating a Prompt in the Registry

We can create a prompt in the PromptLayer Prompt Registry. This allows us to easily reuse this prompt in the future.

After creating a prompt template, we can retrieve it programmatically using the API. The Prompt Registry is often used as a prompt template CMS to avoid blocking prompt changes on eng rollouts.

The Prompt Registry handles versioning: just edit the prompt visually in the dashboard to save a new version. As you can see below, we can retrieve the latest version, a specific version number, or the version behind a release label (such as prod).

city_choice_prompt = promptlayer_client.templates.get("city_choice")
city_choice_prompt_v1 = promptlayer_client.templates.get("city_choice", { "version": 1 })
city_choice_prompt_prod = promptlayer_client.templates.get("city_choice", { "label": "prod" })
print(city_choice_prompt_v1['prompt_template'])

Linking a Prompt to a Request

The Prompt Registry becomes most useful when you start linking requests with prompt template versions. This makes it easy to compare prompt templates across latency, cost, and quality. It also lets you easily understand the input variables and how they change.

Once a prompt is in the registry, we can link it to a request (learn more):

input_variables = {
    "city": "Washington, D.C.", 
    "interests": "resorts, museums, beaches"
}

# Grab the prompt template
city_choice_prompt = promptlayer_client.templates.get("city_choice",
{
  "provider": "openai",
  "input_variables": input_variables
})

response, pl_request_id = openai.completions.create(
  **city_choice_prompt['llm_kwargs'],
  model="gpt-3.5-turbo-instruct", 
  pl_tags=["getting_started_example"],
  return_pl_id=True
)
print("Answer:", response.choices[0].text)

# Associate the request with the prompt template we used
promptlayer_client.track.prompt(request_id=pl_request_id, 
    prompt_name='city_choice', prompt_input_variables=input_variables)

Prompt Template Evaluation

Now that you have created multiple versions of a prompt template and associated it with request logs, navigate back to the Prompt Registry to find statistics about each version.

PromptLayer lets you compare prompt templates across score, latency, and cost. You can also easily see which requests used which templates. Prompt template stats

Using Different Models

In addition to those provided by OpenAI, PromptLayer supports many other providers and model types.

Chat-GPT

Here’s an example of how to use a chat model from OpenAI:

# Reuse the PromptLayer-wrapped OpenAI client from above
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant that helps people bake."},
        {"role": "user", "content": "How do you make a layer cake?"}
    ],
)
print(response.choices[0].message.content)

Anthropic

We can also use models from Anthropic natively with the PromptLayer Python SDK:

# Swap out 'from anthropic import Anthropic'
Anthropic = promptlayer_client.anthropic.Anthropic

anthropic = Anthropic()
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{anthropic.HUMAN_PROMPT} Compose a poem please.{anthropic.AI_PROMPT}",
    pl_tags=["getting-started"]
)
print(completion.completion)

HuggingFaceHub

Here’s an example of using the Falcon-7b model from HuggingFaceHub. By using LangChain with the PromptLayerCallbackHandler, you can access tons of LLMs. Learn more.

from langchain.callbacks import PromptLayerCallbackHandler
from langchain import HuggingFaceHub

falcon = "tiiuae/falcon-7b-instruct"

llm = HuggingFaceHub(
    repo_id=falcon, 
    huggingfacehub_api_token=os.environ.get("HUGGING_FACE_API_KEY"), 
    model_kwargs={"temperature": 1.0, "max_length": 64}, 
    callbacks=[PromptLayerCallbackHandler(pl_tags=["falcon-7b"])]
)
output = llm("How do you make a layer cake?")
print(output)

And that’s it! With this tutorial, you should now be able to use PromptLayer to work with different language models, make and score requests, and track your prompts and requests. Enjoy using PromptLayer!