Every PromptLayer log has a unique PromptLayer Request ID (pl_request_id).

All tracking in PromptLayer is based on the pl_request_id. This identifier is needed to enrich logs with metadata, scores, associated prompt templates, and more.
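
For example, once you have a pl_request_id in hand (see the retrieval methods below), you can attach metadata to that log. Here is a minimal sketch using the Python SDK's track module; treat the exact method and field names as assumptions to verify against your SDK version:

from promptlayer import PromptLayer
promptlayer_client = PromptLayer()

# pl_request_id comes from any of the retrieval methods below.
promptlayer_client.track.metadata(
    request_id=pl_request_id,
    metadata={"user_id": "user_123"},  # example metadata; arbitrary string pairs
)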

You can quickly grab a request ID from the web UI, or retrieve it programmatically as described below.

REST API

When using the REST API's /track-request endpoint, the pl_request_id is returned as request_id: on a successful call, request_id is a key in the JSON object of the track-request response.

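For instance, a minimal Python sketch of a /track-request call; the endpoint path and payload fields below follow the REST docs pattern, but verify them against the current API reference:

import requests

payload = {
    "function_name": "openai.Completion.create",
    "kwargs": {"engine": "gpt-3.5-turbo-instruct", "prompt": "hi my name is "},
    "request_response": {"choices": [{"text": "Sam"}]},  # the raw LLM response
    "request_start_time": 1693000000.0,  # unix timestamps
    "request_end_time": 1693000001.0,
    "api_key": "pl_...",  # your PromptLayer API key
}

r = requests.post("https://api.promptlayer.com/rest/track-request", json=payload)
pl_request_id = r.json()["request_id"]  # present on a successful request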

Python

To retrieve the pl_request_id using OpenAI in Python (through promptlayer_client.openai.OpenAI), set the argument return_pl_id to True in your function call. The call will then return both the OpenAI response and the pl_request_id. If you are using stream=True, only the final streamed result carries the pl_request_id; for every earlier chunk it is None.

The same is true for Anthropic.

from promptlayer import PromptLayer
promptlayer_client = PromptLayer()

OpenAI = promptlayer_client.openai.OpenAI
client = OpenAI()

# return_pl_id=True makes the call return a (response, pl_request_id) tuple.
response, pl_request_id = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="hi my name is ",
    return_pl_id=True,
)
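
For the streaming case, here is a sketch of the same call with stream=True, assuming each streamed item is a (chunk, pl_request_id) pair with the id present only on the final one, as described above:

for chunk, pl_request_id in client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="hi my name is ",
    stream=True,
    return_pl_id=True,
):
    # pl_request_id is None for every chunk except the last.
    if pl_request_id is not None:
        print("pl_request_id:", pl_request_id)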

JavaScript

To retrieve the pl_request_id using OpenAI in JavaScript (through promptLayerClient.OpenAI), set the argument return_pl_id to true in your function call. The call will then return both the OpenAI response and the pl_request_id. If you are using stream: true, only the final streamed result carries the pl_request_id; for every earlier chunk it is undefined.

The same is true for Anthropic.

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer();

const OpenAI = promptLayerClient.OpenAI;
const client = new OpenAI();

// return_pl_id: true makes the call resolve to [response, pl_request_id].
const [response, pl_request_id] = await client.completions.create({
    model: "gpt-3.5-turbo-instruct",
    prompt: "hi my name is ",
    return_pl_id: true
});

LangChain

With Callback (newer)

For the LangChain PromptLayer integration, pass a pl_id_callback function to PromptLayerCallbackHandler. The callback receives the pl_request_id once the request is logged, so you can use it however you need. Here’s an example:

from langchain.callbacks import PromptLayerCallbackHandler
from langchain.llms import OpenAI

def pl_id_callback(pl_request_id):
    print(pl_request_id)

llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    callbacks=[
        PromptLayerCallbackHandler(
            pl_id_callback=pl_id_callback, 
            pl_tags=["langchain"]
        )
    ],
)

llm("How tall are you?")

With Specific Models (older)

For the LangChain PromptLayer integration, set the argument return_pl_id to True when instantiating your model. This adds the PromptLayer request ID to the generation_info field of each Generation returned by .generate or .agenerate. Here’s an example:

from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)

llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
    # Each Generation carries the PromptLayer request ID in generation_info.
    print("pl request id: ", res[0].generation_info["pl_request_id"])