Traces are a powerful feature in PromptLayer that allow you to monitor and analyze the execution flow of your applications, including LLM requests. Built on OpenTelemetry, Traces provide detailed insights into function calls, their durations, inputs, and outputs.

Overview

Traces in PromptLayer offer a comprehensive view of your application’s performance and behavior. They allow you to:

  • Visualize the execution flow of your functions
  • Track LLM requests and their associated metadata
  • Measure function durations and identify performance bottlenecks
  • Inspect function inputs and outputs for debugging

Note: The left menu in the PromptLayer UI shows only root spans, which represent the entry function of your program. While a program is running, you might not see all of its spans in the UI immediately, even though child spans are being sent to the backend; the root span, along with all of its child spans, appears in the UI only once the program completes. This is most noticeable in long-running programs or those with complex execution flows.

Automatic LLM Request Tracing

When you initialize the PromptLayer class with enable_tracing set to True, PromptLayer will automatically track any LLM calls made using the PromptLayer library. This allows you to capture detailed information about your LLM requests, including:

  • Model used
  • Input prompts
  • Generated responses
  • Request duration
  • Associated metadata

Once PromptLayer is initialized with tracing enabled, you can use the run() method to execute prompts. All LLM calls made through this method will be automatically traced, providing detailed insights into your prompt executions.
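For example, here is a minimal sketch of that flow. It assumes your API key is available to the client (for instance via the PROMPTLAYER_API_KEY environment variable) and that a prompt named simple-greeting exists in your Prompt Registry:

from promptlayer import PromptLayer

# Construct the client with tracing enabled
pl_client = PromptLayer(enable_tracing=True)

# Run a prompt from the Prompt Registry; the underlying LLM call is traced automatically
response = pl_client.run(
    prompt_name="simple-greeting",
    input_variables={}
)

print(response["prompt_blueprint"]["prompt_template"]["messages"][-1])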

Custom Function Tracing

In addition to automatic LLM request tracing, you can use the traceable decorator (for Python) or wrapWithSpan (for JavaScript) to explicitly create spans for your own functions, capturing their inputs, outputs, and durations.

Setting Custom Span Names

When tracing functions, you may want to give your spans custom, more descriptive names. Both the Python and JavaScript PromptLayer libraries support this.

Python

In Python, you can set a custom span name by passing the name parameter to the traceable decorator:

@pl_client.traceable(name="CustomGreeting")
def greet(name):
    return f"Hello, {name}!"

result = greet("Alice")
print(result)

If you don’t provide a name parameter, the span will use the function’s name by default.
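For example, the following span will be named greet, after the decorated function:

@pl_client.traceable()
def greet(name):
    return f"Hello, {name}!"

# The span for this call is recorded under the name "greet"
result = greet("Alice")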

JavaScript

In JavaScript, you can set a custom span name by passing it as the first argument to the wrapWithSpan function:

const greet = promptlayer.wrapWithSpan('CustomGreeting', (name) => {
    return `Hello, ${name}!`;
});

const result = greet("Alice");
console.log(result);

To use the function’s own name as the span name, simply pass it as a string:

const greet = promptlayer.wrapWithSpan('greet', (name) => {
    return `Hello, ${name}!`;
});

Creating Parent Spans and Grouping Function Calls

To create a parent span and group multiple function calls within it, you can use the traceable decorator on a main function that calls other traced functions. Here’s an example that demonstrates this concept:

from promptlayer import PromptLayer

# Initialize PromptLayer with tracing enabled
pl_client = PromptLayer(enable_tracing=True)

@pl_client.traceable(name="custom-span")
def main():
    # This function will be the parent span
    openai_call()
    anthropic_call()
    custom_function()
    run_prompt()

@pl_client.traceable()
def openai_call():
    OpenAI = pl_client.openai.OpenAI
    openai = OpenAI()
    template = pl_client.templates.get("simple-greeting")
    response, pl_request_id = openai.chat.completions.create(
        **template["llm_kwargs"],
        return_pl_id=True
    )
    print("OpenAI response:", response.choices[0].message.content)

@pl_client.traceable()
def anthropic_call():
    anthropic = pl_client.anthropic.Anthropic()
    # Add your Anthropic API call here
    print("Anthropic call executed")

@pl_client.traceable()
def custom_function():
    # This is a custom function that will be traced
    result = "Custom function executed"
    print(result)
    return result

@pl_client.traceable()
def run_prompt():
    response = pl_client.run(
        prompt_name="simple-greeting",
        input_variables={}
    )
    print("Prompt response:", response["prompt_blueprint"]["prompt_template"]["messages"][-1])

if __name__ == "__main__":
    main()