# Traces
Traces are a powerful feature in PromptLayer that lets you monitor and analyze the execution flow of your applications, including LLM requests. Built on OpenTelemetry, Traces provide detailed insight into function calls, their durations, inputs, and outputs.
## Overview
Traces in PromptLayer offer a comprehensive view of your application’s performance and behavior. They allow you to:
- Visualize the execution flow of your functions
- Track LLM requests and their associated metadata
- Measure function durations and identify performance bottlenecks
- Inspect function inputs and outputs for debugging
Note: The left menu in the PromptLayer UI only shows root spans, which represent the entry function of your program. While your program is running, you might not see all spans in the UI immediately, even though child spans are being sent to the backend. The root span, along with all its child spans, will only appear in the UI once the program completes. This behavior is particularly noticeable in long-running programs or those with complex execution flows.
## Automatic LLM Request Tracing
When you initialize the PromptLayer class with `enable_tracing` set to `True`, PromptLayer will automatically trace any LLM calls made using the PromptLayer library (see the initialization sketch after this list). This lets you capture detailed information about your LLM requests, including:
- Model used
- Input prompts
- Generated responses
- Request duration
- Associated metadata
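For example, a minimal initialization sketch; the environment-variable name holding the API key is a placeholder:

```python
import os

from promptlayer import PromptLayer

# Create a client with tracing enabled. LLM calls made through this
# client are recorded as spans and sent to PromptLayer.
promptlayer_client = PromptLayer(
    api_key=os.environ.get("PROMPTLAYER_API_KEY"),  # placeholder variable name
    enable_tracing=True,
)
```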
Once PromptLayer is initialized with tracing enabled, you can use the `run()` method to execute prompts. All LLM calls made through this method are automatically traced, providing detailed insight into your prompt executions.
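As a sketch, executing a prompt through the traced client might look like this; the prompt name and input variables below are hypothetical and assume a matching template exists in your Prompt Registry:

```python
# Run a prompt template; because the client was initialized with
# enable_tracing=True, this call is traced automatically.
response = promptlayer_client.run(
    prompt_name="ai-poet",                 # hypothetical template name
    input_variables={"topic": "tracing"},  # placeholder variables
)
print(response)  # includes the LLM output and request metadata
```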
## Custom Function Tracing
In addition to automatic LLM request tracing, you can use the `traceable` decorator (for Python) or `wrapWithSpan` (for JavaScript) to explicitly record span data for additional functions. This lets you gather detailed information about those function executions as well.
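For Python, a minimal sketch, assuming the decorator is applied as `@promptlayer_client.traceable()` on the client created earlier; the function below and its logic are illustrative:

```python
# Wrap an ordinary function so its execution is recorded as a span,
# capturing its inputs, outputs, and duration on the trace.
@promptlayer_client.traceable()
def summarize(text: str) -> str:
    # Placeholder application logic: truncate the input text.
    return text[:100]

summarize("Traces make the execution flow of your application visible.")
```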