Learn how to use logging to monitor performance and optimize your prompts.
This module requires an existing prompt in your PromptLayer account. Please
follow the Getting Started guide to
create one if needed.
When building a prompt, observability becomes critical. For example, the ai-poet prompt generates a creative haiku based on a given topic, and enabling logging helps you monitor important performance details. Logging can reveal:
Execution Issues: Did the prompt return the expected output?
Execution Time: How long the prompt takes to execute.
Token Usage: The number of tokens used during execution, which directly impacts cost.
Cost Metrics: Whether the prompt runs efficiently within your budget.
By reviewing these logs, you can determine whether your ai-poet prompt is performing as expected and adjust it if necessary, ensuring that your creative content is generated both efficiently and effectively.
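To make the cost metric above concrete, here is a minimal sketch of how cost can be estimated from the token counts a log entry records. The per-token prices are placeholder values for illustration, not real provider rates.

```python
# Sketch: estimating request cost from logged token usage.
# The per-1K-token prices below are hypothetical placeholders.
PRICE_PER_1K_INPUT = 0.0005   # assumed USD per 1K prompt tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed USD per 1K completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated cost in USD for one logged request."""
    return ((prompt_tokens / 1000) * PRICE_PER_1K_INPUT
            + (completion_tokens / 1000) * PRICE_PER_1K_OUTPUT)

# e.g. a haiku request that used 42 prompt tokens and 17 completion tokens
cost = estimate_cost(42, 17)
print(f"estimated cost: ${cost:.6f}")
```

Because token usage directly drives cost, tracking it per request in your logs is usually the quickest way to spot a prompt that has become unexpectedly expensive.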
Review your logs to troubleshoot issues and gather performance metrics.
Execute your prompt (via SDK or code).
Open the left sidebar and click the Requests tab to view log entries.
Click a log entry to see its execution time, cost, token usage, and more.
Use these insights to refine and optimize your prompt.
Use filters to search for specific requests, such as filtering by tags. In
this guide, we added the tag onboarding_guide to the request.
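Conceptually, the tag filter in the Requests tab keeps only the entries that carry a given tag. The sketch below mimics that behavior over a local list of dicts; the log-entry structure shown is a simplified assumption for illustration, not the exact shape PromptLayer stores.

```python
# Sketch: filtering logged requests by tag, mirroring the Requests-tab
# tag filter. The log-entry fields here are assumed for illustration.
logs = [
    {"id": 1, "prompt": "ai-poet", "tags": ["onboarding_guide"], "tokens": 59},
    {"id": 2, "prompt": "ai-poet", "tags": ["production"], "tokens": 61},
    {"id": 3, "prompt": "summarizer", "tags": ["onboarding_guide"], "tokens": 120},
]

def filter_by_tag(entries, tag):
    """Keep only the log entries that carry the given tag."""
    return [e for e in entries if tag in e.get("tags", [])]

onboarding = filter_by_tag(logs, "onboarding_guide")
print([e["id"] for e in onboarding])  # entries 1 and 3 carry the tag
```

Tagging requests at execution time (as with onboarding_guide in this guide) is what makes this kind of filtering useful later, so it pays to tag consistently.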
You can also open these logs in the Playground, share them with your team, and add them to a dataset to use them for refining and testing.
Additional Resources: