Get Started
Tutorial Videos
These tutorial videos will walk you through key features and help you get up and running quickly.
Prompt Management
Creating Your First Prompt
Learn how to create structured templates with variables that you can reuse across your applications.
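The template-with-variables pattern the video covers can be sketched in plain Python. This is a minimal, hypothetical illustration using the standard library's `string.Template`, not the PromptLayer SDK; the Prompt Registry stores and versions templates like this for you.

```python
from string import Template

# A reusable prompt template: $product and $question are variables
# filled in at request time, so one template serves many applications.
support_prompt = Template(
    "You are a support agent for $product.\n"
    "Answer the customer's question: $question"
)

def render(product: str, question: str) -> str:
    """Fill the template's variables to produce a concrete prompt."""
    return support_prompt.substitute(product=product, question=question)
```

Keeping the template separate from the code that fills it is what lets you edit and version prompts without redeploying the application.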
Testing in the Playground
Quickly test prompt changes, try different models, or experiment with new inputs.
Deploying Prompts to Production
Learn how to safely deploy prompt versions to production and staging environments.
Evaluation & Testing
Building Your First Evaluation
Create use-case-specific evaluations that help prevent prompt regressions.
Evaluating Prompts with LLM-as-Judge
Use language models to evaluate outputs based on criteria like accuracy, helpfulness, and relevance.
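The LLM-as-judge pattern can be sketched as follows. This is a hedged illustration, not PromptLayer's implementation: `call_model` stands in for whatever LLM client you use (it takes a prompt string and returns the model's reply).

```python
import re

def judge_output(question: str, answer: str, criteria: str, call_model) -> int:
    """Ask a judge model to score an answer from 1 to 5 against a criterion.

    `call_model` is a hypothetical callable (prompt -> reply text).
    """
    prompt = (
        f"Rate the following answer on {criteria}, from 1 (poor) to 5 (excellent).\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reply with only the number."
    )
    reply = call_model(prompt)
    # Parse the first digit 1-5 out of the judge's reply.
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"Judge returned no score: {reply!r}")
    return int(match.group())
```

Constraining the judge's output format (here, "reply with only the number") and parsing defensively are the two details that make scores usable in an automated pipeline.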
Testing With Production Data
Build comprehensive test sets from your historical data for effective evaluation.
Conversation Simulation Evals
Learn how to evaluate conversational AI systems using simulated user interactions.
Agents & Workflows
Building Multi-Step Agents
Chain multiple LLM calls together to tackle complex problems.
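Chaining means feeding one call's output into the next call's prompt. A minimal two-step sketch, with `call_model` again standing in for your LLM client:

```python
def run_chain(task: str, call_model) -> str:
    """Chain two LLM calls: first draft an outline, then expand it.

    `call_model` is a hypothetical callable (prompt -> reply text).
    """
    # Step 1: ask for an outline of the task.
    outline = call_model(f"Write a short outline for: {task}")
    # Step 2: feed that outline into the next prompt.
    answer = call_model(f"Expand this outline into a full answer:\n{outline}")
    return answer
```

Each intermediate output becomes context for the next step, which is what lets a chain tackle problems too large for a single prompt.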
Model Selection & Optimization
Choosing the Best AI Model
Learn how to test your use cases across multiple models to find the best fit.
A/B Testing for Prompts
Systematically compare different prompt versions to make data-driven decisions.
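The comparison step of an A/B test can be sketched as below. This is a naive illustration under the assumption that each prompt version has collected a list of numeric feedback scores; a real A/B test would also check statistical significance before declaring a winner.

```python
from statistics import mean

def compare_versions(scores_a: list[float], scores_b: list[float]):
    """Compare mean feedback scores for two prompt versions.

    Returns the winning label and the absolute difference in means.
    """
    diff = mean(scores_a) - mean(scores_b)
    winner = "A" if diff > 0 else "B"
    return winner, abs(diff)
```

The key discipline is assigning traffic to versions at random and scoring both with the same metric, so the difference in means reflects the prompt change rather than the audience.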
Additional Resources
For more in-depth information, check out our comprehensive documentation:
Prompt Management Guide
Evaluation Guide
Agentic Workflows Guide
Observability Guide