These tutorial videos will walk you through key features and help you get up and running quickly.

Prompt Management

Creating Your First Prompt

Learn how to create structured templates with variables that you can reuse across your applications.
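The idea behind a variable-based prompt template can be sketched in plain Python. The template text, variable names, and `render` helper below are illustrative, not the platform's API:

```python
# A minimal prompt template with named variables, using Python's built-in
# string formatting. Template and variable names are illustrative only.
SUPPORT_TEMPLATE = (
    "You are a support assistant for {product}.\n"
    "Answer the customer's question concisely.\n\n"
    "Question: {question}"
)

def render(template: str, **variables: str) -> str:
    """Fill a template's {placeholders} with concrete values."""
    return template.format(**variables)

prompt = render(
    SUPPORT_TEMPLATE,
    product="Acme CRM",
    question="How do I export my contacts?",
)
print(prompt)
```

Keeping the template separate from the values makes the same prompt reusable across applications: only the variables change per call.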

Testing in the Playground

Quickly test prompt changes, try different models, or experiment with new inputs.

Deploying Prompts to Production

Learn how to safely deploy prompt versions to production and staging environments.

Evaluation & Testing

Building Your First Evaluation

Create use-case-specific evaluations that help prevent prompt regressions.
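The core loop of a regression-style evaluation can be sketched generically. Here `call_model` is a placeholder for whatever model client you use, and the test-case shape is an assumption for illustration:

```python
# Sketch of a tiny use-case-specific evaluation: run each test case
# through the model and check the output for an expected substring.
# `call_model` is a stand-in for your model client (prompt -> text).
def run_eval(call_model, cases):
    """Return the fraction of cases whose output contains the expected text."""
    passed = 0
    for case in cases:
        output = call_model(case["input"])
        if case["expected"] in output:
            passed += 1
    return passed / len(cases)

cases = [
    {"input": "What is 2 + 2?", "expected": "4"},
    {"input": "What is 3 + 3?", "expected": "6"},
]
```

Running this eval before and after every prompt change turns "did I break anything?" into a single pass-rate number you can track.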

Evaluating Prompts with LLM-as-Judge

Use language models to evaluate outputs based on criteria like accuracy, helpfulness, and relevance.
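A minimal LLM-as-judge sketch, with `call_model` again standing in for your model client and the judge prompt being an illustrative example rather than a prescribed rubric:

```python
# Sketch of LLM-as-judge: a second model call scores an answer against
# criteria. `call_model` is a placeholder (prompt -> text).
JUDGE_TEMPLATE = (
    "Rate the following answer for accuracy, helpfulness, and relevance "
    "on a scale of 1-5. Reply with only the number.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def judge(call_model, question: str, answer: str) -> int:
    reply = call_model(JUDGE_TEMPLATE.format(question=question, answer=answer))
    return int(reply.strip())

# Example with a stubbed model that always replies "4":
score = judge(lambda prompt: "4", "What is 2 + 2?", "4")
print(score)  # 4
```

Constraining the judge's reply format ("only the number") is what keeps the score machine-parseable.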

Testing with Production Data

Build comprehensive test sets from your historical data for effective evaluation.

Agents & Workflows

Building Multi-Step Agents

Chain multiple LLM calls together to tackle complex problems.
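Chaining can be sketched as each step's output feeding the next step's prompt. The three-step summarize/critique/rewrite flow below is an illustrative example, and `call_model` is a placeholder for your model client:

```python
# Sketch of a multi-step chain: summarize, critique, then revise.
# `call_model` is a stand-in for your model client (prompt -> text).
def run_chain(call_model, document: str) -> str:
    summary = call_model(f"Summarize the following document:\n{document}")
    critique = call_model(f"List weaknesses in this summary:\n{summary}")
    final = call_model(
        "Rewrite the summary to address these weaknesses.\n"
        f"Summary: {summary}\nWeaknesses: {critique}"
    )
    return final
```

Because each intermediate result is a plain string, you can log, inspect, or evaluate every step independently, which is what makes multi-step agents debuggable.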

Model Selection & Optimization

Choosing the Best AI Model

Learn how to test your use cases across multiple models to find the best fit.

A/B Testing for Prompts

Systematically compare different prompt versions to make data-driven decisions.
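The mechanics of an A/B test reduce to two pieces: stable variant assignment and a success-rate comparison. A minimal sketch, with all names illustrative (the hash-based assignment keeps a user on the same variant across sessions):

```python
# Sketch of A/B testing two prompt versions: deterministically assign
# users to a variant, then compare observed success rates.
import hashlib

def ab_assign(user_id: str, variants=("A", "B")) -> str:
    """Stable variant assignment: same user always gets the same variant."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return variants[digest[0] % len(variants)]

def success_rate(outcomes) -> float:
    """Fraction of successful outcomes (1 = success, 0 = failure)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Compare variants once enough outcomes are collected:
rate_a = success_rate([1, 1, 0, 1])  # 0.75
rate_b = success_rate([1, 0, 0, 1])  # 0.5
```

In practice you would also check that the difference is statistically significant before declaring a winner, rather than comparing raw rates alone.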

Additional Resources

For more in-depth information, check out our comprehensive documentation: