Provider Details
OpenAI
- Both the Chat Completions and the Responses APIs are supported.
- Function/Tool Calling (including built-in Responses API tools: Web Search, File Search) — see Tool Calling.
- JSON mode / structured outputs.
- Vision models (e.g., gpt-4-vision) — see FAQ: multimodal.
- Streaming via SDK (Python/JS); a minimal sketch follows this list.
- Tip: You can also connect via OpenRouter as a custom provider to access many OpenAI-compatible models.
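For example, a minimal streaming call with the official Python SDK — a sketch only, assuming `OPENAI_API_KEY` is set and using an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream a Chat Completions response token-by-token
stream = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```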
OpenAI Azure
- Same usage as OpenAI but configured with Azure deployment settings.
- Ensure deployment name, API version, and resource URL are correctly configured.
- Most OpenAI features apply; some parameters differ depending on your Azure configuration. See the configuration sketch below.
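A minimal sketch with the Python SDK’s `AzureOpenAI` client; the endpoint, API version, and deployment name below are placeholders for your own Azure settings:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # your resource URL (placeholder)
    api_version="2024-06-01",                               # your API version (placeholder)
    api_key="...",                                          # your Azure OpenAI key
)

# Note: `model` is the Azure *deployment name*, not the base model name
resp = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello from Azure"}],
)
print(resp.choices[0].message.content)
```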
Anthropic
- Tool Use (Anthropic Messages API); a tool-use sketch follows this list.
- Claude 3 family supports image inputs.
- Streaming via SDK.
- If you previously used “Anthropic Bedrock”, migrate to the native Anthropic provider or to Claude via Amazon Bedrock.
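A minimal tool-use sketch with the official Python SDK; the `get_weather` tool and model name are illustrative, and `ANTHROPIC_API_KEY` is assumed to be set:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=1024,
    tools=[{
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)

# If the model decides to call the tool, the response contains a tool_use block
for block in message.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```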
Google (Gemini)
- Multimodal support for images.
- Use either the direct Google provider or Vertex AI based on your infrastructure preference.
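A minimal image-input sketch with the `google-generativeai` Python SDK; the API key, model name, and image path are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="...")  # your Google AI Studio key (placeholder)

model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name
with open("photo.jpg", "rb") as f:  # placeholder image path
    image_bytes = f.read()

# Pass text and image parts together for a multimodal request
resp = model.generate_content([
    "Describe this image in one sentence.",
    {"mime_type": "image/jpeg", "data": image_bytes},
])
print(resp.text)
```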
Vertex AI
- Gemini and Anthropic Claude served through Google Cloud’s Vertex AI.
- Configure project, region, and credentials as required by Vertex AI.
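A minimal sketch with the `google-cloud-aiplatform` Python SDK; the project and region are placeholders, and credentials are assumed to come from Application Default Credentials:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Project and region are placeholders; auth uses Application Default Credentials
vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")  # illustrative model name
resp = model.generate_content("Summarize Vertex AI in one sentence.")
print(resp.text)
```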
Amazon Bedrock
- Access Anthropic Claude and Meta Llama (and other families) via AWS Bedrock.
- Capabilities vary per model—some models may not support tools or certain parameters.
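A minimal sketch with boto3’s Converse API; the region and model ID are illustrative, and AWS credentials are assumed to be configured:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

resp = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": "Hello from Bedrock"}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```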
Mistral
- Streaming supported; tool/function-call support depends on the specific model.
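A minimal streaming sketch with the `mistralai` Python SDK (v1-style client); the model name is illustrative, and the chunk shape may differ on older SDK versions:

```python
from mistralai import Mistral

client = Mistral(api_key="...")  # your Mistral API key (placeholder)

# Stream chunks as they arrive (the v1 SDK wraps each chunk in an event with .data)
for event in client.chat.stream(
    model="mistral-large-latest",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello."}],
):
    delta = event.data.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```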
Cohere
- Command/Command-R family supported.
- Feature availability (streaming, tool use) depends on the chosen model.
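A minimal sketch with the `cohere` Python SDK’s classic Chat interface; the model name is illustrative (newer SDK versions also offer a messages-style `ClientV2`):

```python
import cohere

co = cohere.Client(api_key="...")  # your Cohere key (placeholder)

resp = co.chat(
    model="command-r",  # illustrative model name
    message="Say hello.",
)
print(resp.text)
```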
Hugging Face
- Evaluation support varies by model/task—text-generation models generally work best.
- For endpoints with OpenAI-compatible shims, you can configure via a custom base URL.
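For example, a Text Generation Inference (TGI) endpoint exposes an OpenAI-compatible Messages API, so the standard OpenAI client can point at it; the URL and token below are placeholders:

```python
from openai import OpenAI

# Point the OpenAI client at an OpenAI-compatible gateway, e.g. a TGI endpoint
client = OpenAI(
    base_url="https://my-endpoint.example.com/v1",  # placeholder gateway URL
    api_key="hf_...",                               # your endpoint token (placeholder)
)

resp = client.chat.completions.create(
    model="tgi",  # TGI accepts a placeholder model name on its Messages API
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```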
Anthropic Bedrock (Deprecated)
- This legacy integration is deprecated.
- Use the native Anthropic provider, or access Claude via Amazon Bedrock with the Bedrock provider.
OpenAI-compatible Base URL Providers
Many third-party providers expose an OpenAI-compatible API. You can connect any such provider by configuring a Provider Base URL that uses the OpenAI client. See Custom Providers for more details.
How to set up:
- Go to your workspace settings → “Provider Base URLs”.
- Click “Create New” and configure:
- LLM Provider: OpenAI
- Base URL: the provider’s endpoint (examples below)
- API Key: the provider’s key
- Optional: Create Custom Models for a cleaner model dropdown in the Playground/Prompt Registry.
- OpenRouter — Base URL: https://openrouter.ai/api/v1 (see the example at the end of this section)
- Exa — Base URL: https://api.exa.ai (see integration guide)
- xAI (Grok) — Base URL: https://api.x.ai/v1 (see integration guide)
- DeepSeek — Base URL: https://api.deepseek.com (see FAQ)
- Hugging Face gateways that offer OpenAI-compatible endpoints — use the gateway URL provided by your deployment
- Works in Logs, Prompt Registry, and Playground.
- Evaluations: supported when the provider’s OpenAI-compatible layer accepts the parameters PromptLayer sends; remove unsupported parameters if needed (e.g., some providers do not support “seed”).
- Tool/Function Calling and streaming availability depend on the provider/model.
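For example, pointing the standard OpenAI client at OpenRouter mirrors what the Provider Base URL setting does; the model slug and key are illustrative, and the same pattern applies to any provider above:

```python
from openai import OpenAI

# Same pattern for any OpenAI-compatible provider: swap base_url and api_key
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key (placeholder)
)

resp = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # illustrative OpenRouter model slug
    messages=[{"role": "user", "content": "Hello via OpenRouter"}],
)
print(resp.choices[0].message.content)
```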

