Does PromptLayer support multi-modal image models like gpt-4-vision?
Yes, PromptLayer supports multi-modal image models, including gpt-4-vision-preview. They are used in a similar way to normal LLMs.
To use gpt-4-vision-preview with PromptLayer, follow these steps:
- Ensure you have the PromptLayer and OpenAI Python libraries installed.
- Use the `run()` method to execute prompts, or use `log_request` to log requests made with your own client.
- Make your request to `gpt-4-vision-preview` with the necessary image inputs, either through image URLs or base64-encoded images.
- Check the PromptLayer dashboard to see your request logged!
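As a rough sketch, the image-input part of such a request can be built as plain OpenAI-style message data. The helper name below is illustrative, not part of either SDK:

```python
def build_vision_messages(prompt_text, image_url=None, image_b64=None):
    """Build an OpenAI-style chat message list mixing text and image inputs."""
    content = [{"type": "text", "text": prompt_text}]
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    if image_b64:
        # Base64-encoded images travel as data URLs.
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
        })
    return [{"role": "user", "content": content}]

messages = build_vision_messages(
    "What is in this image?",
    image_url="https://example.com/photo.jpg",
)
```

A message list like this can then be passed to `run()`, or logged with `log_request` after calling your own client.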

Do you support OpenAI function calling?
Yes, we take great pride in staying up to date. PromptLayer supports function calling through the `run()` method and via custom logging. You can also configure tool calling directly in the Prompt Registry.
Does PromptLayer support streaming?
Yes, streaming requests are supported in the PromptLayer Python and JS SDKs. PromptLayer now includes `prompt_blueprint` support in streaming responses, providing both raw streaming data and progressively built structured responses.
When streaming is enabled, each chunk includes:
- `raw_response`: The raw streaming response from the LLM provider
- `prompt_blueprint`: The progressively built prompt blueprint showing the current state of the response
- `request_id`: Only included in the final chunk to indicate completion
The `raw_response` structure is provider-specific and may change as LLM providers update their APIs. For stable, provider-agnostic access, use `prompt_blueprint` instead. Note that streaming requests are logged (via `log-request`) only after the stream is finished.
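The chunk shape described above can be sketched with a small accumulator. The chunk dicts here are illustrative stand-ins for the SDK's actual stream objects:

```python
def consume_stream(chunks):
    """Fold streaming chunks into the final blueprint and request_id."""
    blueprint = None
    request_id = None
    for chunk in chunks:
        # Each chunk carries the progressively built prompt_blueprint;
        # the raw provider payload lives under raw_response.
        blueprint = chunk.get("prompt_blueprint", blueprint)
        # request_id appears only on the final chunk.
        if chunk.get("request_id") is not None:
            request_id = chunk["request_id"]
    return blueprint, request_id

chunks = [
    {"raw_response": {"delta": "Hel"}, "prompt_blueprint": {"text": "Hel"}},
    {"raw_response": {"delta": "lo"}, "prompt_blueprint": {"text": "Hello"}},
    {"raw_response": {}, "prompt_blueprint": {"text": "Hello"}, "request_id": 123},
]
blueprint, request_id = consume_stream(chunks)
```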
Can I export my data from PromptLayer?
Yes. You can export your usage data with the button shown below. Filter your training data export by tags, a search query, or metadata.

Do you support on-premises deployment?
Yes, we do support on-premises deployment for a select few of our enterprise customers. However, we are rolling out this option slowly. If you are interested in on-prem, please contact us for more information.

Does async work with PromptLayer?
Yes, PromptLayer supports asynchronous operations through `AsyncPromptLayer`. You can use the async `run()` method or `log_request` to log requests made with async LLM clients.
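A minimal asyncio sketch of that pattern, using a stand-in coroutine in place of a real `AsyncPromptLayer` client (the client and its `run()` signature are assumptions here, not the SDK's exact API):

```python
import asyncio

async def fake_run(prompt_name, input_variables):
    """Stand-in for an async client's run() call."""
    await asyncio.sleep(0)  # simulate awaiting the API
    return {"prompt": prompt_name, "vars": input_variables}

async def main():
    # With the real SDK this would be an awaited call on an AsyncPromptLayer client.
    return await fake_run("greet-user", {"name": "Ada"})

result = asyncio.run(main())
```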
Is PromptLayer SOC 2 certified?
Yes, we have achieved SOC 2 Type 2 certification. Please contact us for the report.

Why doesn’t my evaluation report use the newest version of my prompt?
To ensure your evaluation report reflects the newest version of your prompt template, you must configure your evaluation pipeline to use the “latest” version of the prompt template in its column step. The template is fetched at runtime, and specifying a frozen version will result in the evaluation report not reflecting your newest prompt template.
What model providers do you support on your evaluations page?
While you can log LLM requests from any model and our Prompt Registry is agnostic, our evaluations & playground requests support OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, Bedrock, Mistral, and Cohere.

Do you support open source models?
PromptLayer provides out-of-the-box support for Mistral in our logs, playground, Prompt Registry, and evals. You can also connect your own models to the logs & registry.

What’s the difference between tags and metadata?
Both tags and metadata enable the addition of supplementary information to your request logs, yet they serve distinct purposes. Tags are ideal for classifying requests into a limited number of predefined categories, such as “prod” or “dev”. Conversely, metadata is tailored for capturing unique, request-specific details like user IDs or session IDs.

Why do I see extra input variables in my prompt template? Parsing does not seem to be working.
If you see extra input variables in the Prompt Registry or when creating an evaluation, it is likely due to string parsing errors. By default every prompt template uses “f-string” string parsing (`{var}`). If your prompt includes JSON, this will cause issues. We recommend switching to “jinja2” string parsing (`{{var}}`) to avoid such issues.
To switch input variable string parsers, navigate to the prompt template in the Prompt Registry. Then, click “Edit”. In the editor, on the top right, you will see a dropdown that allows you to switch between “f-string” and “jinja2”. For more details on using template variables effectively, see our Template Variables documentation.
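Python's built-in `str.format` is only a stand-in for the registry's f-string parser, but it shows the same failure mode: JSON braces get misread as input variables.

```python
# An f-string-style parser treats JSON braces as replacement fields.
template = 'Reply as JSON: {"status": "ok"} and greet {name}.'
try:
    rendered = template.format(name="Ada")
except KeyError as err:
    # The JSON key is misread as a missing variable named '"status"'.
    rendered = None
    missing = str(err)
```

Under jinja2-style parsing, only `{{name}}` would be treated as a variable and the single-braced JSON would be left alone.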
How do I inject multiple messages into my prompt template?
You can use placeholders, built just for that!

Does PromptLayer support self-hosted models or custom base URLs?
Yes, PromptLayer supports using your own self-hosted models, those from providers like HuggingFace, or Azure OpenAI. To use a custom base URL:
- Go to your workspace settings
- Scroll to “Provider Base URLs”
- Add the base URL for your model provider

Can I cancel my PromptLayer subscription?
Yes, you can cancel your subscription at any time. Your subscription will remain active until the end of the billing cycle. To cancel your subscription, go to your settings and click on billing portal.

Does PromptLayer support Grok from xAI?
Yes, PromptLayer supports Grok models through custom providers. For detailed setup instructions and usage guidelines, see our xAI (Grok) integration guide.

Does PromptLayer support Deepseek models?
Yes, PromptLayer supports Deepseek models through custom base URLs. Configure it in workspace settings under “Provider Base URLs” using OpenAI as the provider and `https://api.deepseek.com` as the base URL. You can then use models like `deepseek-chat` and `deepseek-reasoner` in the Playground and Prompt Registry.
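Because the custom base URL is used with OpenAI as the provider, the underlying request follows the OpenAI-compatible chat-completions shape. Sketched here as plain data (the exact path is an assumption; no network call is made):

```python
# Shape of the request that a custom base URL routes to a Deepseek model.
request = {
    "url": "https://api.deepseek.com/chat/completions",  # base URL + chat path
    "body": {
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello"}],
    },
}
```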

