Frequently Asked Questions

Don’t see your question here? Send a message in Discord or email us at hello@promptlayer.com

Does PromptLayer support multi-modal image models like gpt-4-vision?

Yes, PromptLayer supports multi-modal image models, including gpt-4-vision-preview. They are used in a similar way to normal LLMs.

To use gpt-4-vision-preview with PromptLayer, follow these steps:

  1. Ensure you have the PromptLayer and OpenAI Python libraries installed.
  2. Replace the standard OpenAI import with the PromptLayer SDK client (or use the REST API).
  3. Make your request to gpt-4-vision-preview with the necessary image inputs, either through image URLs or base64 encoded images.
  4. Check the PromptLayer dashboard to see your request logged!
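
For example, here is a minimal sketch using the PromptLayer-wrapped OpenAI client (the image URL is a placeholder):

from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_*****")
OpenAI = promptlayer_client.openai.OpenAI

client = OpenAI(api_key="sk-***")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                # Placeholder image URL; base64-encoded images also work
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)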

Multi-modal models are also supported in the Prompt Registry, Playground, and Evaluations pages.

Do you support OpenAI function calling?

Yes, we take great pride in staying up to date. PromptLayer is 1-to-1 with OpenAI’s library, so if you are using PromptLayer + OpenAI through the Python libraries, function calling is supported out of the box.

If you are using our REST API, track-request mirrors OpenAI’s request schema: add tools to kwargs and use function-type messages just as you would normal messages with gpt-4.
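
As a minimal sketch with the Python SDK (the get_weather tool definition is hypothetical, following OpenAI’s tools schema):

from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_*****")
client = promptlayer_client.openai.OpenAI(api_key="sk-***")

# Hypothetical tool definition for illustration
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)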

Does PromptLayer support streaming?

Streaming requests are supported in the PromptLayer Python SDK (with both OpenAI and Anthropic).
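
For example, a minimal streaming sketch with the PromptLayer-wrapped OpenAI client:

from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_*****")
client = promptlayer_client.openai.OpenAI(api_key="sk-***")

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries a content delta; PromptLayer logs the request
    # once the stream completes
    print(chunk.choices[0].delta.content or "", end="")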

If you are using LangChain, streaming is only supported when you use the PromptLayerCallbackHandler. Streaming is not supported through the PromptLayer-specific LLMs (the old way to use LangChain).

Finally, if you are interacting with PromptLayer through our REST API, you will need to accumulate the entire streamed output and log it to PromptLayer (via track-request) only after the stream has finished.
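
A rough sketch of that flow, assuming the requests library; the field names follow the track-request endpoint, but check the REST docs for the exact schema:

import time

import requests

request_start_time = time.time()

# ... stream the completion and accumulate the chunks ...
# Hypothetical accumulated result, shaped like OpenAI's non-streaming response
full_response = {
    "choices": [{"message": {"role": "assistant", "content": "This is a test"}}]
}

request_end_time = time.time()

# Log the finished request to PromptLayer once the stream is complete
requests.post(
    "https://api.promptlayer.com/rest/track-request",
    json={
        "function_name": "openai.chat.completions.create",
        "kwargs": {
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Say this is a test"}],
        },
        "request_response": full_response,
        "request_start_time": request_start_time,
        "request_end_time": request_end_time,
        "api_key": "pl_*****",
    },
)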

I’m having trouble with the LangChain integration.

Try updating both LangChain and PromptLayer to their most recent versions.

Can I export my data from PromptLayer?

Yes. You can export your usage data with the button shown below.

Filter your training data export by tags, a search query, or metadata.

Do you support on-premises deployment?

Yes, we do support on-premises deployment for a select few of our enterprise customers. However, we are rolling out this option slowly.

If you are interested in on-prem deployment, please contact us for more information.

Does AsyncOpenAI work with PromptLayer?

Yes, AsyncOpenAI is compatible with PromptLayer. Use them together as demonstrated in the example below.

import asyncio

from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_*****")

# Swap `from openai import AsyncOpenAI` for PromptLayer's wrapped client
AsyncOpenAI = promptlayer_client.openai.AsyncOpenAI

client = AsyncOpenAI(api_key="sk-***")


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-3.5-turbo",
    )
    print(chat_completion)


asyncio.run(main())

Is PromptLayer SOC 2 certified?

Yes, we have achieved SOC 2 Type 2 certification. Please contact us for the report.

Why doesn’t my evaluation report use the newest version of my prompt?

To ensure your evaluation report reflects the newest version of your prompt template, configure the prompt column in your evaluation pipeline to use the “latest” version of the template. Templates are fetched at runtime, so pinning a frozen version means the report will not pick up newer changes.

What model providers do you support on your evaluations page?

While you can log LLM requests from any model and our Prompt Registry is model-agnostic, the Evaluations and Playground pages support OpenAI’s GPT models, Anthropic’s Claude, Google’s Gemini, Amazon Bedrock, Mistral, and Cohere.

Do you support open source models?

PromptLayer provides out-of-the-box support for Mistral in our logs, playground, Prompt Registry, and evals. You can also connect your own models to the logs & registry.

What’s the difference between tags and metadata?

Both tags and metadata attach supplementary information to your request logs, but they serve distinct purposes. Tags are ideal for classifying requests into a limited number of predefined categories, such as “prod” or “dev”. Metadata, by contrast, is suited to capturing unique, request-specific details like user IDs or session IDs.
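
As a minimal sketch with the Python SDK, assuming the wrapped OpenAI client from the earlier examples (pl_tags attaches tags at request time; return_pl_id returns the request ID needed to attach metadata):

from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_*****")
client = promptlayer_client.openai.OpenAI(api_key="sk-***")

# Tags: a small set of predefined categories, attached at request time
response, pl_request_id = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    pl_tags=["prod"],
    return_pl_id=True,
)

# Metadata: request-specific details, attached after the request
promptlayer_client.track.metadata(
    request_id=pl_request_id,
    metadata={"user_id": "abc123", "session_id": "xyz456"},
)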

Why do I see extra input variables in my prompt template? Parsing does not seem to be working.

If you see extra input variables in the Prompt Registry or when creating an evaluation, it is likely due to string-parsing errors. By default, every prompt template uses “f-string” parsing ({var}), which treats anything in single braces as an input variable. If your prompt includes literal JSON, its braces will therefore be misread as variables. We recommend switching to “jinja2” parsing ({{var}}), which treats only double-braced names as variables and leaves single braces alone.

To switch string parsers, navigate to the prompt template in the Prompt Registry and click “Edit”. In the editor, at the top right, a dropdown lets you switch between “f-string” and “jinja2”.

How do I inject multiple messages into my prompt template?

You can use placeholder messages, which are built just for that! A placeholder acts as a slot in your prompt template that is filled in with one or more messages at runtime.

Does PromptLayer support self-hosted models or custom base URLs?

Yes, PromptLayer supports self-hosted models, models from providers like HuggingFace, and Azure OpenAI. To use a custom base URL:

  1. Go to your workspace settings
  2. Scroll to “Provider Base URLs”
  3. Add the base URL for your model provider