PromptLayer supports vision models such as `gpt-4-vision` and `gpt-4-vision-preview`. They are used in a similar way to normal LLMs. To use `gpt-4-vision-preview` with PromptLayer, call the model with the necessary image inputs, either through image URLs or base64 encoded images.

PromptLayer offers `prompt_blueprint` support in streaming responses, providing both raw streaming data and progressively built structured responses.
When streaming is enabled, each chunk includes:
- `raw_response`: The raw streaming response from the LLM provider
- `prompt_blueprint`: The progressively built prompt blueprint showing the current state of the response
- `request_id`: Only included in the final chunk to indicate completion

With LangChain, use the `PromptLayerCallbackHandler` for streaming. Streaming is not supported through the PromptLayer-specific LLMs (the old way to use LangChain).
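A minimal sketch of consuming such a stream, assuming each chunk arrives as a dict with the three keys above (the chunk payloads below are illustrative, not the provider's exact shapes):

```python
# Consume a streaming response, tracking the progressively built blueprint
# and picking up the request_id from the final chunk.
def consume(stream):
    final_blueprint, request_id = None, None
    for chunk in stream:
        raw = chunk["raw_response"]                   # raw provider data for this chunk
        final_blueprint = chunk["prompt_blueprint"]   # current state of the response
        if chunk.get("request_id") is not None:       # only present in the final chunk
            request_id = chunk["request_id"]
    return final_blueprint, request_id

# Simulated stream of two chunks (illustrative shapes only):
chunks = [
    {"raw_response": {"delta": "Hel"}, "prompt_blueprint": {"text": "Hel"}},
    {"raw_response": {"delta": "lo"}, "prompt_blueprint": {"text": "Hello"},
     "request_id": 123},
]
print(consume(chunks))  # ({'text': 'Hello'}, 123)
```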
Finally, if you are interacting with PromptLayer through our REST API, you will need to store the whole output and log it to PromptLayer (via the `log-request` endpoint) only after the stream is finished.
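A sketch of that accumulate-then-log pattern. The endpoint URL and payload field below are placeholders, not the exact schema; consult the PromptLayer REST API reference for the real request body:

```python
# Accumulate the streamed pieces into the whole output, then build a single
# log-request call. The request is constructed but not sent here.
import json
import urllib.request

def build_log_request(full_output: str) -> urllib.request.Request:
    payload = {"response": full_output}  # placeholder field name
    return urllib.request.Request(
        "https://api.promptlayer.com/log-request",  # assumed endpoint path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

pieces = ["Hel", "lo", "!"]          # chunks collected from the stream
full_output = "".join(pieces)        # log only once the stream is finished
req = build_log_request(full_output)
print(full_output, req.full_url)
```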
By default, prompt templates use "f-string" input variable parsing (`{var}`). If your prompt includes JSON, the literal braces will cause issues. We recommend switching to "jinja2" string parsing (`{{var}}`) to avoid such issues.
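The failure mode is easy to reproduce: f-string parsing treats every `{...}` in the prompt as a variable, so JSON braces break substitution, while `{{var}}` syntax leaves single braces alone. A small demonstration (the regex substitution is a simplified stand-in for jinja2 rendering, for illustration only):

```python
import re

# f-string parsing: the JSON braces are misread as a variable reference.
prompt = 'Respond with JSON like {"status": "ok"}. Summarize: {text}'
try:
    prompt.format(text="hello")
except KeyError as e:
    print("f-string parsing failed:", e)

# jinja2-style {{var}} syntax: literal single braces pass through untouched.
j2_prompt = 'Respond with JSON like {"status": "ok"}. Summarize: {{text}}'
rendered = re.sub(r"\{\{\s*text\s*\}\}", "hello", j2_prompt)
print(rendered)
```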
To switch input variable string parsers, navigate to the prompt template in the Prompt Registry. Then, click “Edit”. In the editor, on the top right, you will see a dropdown that allows you to switch between “f-string” and “jinja2”. For more details on using template variables effectively, see our Template Variables documentation.
PromptLayer supports Grok models through custom base URLs. To set up Grok with PromptLayer, use `https://api.x.ai/v1` as the base URL. DeepSeek works the same way: use `https://api.deepseek.com` as the base URL. You can then use models like `deepseek-chat` and `deepseek-reasoner` in the Playground and Prompt Registry.
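Both providers expose OpenAI-compatible endpoints, which is what makes the custom base URL pattern work. The sketch below shows how a chat completion request to DeepSeek would be formed against that base URL; the request is built but not sent, and the API key is a placeholder:

```python
# Build an OpenAI-compatible chat completion request against a custom base URL.
import json
import urllib.request

BASE_URL = "https://api.deepseek.com"  # use https://api.x.ai/v1 for Grok

payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_DEEPSEEK_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
)
print(req.full_url)  # https://api.deepseek.com/chat/completions
```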