run()
The run() method is a core function of the PromptLayer SDK, allowing you to execute prompts and interact with various LLM providers through a unified interface.
Basic Usage
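A minimal sketch of a basic call. The prompt name "ai-poet" and its topic input variable are hypothetical placeholders; substitute a prompt from your own PromptLayer workspace:

```python
from promptlayer import PromptLayer

# The client reads PROMPTLAYER_API_KEY from the environment if no key is passed.
pl = PromptLayer()

response = pl.run(
    prompt_name="ai-poet",                 # hypothetical prompt name
    input_variables={"topic": "the sea"},  # hypothetical template variable
)

print(response["raw_response"])
```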
Disclaimer
Note: For any LLM provider you plan to use, you must set its corresponding API key as an environment variable (for example, OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, etc.). The PromptLayer client does not support passing these keys directly in code. If the relevant environment variables are not set, any requests to those LLM providers will fail.
To use Gemini models through Vertex AI with pl.run:
Python SDK: Set these environment variables:
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT="<google_cloud_project_id>"
GOOGLE_CLOUD_LOCATION="region"
GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"
JavaScript SDK: Set these environment variables:
VERTEX_AI_PROJECT_ID="<google_cloud_project_id>"
VERTEX_AI_PROJECT_LOCATION="region"
GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"
To use Claude models through Vertex AI with pl.run:
Python SDK: Set these environment variables:
ANTHROPIC_VERTEX_PROJECT_ID="<google_cloud_project_id>"
CLOUD_ML_REGION="region"
GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"
JavaScript SDK: Set these environment variables:
GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"
CLOUD_ML_REGION="region"
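Both SDKs read these from the process environment. In Python you can also set them programmatically, as in this minimal sketch (placeholder values shown for the Gemini/Vertex case; the Claude variables are set the same way), provided it runs before the provider client is first used:

```python
import os

# Placeholder values: substitute your own project ID, region, and key file.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
os.environ["GOOGLE_CLOUD_PROJECT"] = "<google_cloud_project_id>"
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/google_service_account_file.json"
```

In production you would normally export these in your shell or deployment environment instead.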
Parameters
prompt_name / promptName (str, required): The name of the prompt to run.
prompt_version / promptVersion (int, optional): Specific version of the prompt to use.
prompt_release_label / promptReleaseLabel (str, optional): Release label of the prompt (e.g., “prod”, “staging”).
input_variables / inputVariables (Dict[str, Any], optional): Variables to be inserted into the prompt template.
tags (List[str], optional): Tags to associate with this run.
metadata (Dict[str, str], optional): Additional metadata for the run.
group_id / groupId (int, optional): Group ID to associate with this run.
model_parameter_overrides / modelParameterOverrides (Union[Dict[str, Any], None], optional): Model-specific parameter overrides.
stream (bool, default=False): Whether to stream the response.
provider (str, optional): The LLM provider to use (e.g., “openai”, “anthropic”, “google”). Useful if you want to override the provider specified in the prompt template.
model (str, optional): The model to use (e.g., “gpt-4o”, “claude-3-7-sonnet-latest”, “gemini-2.5-flash”). Useful if you want to override the model specified in the prompt template.
Return Value
The method returns a dictionary (Python) or object (JavaScript) with the following keys:
request_id: Unique identifier for the request.
raw_response: The raw response from the LLM provider.
prompt_blueprint: The prompt blueprint used for the request.
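Continuing the hypothetical "ai-poet" example, the keys can be read like this:

```python
response = pl.run(prompt_name="ai-poet", input_variables={"topic": "the sea"})

print(response["request_id"])        # unique identifier for the request
print(response["raw_response"])      # raw response from the LLM provider
print(response["prompt_blueprint"])  # prompt blueprint used for the request
```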
Advanced Usage
Streaming
To stream the response, set stream=True. Each streamed chunk includes the prompt_blueprint, allowing you to track how the response is constructed in real time. The request_id is only included in the final chunk.
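A minimal streaming sketch, assuming each chunk exposes the same keys as the non-streaming return value (prompt name is hypothetical):

```python
stream = pl.run(
    prompt_name="ai-poet",
    input_variables={"topic": "the sea"},
    stream=True,
)

final_chunk = None
for chunk in stream:
    # Each chunk carries the progressively built prompt_blueprint.
    print(chunk["prompt_blueprint"])
    final_chunk = chunk

# request_id is only present on the final chunk.
print(final_chunk["request_id"])
```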
Using Different Versions or Release Labels
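Both parameters are shown in this sketch (the prompt name and version number are hypothetical):

```python
# Pin an exact prompt version.
response = pl.run(prompt_name="ai-poet", prompt_version=3)

# Or resolve the version through a release label such as "prod".
response = pl.run(prompt_name="ai-poet", prompt_release_label="prod")
```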
Adding Tags and Metadata
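A sketch with illustrative tag and metadata values; note that metadata values must be strings:

```python
response = pl.run(
    prompt_name="ai-poet",
    input_variables={"topic": "the sea"},
    tags=["experiment-a"],          # hypothetical tag
    metadata={"user_id": "12345"},  # hypothetical metadata, string-to-string
)
```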
Overriding Model Parameters
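A sketch using model_parameter_overrides; the parameter names shown (temperature, max_tokens) are illustrative and depend on the provider:

```python
response = pl.run(
    prompt_name="ai-poet",
    model_parameter_overrides={"temperature": 0.2, "max_tokens": 256},
)
```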
You can also override provider and model at runtime to run the request against a different LLM provider or model than the one specified in the prompt template. PromptLayer will automatically return the correct llm_kwargs for the specified provider and model, with default values for the parameters corresponding to that provider and model.
Make sure to set both model and provider so that the request runs against the correct LLM provider with the correct parameters.
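For example, to run the same prompt against Anthropic (the provider and model strings follow the examples in the parameter list above):

```python
response = pl.run(
    prompt_name="ai-poet",
    provider="anthropic",
    model="claude-3-7-sonnet-latest",
)
```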