The run() method is the core method of the PromptLayer SDK: it executes a prompt against any supported LLM provider through a unified interface.

Basic Usage

Disclaimer

Note: For any LLM provider you plan to use, you must set its corresponding API key as an environment variable (for example, OPENAI_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY).
The PromptLayer client does not support passing these keys directly in code. If the relevant environment variable is not set, requests to that LLM provider will fail.
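
Keys can be exported in your shell before starting the application, or set from Python before any requests are made. A minimal sketch (the key values are placeholders):

import os

# Placeholder values; substitute your real provider keys.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."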

from promptlayer import PromptLayer

pl = PromptLayer(api_key="your_api_key")

response = pl.run(
    prompt_name="your-prompt-name",
    input_variables={"variable_name": "value"}
)

# The blueprint's last message is the assistant's reply; its last content
# block holds the generated text.
print(response["prompt_blueprint"]["prompt_template"]["messages"][-1]["content"][-1]["text"])

Parameters

  • prompt_name / promptName (str, required): The name of the prompt to run.
  • prompt_version / promptVersion (int, optional): Specific version of the prompt to use.
  • prompt_release_label / promptReleaseLabel (str, optional): Release label of the prompt (e.g., “prod”, “staging”).
  • input_variables / inputVariables (Dict[str, Any], optional): Variables to be inserted into the prompt template.
  • tags (List[str], optional): Tags to associate with this run.
  • metadata (Dict[str, str], optional): Additional metadata for the run.
  • group_id / groupId (int, optional): Group ID to associate with this run.
  • model_parameter_overrides / modelParameterOverrides (Union[Dict[str, Any], None], optional): Model-specific parameter overrides.
  • provider (str, optional): LLM provider to run against (e.g., "openai", "anthropic"); set together with model (see Overriding Model Parameters below).
  • model (str, optional): Model to use with the chosen provider (e.g., "gpt-4").
  • stream (bool, default=False): Whether to stream the response.
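
Several of these parameters can be combined in a single call. A minimal sketch using placeholder values:

response = pl.run(
    prompt_name="your-prompt-name",
    prompt_release_label="prod",
    input_variables={"variable_name": "value"},
    tags=["docs-example"],
    metadata={"user_id": "12345"}
)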

Return Value

The method returns a dictionary (Python) or object (JavaScript) with the following keys:

  • request_id: Unique identifier for the request.
  • raw_response: The raw response from the LLM provider.
  • prompt_blueprint: The prompt blueprint used for the request.
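
For instance, after a run you can log the request ID and inspect the raw provider payload. A sketch; the exact shape of raw_response depends on the LLM provider:

response = pl.run(
    prompt_name="your-prompt-name",
    input_variables={"variable_name": "value"}
)

# Unique identifier for correlating this run with the PromptLayer dashboard.
print(response["request_id"])

# The provider's native response payload; its structure varies by provider.
print(response["raw_response"])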

Advanced Usage

Streaming

To stream the response:

for chunk in pl.run(prompt_name="your-prompt", stream=True):
    print(chunk.content)
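
To assemble the streamed pieces into the full response text, a sketch that assumes (as in the example above) each chunk exposes its incremental text as .content:

parts = []
for chunk in pl.run(prompt_name="your-prompt", stream=True):
    parts.append(chunk.content)  # collect incremental text as it arrives

print("".join(parts))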

Using Different Versions or Release Labels

Pin a specific version:

response = pl.run(
    prompt_name="your-prompt",
    prompt_version=2
)

or target a release label instead:

response = pl.run(
    prompt_name="your-prompt",
    prompt_release_label="staging"
)

Adding Tags and Metadata

response = pl.run(
    prompt_name="your-prompt",
    tags=["test", "experiment"],
    metadata={"user_id": "12345"}
)
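
Because metadata is typed as Dict[str, str], serialize non-string values (such as numeric IDs) to strings before passing them.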

Overriding Model Parameters

You can also override the provider and model at runtime, which is useful when you want to run the prompt against a different LLM provider or model than the one specified in the prompt template. PromptLayer automatically returns the correct llm_kwargs for the specified provider and model, with default values for that provider's and model's parameters.

response = pl.run(
    prompt_name="your-prompt",
    provider="openai",  # or "anthropic", "google", etc.
    model="gpt-4",  # or "claude-2", "gemini-1.5-pro", etc.
)

Make sure to set both model and provider so the request runs against the correct LLM provider with the correct parameters.
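
The model_parameter_overrides parameter listed above can be combined with these overrides to tune individual model settings. A minimal sketch; the override keys shown are assumptions, and the supported keys vary by provider and model:

response = pl.run(
    prompt_name="your-prompt",
    provider="openai",
    model="gpt-4",
    # Assumed override keys; supported parameters vary by provider and model.
    model_parameter_overrides={"temperature": 0.2, "max_tokens": 512}
)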