The run() method is a core function of the PromptLayer SDK, allowing you to execute prompts and interact with various LLM providers using a unified interface.

Basic Usage

Disclaimer

Note: For any LLM provider you plan to use, you must set its corresponding API key as an environment variable (for example, OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.).
The PromptLayer client does not support passing these keys directly in code. If the relevant environment variables are not set, any requests to those LLM providers will fail.
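
For example, assuming OpenAI is the provider behind your prompt, you can export OPENAI_API_KEY in your shell or set it from Python before constructing the client (the key value below is a placeholder):

import os

# Provider keys are read from the environment, not passed to the PromptLayer client
os.environ["OPENAI_API_KEY"] = "sk-your-openai-key"  # placeholder value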

from promptlayer import PromptLayer

# Initialize the client with your PromptLayer API key
pl = PromptLayer(api_key="your_api_key")

# Run a prompt template by name, filling in its input variables
response = pl.run(
    prompt_name="your-prompt-name",
    input_variables={"variable_name": "value"}
)

# Print the text of the final message in the returned prompt blueprint
print(response["prompt_blueprint"]["prompt_template"]["messages"][-1]["content"][0]["text"])

Parameters

  • prompt_name / promptName (str, required): The name of the prompt to run.
  • prompt_version / promptVersion (int, optional): Specific version of the prompt to use.
  • prompt_release_label / promptReleaseLabel (str, optional): Release label of the prompt (e.g., “prod”, “staging”).
  • input_variables / inputVariables (Dict[str, Any], optional): Variables to be inserted into the prompt template.
  • tags (List[str], optional): Tags to associate with this run.
  • metadata (Dict[str, str], optional): Additional metadata for the run.
  • group_id / groupId (int, optional): Group ID to associate with this run.
  • model_parameter_overrides / modelParameterOverrides (Dict[str, Any], optional): Model-specific parameter overrides for this run.
  • stream (bool, default=False): Whether to stream the response.
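
Putting several of these parameters together, a minimal sketch might look like the following (it reuses the pl client from Basic Usage; the prompt name, variables, and values are placeholders):

response = pl.run(
    prompt_name="welcome-email",            # placeholder prompt name
    prompt_release_label="prod",            # use the version labeled "prod"
    input_variables={"first_name": "Ada"},  # placeholder template variables
    tags=["onboarding"],
    metadata={"user_id": "12345"}
)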

Return Value

The method returns a dictionary (Python) or object (JavaScript) with the following keys:

  • request_id: Unique identifier for the request.
  • raw_response: The raw response from the LLM provider.
  • prompt_blueprint: The prompt blueprint used for the request.
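
For example, after a non-streaming call in Python you can inspect each key on the returned dictionary (a sketch reusing the response from Basic Usage):

# Unique ID assigned to this request by PromptLayer
print(response["request_id"])

# The unmodified response object from the LLM provider
print(response["raw_response"])

# The rendered prompt blueprint, including the final list of messages
print(response["prompt_blueprint"]["prompt_template"]["messages"])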

Advanced Usage

Streaming

To stream the response, pass stream=True; run() then returns an iterable of chunks instead of a single response:

for chunk in pl.run(prompt_name="your-prompt", stream=True):
    # Each chunk is yielded as it arrives from the provider
    print(chunk)

Using Different Versions or Release Labels

# Pin the run to a specific prompt version
response = pl.run(
    prompt_name="your-prompt",
    prompt_version=2
)

# Or resolve the prompt by a release label instead
response = pl.run(
    prompt_name="your-prompt",
    prompt_release_label="staging"
)

Adding Tags and Metadata

response = pl.run(
    prompt_name="your-prompt",
    tags=["test", "experiment"],   # tags for filtering runs in PromptLayer
    metadata={"user_id": "12345"}  # string key/value pairs attached to the run
)
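
Overriding Model Parameters

model_parameter_overrides lets you adjust model settings for a single run. A minimal sketch (the override keys shown, such as temperature, are assumptions and depend on the model configured in your prompt):

response = pl.run(
    prompt_name="your-prompt",
    # Supported keys depend on the provider/model behind the prompt; temperature is assumed here
    model_parameter_overrides={"temperature": 0.2}
)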