

JavaScript SDK

Official JavaScript/TypeScript SDK for interacting with the PromptLayer API from server-side runtimes.

Installation

npm install promptlayer
The easiest way to use PromptLayer is with the run() method. It fetches a prompt template from the Prompt Registry, executes it against your configured LLM provider, and logs the result — all in one call.
import { PromptLayer } from "promptlayer";

const promptLayerClient = new PromptLayer({
  apiKey: process.env.PROMPTLAYER_API_KEY,
});

const response = await promptLayerClient.run({
  promptName: "my-prompt",
  inputVariables: { topic: "poetry" },
  tags: ["getting-started"],
  metadata: { user_id: "123" }
});

console.log(response.prompt_blueprint.prompt_template.messages.slice(-1)[0].content);
Your LLM API keys (OpenAI, Anthropic, etc.) are never sent to our servers. All LLM requests are made locally from your machine; PromptLayer just logs the request.
The run() method works with any provider configured in your prompt template — OpenAI, Anthropic, Google, and more. See the Run documentation for full details. After making your first few requests, you should be able to see them in the PromptLayer dashboard!

Basic Usage

For any LLM provider you plan to use, you must set its corresponding API key as an environment variable (for example, OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, etc.). The PromptLayer client does not support passing these keys directly in code. If the relevant environment variables are not set, requests to those LLM providers will fail.
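A small pre-flight check can make a missing key obvious at startup instead of at request time. The helper below is hypothetical application code, not part of the SDK:

```javascript
// Hypothetical pre-flight check (not an SDK feature): warn early if the
// provider keys your prompts need are missing from the environment.
function missingProviderKeys(env, required) {
  // Return the names of any required variables that are unset or empty.
  return required.filter((name) => !env[name]);
}

const required = ["OPENAI_API_KEY"]; // adjust to the providers you actually use
const missing = missingProviderKeys(process.env, required);
if (missing.length > 0) {
  console.warn(`Missing LLM provider keys: ${missing.join(", ")}`);
}
```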

Using Gemini models through Vertex AI

JavaScript SDK: Set these environment variables:
  • VERTEX_AI_PROJECT_ID="<google_cloud_project_id>"
  • VERTEX_AI_PROJECT_LOCATION="region"
  • GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"

Using Claude models through Vertex AI

JavaScript SDK: Set these environment variables:
  • GOOGLE_APPLICATION_CREDENTIALS="path/to/google_service_account_file.json"
  • CLOUD_ML_REGION="region"
JavaScript
import { PromptLayer } from "promptlayer";

const pl = new PromptLayer({ apiKey: "your_api_key" });

const response = await pl.run({
  promptName: "your-prompt-name",
  inputVariables: { variableName: "value" }
});

console.log(response.prompt_blueprint.prompt_template.messages.slice(-1)[0].content.slice(-1)[0].text);
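Note that the two examples above access the last message's content differently: as a plain string in one case and as an array of content blocks in the other. A small helper (not an SDK function, just a sketch) can normalize both shapes before logging:

```javascript
// Helper sketch: message content may be a plain string or an array of
// content blocks depending on the template, so normalize it before use.
function lastMessageText(response) {
  const messages = response.prompt_blueprint.prompt_template.messages;
  const content = messages[messages.length - 1].content;
  if (typeof content === "string") return content;
  // Array-of-blocks case: concatenate the text blocks.
  return content.map((block) => block.text ?? "").join("");
}

// Stub standing in for a real run() result:
const stub = {
  prompt_blueprint: {
    prompt_template: {
      messages: [{ role: "assistant", content: [{ type: "text", text: "Hello!" }] }],
    },
  },
};

console.log(lastMessageText(stub)); // "Hello!"
```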

Parameters

  • prompt_name / promptName (str, required): The name of the prompt to run.
  • prompt_version / promptVersion (int, optional): Specific version of the prompt to use.
  • prompt_release_label / promptReleaseLabel (str, optional): Release label of the prompt (e.g., “prod”, “staging”).
  • input_variables / inputVariables (Dict[str, Any], optional): Variables to be inserted into the prompt template.
  • tags (List[str], optional): Tags to associate with this run.
  • metadata (Dict[str, str], optional): Additional metadata for the run.
  • model_parameter_overrides / modelParameterOverrides (Union[Dict[str, Any], None], optional): Model-specific parameter overrides.
  • stream (bool, default=False): Whether to stream the response.
  • provider (str, optional): The LLM provider to use (e.g., “openai”, “anthropic”, “google”). This is useful if you want to override the provider specified in the prompt template.
  • model (str, optional): The model to use (e.g., “gpt-4o”, “claude-3-7-sonnet-latest”, “gemini-2.5-flash”). This is useful if you want to override the model specified in the prompt template.

Return Value

The method returns a dictionary (Python) or object (JavaScript) with the following keys:
  • request_id: Unique identifier for the request.
  • raw_response: The raw response from the LLM provider.
  • prompt_blueprint: The prompt blueprint used for the request.

Advanced Usage

Streaming

To stream the response:
JavaScript
const stream = await pl.run({
  promptName: "your-prompt",
  stream: true
});

for await (const chunk of stream) {
  // Access raw streaming response
  console.log(chunk.raw_response);

  // Access progressively built prompt blueprint
  if (chunk.prompt_blueprint) {
    const currentResponse = chunk.prompt_blueprint.prompt_template.messages.slice(-1)[0];
    if (currentResponse.content) {
      console.log("Current response:", currentResponse.content);
    }
  }
}
When streaming is enabled, each chunk includes both the raw streaming response and the progressively built prompt_blueprint, allowing you to track how the response is constructed in real time. The request_id is only included in the final chunk.
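Since the request_id arrives only on the final chunk, a consumer needs to hold onto it while iterating. The async generator below is a stand-in for the stream returned by pl.run({ ..., stream: true }); the chunk shapes follow the description above:

```javascript
// Stand-in for the SDK's stream: chunks carry raw_response and
// prompt_blueprint; request_id appears only on the final chunk.
async function* fakeStream() {
  yield { raw_response: { delta: "Hel" }, prompt_blueprint: null };
  yield { raw_response: { delta: "lo" }, prompt_blueprint: null };
  yield { raw_response: {}, prompt_blueprint: {}, request_id: "req_456" };
}

async function consume(stream) {
  let requestId = null;
  for await (const chunk of stream) {
    if (chunk.request_id) requestId = chunk.request_id; // only on the final chunk
  }
  return requestId;
}

consume(fakeStream()).then((requestId) => {
  console.log("request_id:", requestId); // "req_456"
});
```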

Using Different Versions or Release Labels

JavaScript
const response = await pl.run({
  promptName: "your-prompt",
  promptVersion: 2,  // or
  promptReleaseLabel: "staging"
});

Adding Tags and Metadata

JavaScript
const response = await pl.run({
  promptName: "your-prompt",
  tags: ["test", "experiment"],
  metadata: { userId: "12345" }
});

Overriding Model Parameters

You can also override the provider and model at runtime to run the prompt against a different LLM provider or model than the one specified in the prompt template. PromptLayer automatically builds the correct llm_kwargs for the specified provider and model, filling in default values for that provider's parameters.
Provider-Specific Schema Notice: The llm_kwargs and raw_response objects have provider-specific structures that may change as LLM providers update their APIs. PromptLayer passes through the native format required by each provider. For stable, provider-agnostic prompt data, use prompt_blueprint.prompt_template instead of relying on the structure of provider-specific objects.
JavaScript
const response = await pl.run({
  promptName: "your-prompt",
  provider: "openai",  // or "anthropic", "google", etc.
  model: "gpt-4"  // or "claude-2", "gemini-1.5-pro", etc.
});
Make sure to set both model and provider in order to run the request against the correct LLM provider with the correct parameters.
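The override semantics can be pictured as a simple merge: runtime modelParameterOverrides take precedence over the defaults PromptLayer fills in for the chosen provider and model. The default values below are hypothetical, not the SDK's actual defaults:

```javascript
// Illustrative sketch only: overrides win over provider/model defaults.
// These default values are made up for the example.
const providerDefaults = { temperature: 1.0, max_tokens: 1024 };
const modelParameterOverrides = { temperature: 0.2 };

// Later entries in the spread take precedence:
const llmKwargs = { ...providerDefaults, ...modelParameterOverrides };
console.log(llmKwargs); // { temperature: 0.2, max_tokens: 1024 }
```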

Running Workflows

Use runWorkflow() to execute a PromptLayer Workflow from the JavaScript SDK. Workflows are multi-step pipelines that can combine prompt, tool, code, and conditional nodes.
JavaScript
import { PromptLayer } from "promptlayer";

const pl = new PromptLayer({ apiKey: "your_api_key" });

const response = await pl.runWorkflow({
  workflowName: "Data Analysis Workflow",
  inputVariables: { dataset_url: "https://example.com/data.csv" }
});

console.log(response);

Workflow Parameters

  • workflowName (string, required): The Workflow name to run.
  • inputVariables (object, optional): Variables to pass into the Workflow.
  • metadata (object, optional): Metadata to attach to the Workflow run.
  • workflowLabelName (string, optional): Label name for the Workflow version, such as "production".
  • workflowVersion (number, optional): Specific Workflow version number to run.
  • returnAllOutputs (boolean, default=false): Whether to return outputs for every Workflow node.

Workflow Return Value

By default, runWorkflow() returns the final output node’s value. When returnAllOutputs is true, it returns an object keyed by node name, including each node’s status, value, errors, and whether the node is an output node.
JavaScript
const response = await pl.runWorkflow({
  workflowName: "Data Analysis Workflow",
  inputVariables: { dataset_url: "https://example.com/data.csv" },
  metadata: { user_id: "12345" },
  workflowLabelName: "production",
  returnAllOutputs: true
});
Example response with returnAllOutputs: true:
{
  "Load Dataset": {
    "status": "SUCCESS",
    "value": "Loaded 100 rows",
    "error_message": null,
    "raw_error_message": null,
    "is_output_node": false
  },
  "Summarize Dataset": {
    "status": "SUCCESS",
    "value": "The dataset contains customer feedback grouped by region.",
    "error_message": null,
    "raw_error_message": null,
    "is_output_node": true
  }
}
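With returnAllOutputs: true, it's common to want just the output nodes from the per-node results. The helper below is application code, not an SDK function; the stub mirrors the example response above:

```javascript
// Helper sketch: keep only the nodes flagged as output nodes.
function outputNodes(results) {
  return Object.fromEntries(
    Object.entries(results).filter(([, node]) => node.is_output_node)
  );
}

// Stub mirroring the example response above:
const results = {
  "Load Dataset": { status: "SUCCESS", value: "Loaded 100 rows", is_output_node: false },
  "Summarize Dataset": { status: "SUCCESS", value: "Customer feedback by region", is_output_node: true },
};

// Logs the single output node name: "Summarize Dataset"
console.log(Object.keys(outputNodes(results)));
```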

SDK Cache

The PromptLayer JavaScript SDK supports an in-memory template cache to reduce fetch latency and improve resilience during transient PromptLayer API failures. Enable the cache when you want to:
  • Reduce repeated template fetch latency
  • Lower dependency on real-time PromptLayer API availability
  • Continue serving recently known-good templates during temporary API issues
Pass cacheTtlSeconds when creating a client:
import { PromptLayer } from "promptlayer";

const promptLayerClient = new PromptLayer({
  apiKey: process.env.PROMPTLAYER_API_KEY,
  cacheTtlSeconds: 300, // each prompt template is cached for 5 minutes
});

How It Works

When the cache is enabled, templates.get() and run() use this flow:
  1. Return a fresh cached template if available.
  2. If cache is stale or missing, fetch from API and refresh cache.
  3. If API fetch fails with a transient error and a stale template exists, serve the stale template.
Stale fallback applies to transient API failures such as retryable HTTP errors (including 429 and 5xx) and network-level issues.
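The flow above can be pictured with a toy TTL cache. The SDK's real cache is internal; this sketch only mirrors the fresh/stale logic, using an injectable clock so the TTL boundary is easy to see:

```javascript
// Toy sketch of the cache flow: fresh hit -> refetch when stale ->
// stale value served only if the API call fails. Not SDK code.
function makeTtlCache(ttlMs, now = Date.now) {
  const entries = new Map();
  return {
    set(key, value) { entries.set(key, { value, at: now() }); },
    get(key) {
      const entry = entries.get(key);
      if (!entry) return { hit: false };
      const stale = now() - entry.at > ttlMs;
      return { hit: true, stale, value: entry.value };
    },
  };
}

let clock = 0;
const cache = makeTtlCache(300_000, () => clock); // 300s TTL, fake clock
cache.set("my-prompt", { version: 3 });
console.log(cache.get("my-prompt").stale); // false: fresh, served from cache
clock = 301_000;
console.log(cache.get("my-prompt").stale); // true: refetch; stale copy served only on transient API failure
```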

Important Behavior

  • Cache is in-memory and process-local (not shared across machines/containers).
  • Requests with metadataFilters or modelParameterOverrides bypass cache.
  • Publishing via templates.publish() invalidates cache for that prompt name.
  • Call promptLayerClient.invalidate("prompt-name") to clear one prompt from cache.
  • Call promptLayerClient.invalidate() to clear the full SDK cache.

Practical Guidance

  • Start with cacheTtlSeconds between 60 and 300.
  • Use a shorter TTL if your prompts change frequently.
  • Use a longer TTL if your prompts are stable and lower latency matters most.
  • Keep throwOnError: true if you want hard failures when no cache entry is available.

Custom Logging with logRequest

If you need more control — for example, using your own LLM client, a custom provider, or background processing — you can use logRequest to manually log requests to PromptLayer.

OpenAI Example

import { PromptLayer } from "promptlayer";
import OpenAI from "openai";

const plClient = new PromptLayer();
const openai = new OpenAI();

const messages = [{ role: "user", content: "Say this is a test" }];

const requestStartTime = Date.now();
const completion = await openai.chat.completions.create({
  messages,
  model: "gpt-4o",
});
const requestEndTime = Date.now();

await plClient.logRequest({
  provider: "openai",
  model: "gpt-4o",
  input: {
    type: "chat",
    messages: messages.map(m => ({
      role: m.role,
      content: [{ type: "text", text: m.content }]
    }))
  },
  output: {
    type: "chat",
    messages: [{
      role: "assistant",
      content: [{ type: "text", text: completion.choices[0].message.content }]
    }]
  },
  requestStartTime,
  requestEndTime,
  tags: ["test"]
});

Anthropic Example

import { PromptLayer } from "promptlayer";
import Anthropic from "@anthropic-ai/sdk";

const plClient = new PromptLayer();
const anthropic = new Anthropic();

const messages = [{ role: "user", content: "How many toes do dogs have?" }];

const requestStartTime = Date.now();
const response = await anthropic.messages.create({
  messages,
  model: "claude-sonnet-4-20250514",
  max_tokens: 100,
});
const requestEndTime = Date.now();

await plClient.logRequest({
  provider: "anthropic",
  model: "claude-sonnet-4-20250514",
  input: {
    type: "chat",
    messages: messages.map(m => ({
      role: m.role,
      content: [{ type: "text", text: m.content }]
    }))
  },
  output: {
    type: "chat",
    messages: [{
      role: "assistant",
      content: [{ type: "text", text: response.content[0].text }]
    }]
  },
  requestStartTime,
  requestEndTime,
  tags: ["test-anthropic-1"]
});
See the Custom Logging documentation and Log Request API Reference for full details.

Error Handling

PromptLayer provides robust error handling with configurable error behavior for JavaScript/TypeScript applications.

Using throwOnError

By default, PromptLayer throws errors when API requests fail. You can control this behavior using the throwOnError parameter:
import { PromptLayer } from "promptlayer";

// Default behavior: throws errors on API failures
const promptLayerClient = new PromptLayer({ 
  apiKey: "pl_****", 
  throwOnError: true 
});

// Alternative: logs warnings instead of throwing errors
const warnOnlyClient = new PromptLayer({
  apiKey: "pl_****",
  throwOnError: false
});
Example with error handling:
import { PromptLayer } from "promptlayer";

const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

try {
  // Attempt to get a template that might not exist
  const template = await promptLayerClient.templates.get("NonExistentTemplate");
  console.log(template);
} catch (error) {
  console.error("Failed to get template:", error.message);
}
Example with warnings (throwOnError: false):
import { PromptLayer } from "promptlayer";

// Initialize with throwOnError: false to get warnings instead of errors
const promptLayerClient = new PromptLayer({ 
  apiKey: process.env.PROMPTLAYER_API_KEY,
  throwOnError: false 
});

// This will log a warning instead of throwing an error if the template doesn't exist
const template = await promptLayerClient.templates.get("NonExistentTemplate");
// Returns null if not found, with a warning logged to console
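Since templates.get() resolves to null on failure in this mode, callers should guard before using the result. One option is an application-level fallback template; fallbackTemplate below is hypothetical, not something the SDK provides:

```javascript
// Guard sketch for throwOnError: false. `fallbackTemplate` is a
// hypothetical application-level default, not provided by the SDK.
const fallbackTemplate = { prompt_template: { messages: [] } };

function templateOrFallback(template) {
  // Nullish coalescing: use the fetched template unless it is null/undefined.
  return template ?? fallbackTemplate;
}

console.log(templateOrFallback(null) === fallbackTemplate); // true
```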

Automatic Retry Mechanism

PromptLayer includes a built-in retry mechanism, using the industry-standard p-retry library, to handle transient failures gracefully and keep your application resilient when temporary issues occur.

Retry Behavior:
  • Total Attempts: 4 attempts (1 initial + 3 retries)
  • Exponential Backoff: Retries wait progressively longer between attempts (2s, 4s, 8s)
  • Max Wait Time: 15 seconds maximum wait between retries
What Triggers Retries:
  • 5xx Server Errors: Internal server errors, service unavailable, etc.
  • 429 Rate Limit Errors: requests rejected because the API rate limit was exceeded.
  • Network Errors: Connection failures (ENOTFOUND, ECONNREFUSED, ETIMEDOUT, etc.)
What Fails Immediately (No Retries):
  • 4xx Client Errors: Bad requests, authentication errors, not found, validation errors, etc. (except 429 rate-limit errors, which are retried).
The retry mechanism operates transparently in the background. You don’t need to implement retry logic yourself; PromptLayer handles it automatically for recoverable errors.

Logging

PromptLayer logs info to the console before each retry attempt. When a retry occurs, you’ll see log messages like:
INFO: Retrying PromptLayer API request in 2.0 seconds...
INFO: Retrying PromptLayer API request in 4.0 seconds...
INFO: Retrying PromptLayer API request in 8.0 seconds...
To capture these logs in your application, you can monitor console.info output or use a logging library that intercepts console methods.
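One way to capture those lines is plain console interception: wrap console.info before constructing the client. This is ordinary JavaScript, not a PromptLayer API:

```javascript
// Intercept console.info and collect PromptLayer retry log lines.
const retryLogs = [];
const originalInfo = console.info;
console.info = (...args) => {
  const line = args.join(" ");
  if (line.includes("Retrying PromptLayer API request")) retryLogs.push(line);
  originalInfo(...args); // still forward to the original console
};

// Simulated retry line, to show the interception in isolation:
console.info("INFO: Retrying PromptLayer API request in 2.0 seconds...");
console.log(retryLogs.length); // 1
```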

Edge

PromptLayer can be used with Edge functions. Use the run() method, logRequest, or our REST API directly.
import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

// Add this line
export const runtime = "edge";

export const POST = async () => {
  const response = await promptLayerClient.run({
    promptName: "my-prompt",
    inputVariables: { question: "What is the capital of France?" },
  });
  const content = response.prompt_blueprint.prompt_template.messages.slice(-1)[0].content;
  return Response.json(content);
};