To get started, create an account by clicking “Log in” on PromptLayer. Once logged in, click the button to create an API key and save it in a secure location (see our Guide to Using Env Vars).
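For example, you can keep the key out of your source code by exporting it as an environment variable and reading it with process.env (a minimal sketch; the PROMPTLAYER_API_KEY name matches the snippets later in this guide):

// In your shell or deployment config (never committed to source control):
//   export PROMPTLAYER_API_KEY=pl_****

// Then read it at runtime:
const apiKey = process.env.PROMPTLAYER_API_KEY;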

Once you have that all set up, install PromptLayer using npm.

npm install promptlayer

Set up a PromptLayer client in your JavaScript file.

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer();

Optionally, you can specify the API key in the client.

const promptLayerClient = new PromptLayer({ apiKey: "pl_****" });
PromptLayer’s JavaScript library is not compatible with client-side (browser) environments. It is designed for use exclusively in server-side runtimes such as Node.js, Bun, or Deno.

OpenAI

In the JavaScript file where you call the OpenAI API, include the following lines. They enable PromptLayer to track your requests without additional code modifications.

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer();

const OpenAI = promptLayerClient.OpenAI;
const openai = new OpenAI();

You can then use openai as you would if you had imported it directly.
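For example, a standard chat completion call works unchanged (a minimal sketch; nothing PromptLayer-specific is required for logging to happen):

// Standard OpenAI usage; the wrapped client logs the request to PromptLayer.
const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
});
console.log(completion.choices[0].message.content);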

Your OpenAI API Key is never sent to our servers. All OpenAI requests are made locally from your machine, PromptLayer just logs the request.

Adding PromptLayer tags: pl_tags

PromptLayer allows you to add tags through the pl_tags argument, which lets you track and group requests in the dashboard.

Tags are not required but we recommend them!

openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
  pl_tags: ["test"],
});

Returning request id: return_pl_id

PromptLayer provides an option to retrieve the request id using the return_pl_id argument. When set to true, the call returns a two-element array: the first element is the API response and the second is the PromptLayer request id.

// With return_pl_id, the call resolves to a two-element array.
const [response, plRequestId] = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
  return_pl_id: true,
});

TypeScript

The PromptLayer JavaScript library also supports TypeScript. You can type cast the OpenAI class to typeof BaseOpenAI to get the correct typings.

import BaseOpenAI from "openai";

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

const OpenAI: typeof BaseOpenAI = promptLayerClient.OpenAI;

const openai = new OpenAI();

You can also use our custom attributes pl_tags and return_pl_id with TypeScript. You will need to add a @ts-ignore comment to suppress the resulting TypeScript error.

openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
  // @ts-ignore
  return_pl_id: true,
});

This is because the pl_tags and return_pl_id arguments are not part of the OpenAI API.
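If you would rather not repeat @ts-ignore, one possible alternative (a sketch; PLChatParams is a hypothetical helper type of our own, not part of either library) is to widen OpenAI’s parameter type with the PromptLayer extras:

import type { ChatCompletionCreateParamsNonStreaming } from "openai/resources/chat/completions";

// Hypothetical helper type: OpenAI's params plus PromptLayer's extra fields.
type PLChatParams = ChatCompletionCreateParamsNonStreaming & {
  pl_tags?: string[];
  return_pl_id?: boolean;
};

openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
  return_pl_id: true,
} as PLChatParams);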

Anthropic

Using Anthropic with PromptLayer is very similar to using OpenAI.

Below is an example of the one-line replacement:

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

// Instead of `import Anthropic from "@anthropic-ai/sdk";` ->
const Anthropic = promptLayerClient.Anthropic;
const anthropic = new Anthropic();

// With return_pl_id, the result is a two-element array: [response, requestId].
const [response, plRequestId] = await anthropic.completions.create({
  prompt: `${Anthropic.HUMAN_PROMPT} How many toes do dogs have?${Anthropic.AI_PROMPT}`,
  stop_sequences: [Anthropic.HUMAN_PROMPT],
  model: "claude-v1-100k",
  max_tokens_to_sample: 100,
  pl_tags: ["test-anthropic-1"],
  return_pl_id: true,
});
console.log(response);

Here is how it looks on the dashboard:

Edge

PromptLayer can be used with Edge functions. Please use either our JavaScript library or the REST API directly.

import BaseAnthropic from "@anthropic-ai/sdk";

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

const Anthropic: typeof BaseAnthropic = promptLayerClient.Anthropic;

const anthropic = new Anthropic();

// Add this line
export const runtime = "edge";

export const POST = async () => {
  const response = await anthropic.messages.create({
    messages: [
      {
        role: "user",
        content: "What is the capital of France?",
      },
    ],
    max_tokens: 100,
    model: "claude-3-sonnet-20240229",
  });
  return Response.json(response.content[0].text);
};

Or use streaming. Here’s another example that can be run in Next.js on the edge runtime.

import BaseAnthropic from "@anthropic-ai/sdk";
import { AnthropicStream, StreamingTextResponse } from "ai";

import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer({ apiKey: process.env.PROMPTLAYER_API_KEY });

// Add this line
export const runtime = "edge";

const Anthropic: typeof BaseAnthropic = promptLayerClient.Anthropic;
const anthropic = new Anthropic();

export const POST = async (request: Request) => {
  const { messages } = await request.json();
  const response = await anthropic.messages.create({
    messages,
    max_tokens: 100,
    model: "claude-3-sonnet-20240229",
    stream: true,
  });
  const stream = AnthropicStream(response);
  return new StreamingTextResponse(stream);
};