PromptLayer works seamlessly with LangChain. LangChain is a popular Python library aimed at assisting in the development of LLM applications. It provides a lot of helpful features like chains, agents, and memory.

Using PromptLayer with LangChain is simple. See the examples below:

There are two main ways to use LangChain with PromptLayer. The recommended way (as of LangChain version 0.0.221) is to use the PromptLayerCallbackHandler. Alternatively, you can use a PromptLayer-specific LLM or Chat model.

Using Callbacks

Right now, callbacks only work with LangChain in Python.

This is the recommended way to use LangChain with PromptLayer. It is simpler and more extensible than the other method below.

Every LLM supported by LangChain works with PromptLayer's callback.

Start by importing PromptLayerCallbackHandler. This callback function will log your request after each LLM response.

import promptlayer # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler

Now, when instantiating a model, just include the PromptLayerCallbackHandler in the callbacks.

from langchain.llms import OpenAI

llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain"])],
)

🎉 That's it! 🎉

Full Examples

Below are some full examples using PromptLayer with various LLMs through LangChain.

OpenAI
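An OpenAI completion example follows the same pattern as the snippet above. A minimal sketch (the prompt and tags are illustrative; an OpenAI API key must be set in your environment):

```python
import promptlayer # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler

from langchain.llms import OpenAI

llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "openai"])],
)

response = llm("Once upon a time, ")
```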

GPT4All

import promptlayer # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler

from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)

response = model("Once upon a time, ", callbacks=[
    PromptLayerCallbackHandler(pl_tags=["langchain", "gpt4all"])])

HuggingFace Hub

You can use the HuggingFace wrapper to try out many different LLMs.

import promptlayer # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler

from langchain import HuggingFaceHub

falcon_repo_id = "tiiuae/falcon-7b-instruct"

llm = HuggingFaceHub(repo_id=falcon_repo_id, 
        huggingfacehub_api_token="<HUGGINGFACEHUB_API_TOKEN>", 
        model_kwargs={"temperature": 1.0, "max_length": 64}, 
        callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "huggingface"])])

llm("How do you make a layer cake?")

Async Requests

import promptlayer # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler

# OpenAI Completion Model
from langchain.llms import OpenAI

import asyncio

openai_llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "async"])],
)

async def async_generate(llm):
    resp = await llm.agenerate(['My name is "'])
    print(resp)

asyncio.run(async_generate(openai_llm))

PromptLayer Request ID

The PromptLayer request ID is used to tag requests with metadata, scores, associated prompt templates, and more.

PromptLayerCallbackHandler can optionally take in its own callback function that takes the request ID as an argument.
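For example, you can pass a function through the handler's `pl_id_callback` parameter. A minimal sketch (the function body is illustrative; the handler construction, shown in a comment, requires langchain and promptlayer to be installed):

```python
# Illustrative callback: receives the PromptLayer request ID after each
# request is logged, so you can attach scores or metadata to it later.
def pl_id_callback(pl_request_id):
    print("PromptLayer request ID:", pl_request_id)
    # e.g. promptlayer.track.score(request_id=pl_request_id, score=100)
    return pl_request_id  # returned for convenience; not required

# Pass it when constructing the handler:
# handler = PromptLayerCallbackHandler(
#     pl_id_callback=pl_id_callback,
#     pl_tags=["langchain"],
# )
```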

PromptLayer OpenAI Models

Alternatively, the older (but still supported) way to use LangChain with PromptLayer is through specific PromptLayer LLMs and Chat Models.

Please note: Do not use these models in addition to the callback. Use one or the other.

See below for examples:
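A minimal sketch of this approach, using the `PromptLayerOpenAI` wrapper from LangChain (the prompt and tags are illustrative; OpenAI and PromptLayer API keys must be set in your environment):

```python
import promptlayer # Don't forget this 🍰
from langchain.llms import PromptLayerOpenAI

# A PromptLayer-specific LLM logs requests itself — no callback needed.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
response = llm("How do you make a layer cake?")
```

For chat models, LangChain provides the analogous `PromptLayerChatOpenAI` in `langchain.chat_models`.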