The Prompt Registry supports all major tool calling formats, including OpenAI tools, OpenAI functions, Anthropic tools, and Gemini tools. You can create tool schemas interactively, and your prompt template will work seamlessly on any LLM. Tool calling in PromptLayer is model-agnostic.
Learn more about when you should use tools on our blog.

What is Tool Calling?

Tool calling (previously known as function calling) is a powerful feature that allows Large Language Models (LLMs) to return structured data and invoke predefined functions with JSON arguments. This capability enables more complex interactions and structured outputs from LLMs. Key benefits of tool calling include:
  • Structured Outputs: Tool arguments are always in JSON format, enforced by JSONSchema at the API level. See our Structured Outputs documentation for more details.
  • Efficient Communication: Tool calling is built into the model itself, reducing token usage and improving the model’s understanding of available tools.
  • Model Routing: Facilitates setting up modular prompts with specific responsibilities.
  • Prompt Injection Protection: Strict schema definitions at the model level make it harder to “jailbreak” the model.

Creating Custom Tools

Creating Visually

Tools can be defined, called, and set up visually through the Prompt Registry.

Publishing Programmatically

To publish a prompt template with tools programmatically, you can add the arguments tools and tool_choice to your prompt_template object. This is similar to how you would publish a regular prompt template.
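As a sketch, a publish call with tools might look like the following, assuming the PromptLayer Python SDK's `templates.publish` method; the prompt name, system message, and `get_weather` schema are hypothetical:

```python
import os

# Hypothetical template: the prompt name, system message, and get_weather
# schema are illustrative placeholders, not values from the docs.
prompt_template = {
    "type": "chat",
    "messages": [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a weather assistant."}],
        }
    ],
    # Tool schemas use standard JSON Schema for their parameters.
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide when to call the tool
}

if os.environ.get("PROMPTLAYER_API_KEY"):  # only publish when credentials exist
    from promptlayer import PromptLayer

    pl = PromptLayer()
    pl.templates.publish(
        {"prompt_name": "weather-agent", "prompt_template": prompt_template}
    )
```

The `tools` and `tool_choice` keys sit directly on the `prompt_template` object, alongside the messages.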

Tool Variables

Tool variables allow you to dynamically inject tools at runtime through input variables, rather than defining them statically in your prompt template. This is useful when:
  • Different customers or tenants need different sets of tools
  • Your available tools change based on runtime context (e.g., user permissions, feature flags)
  • You want to manage tool definitions outside of your prompt template

Adding a Tool Variable

  1. Open the Tool & Output Editor in the Prompt Registry
  2. Click the dropdown arrow on the Add Tool button and select Tool Variable
  3. Enter a variable name (e.g., dynamic_tools) and click Add
  4. The variable will appear in your Input Variable Sets alongside other template variables
Tool variables can coexist with static tool definitions. For example, you might have a fixed get_weather tool alongside a customer_tools variable that injects customer-specific tools at runtime.

Passing Tools at Runtime

When running a prompt with tool variables, pass the tool definitions as an array in your input_variables:
from promptlayer import PromptLayer

pl = PromptLayer()  # reads the API key from the PROMPTLAYER_API_KEY environment variable

response = pl.run(
    prompt_name="my-agent",
    input_variables={
        "dynamic_tools": [
            {
                "name": "get_knowledge",
                "description": "Search the knowledge base",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Search query"}
                    },
                    "required": ["query"]
                }
            },
            {
                "name": "create_ticket",
                "description": "Create a support ticket",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "priority": {"type": "string", "enum": ["low", "medium", "high"]}
                    },
                    "required": ["title"]
                }
            }
        ]
    }
)
Each tool definition should include name, description, and parameters (using JSON Schema format). Anthropic-style input_schema is also accepted as an alternative to parameters.
Tool variables are expanded before the prompt is sent to the LLM provider. The model sees them as regular tool definitions, so they work with any provider that supports tool calling.
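To illustrate the two accepted schema keys, here is the same hypothetical `get_knowledge` tool written in both styles; the keys carry identical JSON Schema:

```python
# One JSON Schema, two accepted spellings of the schema key.
schema = {
    "type": "object",
    "properties": {"query": {"type": "string", "description": "Search query"}},
    "required": ["query"],
}

# OpenAI-style definition uses "parameters".
openai_style = {
    "name": "get_knowledge",
    "description": "Search the knowledge base",
    "parameters": schema,
}

# Anthropic-style definition uses "input_schema" for the same content.
anthropic_style = {
    "name": "get_knowledge",
    "description": "Search the knowledge base",
    "input_schema": schema,
}

# Either form can appear in the tool-variable array.
assert openai_style["parameters"] == anthropic_style["input_schema"]
```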

Built-in Tools

PromptLayer supports provider-native built-in tools across multiple LLM providers. These pre-built tools enable your prompts to access real-time information, execute code, search through files, and more, all without writing custom function definitions. Built-in tools are available for OpenAI and Azure OpenAI (Responses API), Anthropic, Google (Gemini), and Vertex AI, as detailed below.

How to Add Built-in Tools

  1. Open your prompt in the Prompt Registry and navigate to the prompt editor
  2. Select your LLM provider in the provider settings at the bottom of the editor
  3. Open the Function & Output Schema Editor by clicking the Functions & Output button
  4. Click the Built-in tools button (on the right side) to browse available tools for your selected provider
  5. Click Add Tool for the tool you want to use — it will appear in your function definitions list
  6. Configure tool_choice (optional) — set to auto to let the model decide when to use the tool
  7. Save and run your prompt — the model will use the built-in tools when appropriate
For OpenAI and Azure OpenAI, built-in tools require the Responses API. Switch from Chat Completions API to Responses API in the API dropdown before adding built-in tools.

OpenAI and Azure OpenAI (Responses API)

OpenAI’s Responses API includes powerful pre-built tools that work seamlessly with PromptLayer. These tools are available for both the OpenAI and Azure OpenAI providers.

Available Tools

  • Web Search: Get fast, up-to-date answers with citations from the web
  • File Search: Search through uploaded files and documents using Vector Stores
  • Code Interpreter: Write and execute Python code in a secure, sandboxed environment
  • Image Generation: Generate or edit images using a text prompt
  • MCP: Connect to remote MCP servers or OpenAI-maintained connectors for external tools
  • Shell: Execute shell commands in a managed environment
  • Apply Patch: Propose structured diffs to create, update, or delete files
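Under the hood, a Responses API built-in tool is just a `type` entry in the `tools` array rather than a full JSON Schema. A hedged sketch using the OpenAI Python SDK (the model name and question are placeholders, and the exact tool type string can vary by model, e.g. `web_search` vs `web_search_preview`):

```python
import os

# Built-in tools are referenced by type; no parameters schema is needed.
tools = [{"type": "web_search"}]

if os.environ.get("OPENAI_API_KEY"):  # only call out when credentials exist
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(
        model="gpt-4o",  # placeholder model name
        tools=tools,
        input="What changed in the latest Python release?",
    )
    print(response.output_text)
```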

Using File Search with Vector Stores

OpenAI’s File Search tool enables semantic search over your documents using Vector Stores. This powerful feature allows your prompts to automatically retrieve relevant information from uploaded files during inference, making it perfect for building RAG (Retrieval-Augmented Generation) systems, knowledge bases, and documentation assistants.
Setting Up File Search
For the File Search tool, you’ll need to create and attach Vector Stores containing your documents:
  1. Enable File Search by following the steps above to add it as a built-in tool
  2. Create and configure a Vector Store:
    • Click Manage Vector Stores in the File Search configuration
    • Click Create to make a new vector store with a custom name
    • Upload files via drag-and-drop or file selection (single or multiple files)
    • View storage usage, file counts, and manage attached files
  3. Attach Vector Stores to your prompt:
    • Select one or more vector stores using checkboxes
    • Click Save Selection to attach them
    • The vector store IDs are added to your tool configuration
  4. Run your prompt:
    • The LLM will automatically search vector stores when relevant
    • Retrieved context is used to generate informed responses
    • Sources can be traced back to specific documents
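The steps above can be sketched with the OpenAI Python SDK directly; the store name, file path, and question are placeholders, and the `vector_stores` method names assume a recent SDK version:

```python
import os

# The file_search tool references vector stores by ID instead of a schema.
def file_search_tool(vector_store_ids):
    return {"type": "file_search", "vector_store_ids": list(vector_store_ids)}

if os.environ.get("OPENAI_API_KEY"):  # only call out when credentials exist
    from openai import OpenAI

    client = OpenAI()

    # Create a vector store and index a placeholder document into it.
    store = client.vector_stores.create(name="docs-kb")
    with open("handbook.pdf", "rb") as f:  # placeholder file
        client.vector_stores.files.upload_and_poll(
            vector_store_id=store.id, file=f
        )

    # Attach the store ID to the file_search built-in tool and run.
    response = client.responses.create(
        model="gpt-4o",  # placeholder model name
        tools=[file_search_tool([store.id])],
        input="What does the handbook say about onboarding?",
    )
    print(response.output_text)
```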

Using Code Interpreter

OpenAI’s Code Interpreter tool enables your prompts to write and execute Python code within a secure, sandboxed environment. This powerful feature allows for dynamic problem-solving, data analysis, visualization generation, and file processing—all without writing custom function definitions. To enable Code Interpreter, follow the steps above to add it as a built-in tool. The tool uses container type "auto" by default, which automatically manages the execution environment. The LLM will automatically use Code Interpreter when it needs to perform calculations, analyze data, create visualizations, or process files. For detailed information about Code Interpreter’s capabilities, file handling, and configuration options, see the OpenAI Code Interpreter Guide.
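As a sketch, the default `"auto"` container described above corresponds to a tool entry like the following (model name and task are placeholders):

```python
import os

# Code Interpreter is enabled by type; "auto" lets the platform manage
# the sandbox container, matching the default described above.
code_interpreter = {"type": "code_interpreter", "container": {"type": "auto"}}

if os.environ.get("OPENAI_API_KEY"):  # only call out when credentials exist
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(
        model="gpt-4o",  # placeholder model name
        tools=[code_interpreter],
        input="Compute the standard deviation of [3, 7, 7, 19].",
    )
    print(response.output_text)
```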

Using Image Generation

OpenAI’s Image Generation tool enables the model to generate or edit images using a text prompt directly within a conversation. When the model determines that an image should be created, it invokes the image_generation tool with an optimized prompt and returns the generated image in the response. To enable Image Generation, follow the steps above to add it as a built-in tool. Once enabled, the model will automatically generate images when the conversation calls for it. Generated images appear inline in the response with:
  • A collapsible revised prompt showing the optimized text the model used for generation
  • Generation parameters such as size, quality, background, and output format
  • The generated image displayed in a rich card format
The model can generate multiple images in a single response — consecutive image generation calls are grouped together for clean display. The model may also include descriptive text alongside the generated images. For a comprehensive guide to image generation across all providers (including the dedicated Images API and Gemini native image generation), see the Image Generation documentation.


Anthropic

Anthropic provides native built-in tools for Claude models that enable code execution, web search, and system-level interactions directly within conversations.

Available Tools

  • Web Search: Search the web for real-time information. Claude uses this automatically when current information would help answer a question. Results include citations.
  • Bash: Execute bash commands within a sandboxed environment. Useful for automation workflows and system interactions.
  • Code Execution: Execute code in a sandboxed environment with access to bash and a text editor. Ideal for data analysis, computation, and dynamic problem-solving.
  • Text Editor: View, create, and edit text files with commands like view, str_replace, create, and insert. Enables Claude to work with files during conversations.
Anthropic built-in tools are available for all Claude models that support tool use. They work with both the direct Anthropic provider and Claude models on Vertex AI.
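For reference, Anthropic's built-in tools are declared with a versioned type string plus a fixed name. A hedged sketch with the Anthropic Python SDK; the version suffix in the type string changes between releases, so check Anthropic's docs for the current one:

```python
import os

# Versioned type string plus fixed name; the date suffix may differ
# from the current release.
web_search = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}

if os.environ.get("ANTHROPIC_API_KEY"):  # only call out when credentials exist
    import anthropic

    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=[web_search],
        messages=[{"role": "user", "content": "What is in the news today?"}],
    )
    print(message.content)
```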


Google (Gemini)

Google provides native built-in tools for Gemini models that enable web grounding, location data, code execution, URL analysis, and file search capabilities.

Available Tools

  • Google Search: Ground model responses with real-time web search results using Google Search. Provides up-to-date information with source citations.
  • Google Maps: Ground responses with Google Maps place data including reviews, addresses, and business hours.
  • Code Execution: Execute Python code in a sandboxed environment for data analysis, calculations, and dynamic computations.
  • URL Context: Retrieve and analyze content from URLs provided in the prompt. Allows the model to process web page content during conversations.
  • File Search: Search through uploaded files using semantic retrieval. Configure file search stores to index documents for automatic retrieval during conversations.
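For comparison, Google Search grounding can be configured directly with the google-genai Python SDK; the model name and question below are placeholders:

```python
import os

MODEL = "gemini-2.5-flash"  # placeholder model name

if os.environ.get("GOOGLE_API_KEY"):  # only call out when credentials exist
    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model=MODEL,
        contents="Who won the most recent Champions League final?",
        config=types.GenerateContentConfig(
            # Enable the Google Search grounding built-in tool.
            tools=[types.Tool(google_search=types.GoogleSearch())]
        ),
    )
    print(response.text)
```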

Using File Search with Google

Google’s File Search tool uses file search stores for document indexing and retrieval:
  1. Add the File Search tool from the built-in tools menu
  2. Configure a file search store — specify the store name(s) that contain your indexed documents
  3. Run your prompt — Gemini will automatically search through indexed documents when relevant context is needed


Vertex AI

Vertex AI supports built-in tools from both Google and Anthropic, depending on the model family you are using. PromptLayer automatically shows the correct set of tools based on your selected model.

For Gemini Models on Vertex AI

When using Gemini models (e.g., gemini-3.1-pro-preview), the following Google-native tools are available:
  • Web Search: Ground responses with real-time Google Search results
  • Google Maps: Access Google Maps place data for location-based grounding
  • Code Execution: Execute Python code in a sandboxed environment
  • URL Context: Retrieve and analyze content from URLs in the prompt

For Claude Models on Vertex AI

When using Claude models (e.g., claude-sonnet-4-20250514) through Vertex AI, the following Anthropic-native tools are available:
  • Web Search: Search the web for real-time information with citations
  • Bash: Execute bash commands in a sandboxed environment
  • Text Editor: View, create, and edit text files with structured commands
PromptLayer automatically detects the model family (Gemini vs Claude) and displays the appropriate set of built-in tools in the editor. You don’t need to manually configure which tool set to use.
