The Prompt Registry supports all major tool calling formats, including OpenAI tools, OpenAI functions, Anthropic tools, and Gemini tools. You can create tool schemas interactively, and your prompt template will work seamlessly with any LLM. Tool calling in PromptLayer is model-agnostic.
Learn more about when you should use tools on our blog.
Tool calling (previously known as function calling) is a powerful feature that allows large language models (LLMs) to return structured data and invoke predefined functions with JSON arguments. This capability enables more complex interactions and structured outputs from LLMs. Key benefits of tool calling include:
Structured Outputs: Tool arguments are always in JSON format, enforced by JSONSchema at the API level. See our Structured Outputs documentation for more details.
Efficient Communication: Tool calling is a concept built into the model, reducing token usage and improving understanding.
Model Routing: Facilitates setting up modular prompts with specific responsibilities.
Prompt Injection Protection: Strict schema definitions at the model level make it harder to “jailbreak” the model.
To publish a prompt template with tools programmatically, you can add the arguments tools and tool_choice to your prompt_template object. This is similar to how you would publish a regular prompt template.
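As a rough sketch, the `prompt_template` object might be assembled like this. The field names below follow the OpenAI-style tool schema; the commented-out publish call at the end uses assumed SDK names, so check the PromptLayer API reference for the authoritative signature.

```python
# Build a chat prompt template that carries tools and tool_choice.
prompt_template = {
    "type": "chat",
    "messages": [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful weather assistant."}],
        }
    ],
    # OpenAI-style tool schema; PromptLayer converts between provider formats.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Let the model decide when to call the tool.
    "tool_choice": "auto",
}

# Publishing would then look roughly like this (names assumed, not verified):
# from promptlayer import PromptLayer
# pl = PromptLayer(api_key="pl_...")
# pl.templates.publish({"prompt_name": "weather_bot", "prompt_template": prompt_template})
```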
Tool variables allow you to dynamically inject tools at runtime through input variables, rather than defining them statically in your prompt template. This is useful when:
Different customers or tenants need different sets of tools
Your available tools change based on runtime context (e.g., user permissions, feature flags)
You want to manage tool definitions outside of your prompt template
Open the Tool & Output Editor in the Prompt Registry
Click the dropdown arrow on the Add Tool button and select Tool Variable
Enter a variable name (e.g., dynamic_tools) and click Add
The variable will appear in your Input Variable Sets alongside other template variables
Tool variables can coexist with static tool definitions. For example, you might have a fixed get_weather tool alongside a customer_tools variable that injects customer-specific tools at runtime.
Each tool definition should include name, description, and parameters (using JSON Schema format). Anthropic-style input_schema is also accepted as an alternative to parameters.
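For illustration, here is one possible value for a tool variable such as `dynamic_tools`. Each entry uses the fields described above (`name`, `description`, and JSON Schema `parameters`); the `lookup_order` tool and its fields are hypothetical.

```python
# Hypothetical customer-specific tools injected at runtime via a tool variable.
customer_tools = [
    {
        "name": "lookup_order",
        "description": "Fetch an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    }
]

# At run time, pass this list alongside your other input variables,
# e.g. input_variables={"dynamic_tools": customer_tools}.
```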
Tool variables are expanded before the prompt is sent to the LLM provider. The model sees them as regular tool definitions, so they work with any provider that supports tool calling.
PromptLayer supports provider-native built-in tools across multiple LLM providers. These pre-built tools enable your prompts to access real-time information, execute code, search through files, and more, all without writing custom function definitions. Built-in tools are available for the following providers:
Open your prompt in the Prompt Registry and navigate to the prompt editor
Select your LLM provider in the provider settings at the bottom of the editor
Open the Function & Output Schema Editor by clicking the Functions & Output button
Click the Built-in tools button (on the right side) to browse available tools for your selected provider
Click Add Tool for the tool you want to use — it will appear in your function definitions list
Configure tool_choice (optional) — set to auto to let the model decide when to use the tool
Save and run your prompt — the model will use the built-in tools when appropriate
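The `tool_choice` setting in step 6 accepts a few common values. The sketch below shows the OpenAI-style forms; other providers use analogous settings that PromptLayer maps for you, and `get_weather` is a placeholder function name.

```python
# Common tool_choice values (OpenAI-style):
tool_choice_auto = "auto"    # the model decides whether to call a tool
tool_choice_none = "none"    # the model never calls a tool
tool_choice_forced = {       # force a call to one specific function
    "type": "function",
    "function": {"name": "get_weather"},
}
```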
For OpenAI and Azure OpenAI, built-in tools require the Responses API. Switch from Chat Completions API to Responses API in the API dropdown before adding built-in tools.
OpenAI’s Responses API includes powerful pre-built tools that work seamlessly with PromptLayer. These tools are available for both the OpenAI and Azure OpenAI providers.
OpenAI’s File Search tool enables semantic search over your documents using Vector Stores. This powerful feature allows your prompts to automatically retrieve relevant information from uploaded files during inference, making it perfect for building RAG (Retrieval-Augmented Generation) systems, knowledge bases, and documentation assistants.
Setting Up File Search
For the File Search tool, you’ll need to create and attach Vector Stores containing your documents:
Enable File Search by following the steps above to add it as a built-in tool
Create and configure a Vector Store:
Click Manage Vector Stores in the File Search configuration
Click Create to make a new vector store with a custom name
Upload files via drag-and-drop or file selection (single or multiple files)
View storage usage, file counts, and manage attached files
Attach Vector Stores to your prompt:
Select one or more vector stores using checkboxes
Click Save Selection to attach them
The vector store IDs are added to your tool configuration
Run your prompt:
The LLM will automatically search vector stores when relevant
Retrieved context is used to generate informed responses
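After the steps above, the resulting File Search tool configuration looks roughly like this. The vector store ID is a placeholder; yours will come from the Manage Vector Stores dialog, and optional parameters such as `max_num_results` should be checked against the current OpenAI Responses API docs.

```python
# File Search tool entry as it appears in the tool configuration.
file_search_tool = {
    "type": "file_search",
    # IDs of the vector stores attached in the steps above (placeholder ID).
    "vector_store_ids": ["vs_abc123"],
}
```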
OpenAI’s Code Interpreter tool enables your prompts to write and execute Python code within a secure, sandboxed environment. This powerful feature allows for dynamic problem-solving, data analysis, visualization generation, and file processing, all without writing custom function definitions. To enable Code Interpreter, follow the steps above to add it as a built-in tool. The tool uses container type "auto" by default, which automatically manages the execution environment. The LLM will automatically use Code Interpreter when it needs to perform calculations, analyze data, create visualizations, or process files. For detailed information about Code Interpreter’s capabilities, file handling, and configuration options, see the OpenAI Code Interpreter Guide.
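The default configuration described above corresponds to a tool entry along these lines (shape per the OpenAI Responses API; verify against the current docs):

```python
# Code Interpreter tool entry; "auto" lets OpenAI manage the sandbox container.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {"type": "auto"},
}
```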
OpenAI’s Image Generation tool enables the model to generate or edit images from a text prompt directly within a conversation. When the model determines that an image should be created, it invokes the image_generation tool with an optimized prompt and returns the generated image in the response. To enable Image Generation, follow the steps above to add it as a built-in tool. Once enabled, the model will automatically generate images when the conversation calls for it. Generated images appear inline in the response with:
A collapsible revised prompt showing the optimized text the model used for generation
Generation parameters such as size, quality, background, and output format
The generated image displayed in a rich card format
The model can generate multiple images in a single response — consecutive image generation calls are grouped together for clean display. The model may also include descriptive text alongside the generated images. For a comprehensive guide to image generation across all providers (including the dedicated Images API and Gemini native image generation), see the Image Generation documentation.
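A minimal tool entry, with the generation parameters listed above filled in with illustrative values, might look like this. The parameter names follow the OpenAI Responses API image_generation tool; confirm the supported values in OpenAI's documentation before relying on them.

```python
# Image Generation tool entry with illustrative (not exhaustive) parameters.
image_generation_tool = {
    "type": "image_generation",
    "size": "1024x1024",
    "quality": "high",
    "output_format": "png",
}
```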
Anthropic provides native built-in tools for Claude models that enable code execution, web search, and system-level interactions directly within conversations.
Web Search
Search the web for real-time information. Claude uses this automatically when current information would help answer a question. Results include citations.
Bash
Execute bash commands within a sandboxed environment. Useful for automation workflows and system interactions.
Code Execution
Execute code in a sandboxed environment with access to bash and a text editor. Ideal for data analysis, computation, and dynamic problem-solving.
Text Editor
View, create, and edit text files with commands like view, str_replace, create, and insert. Enables Claude to work with files during conversations.
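When sent to the Anthropic API, built-in tools are referenced by versioned type strings rather than JSON Schema definitions. The version suffixes below are illustrative only, as they change between model releases; check Anthropic's tool use documentation for the strings matching your model.

```python
# Illustrative Anthropic built-in tool entries (version suffixes are assumptions).
anthropic_tools = [
    {"type": "web_search_20250305", "name": "web_search"},
    {"type": "bash_20250124", "name": "bash"},
    {"type": "text_editor_20250124", "name": "str_replace_editor"},
]
```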
Anthropic built-in tools are available for all Claude models that support tool use. They work with both the direct Anthropic provider and Claude models on Vertex AI.
Google provides native built-in tools for Gemini models that enable web grounding, location data, code execution, URL analysis, and file search capabilities.
Vertex AI supports built-in tools from both Google and Anthropic, depending on the model family you are using. PromptLayer automatically shows the correct set of tools based on your selected model.
When using Claude models (e.g., claude-sonnet-4-20250514) through Vertex AI, the following Anthropic-native tools are available:
Web Search
Search the web for real-time information with citations
Bash
Execute bash commands in a sandboxed environment
Text Editor
View, create, and edit text files with structured commands
PromptLayer automatically detects the model family (Gemini vs Claude) and displays the appropriate set of built-in tools in the editor. You don’t need to manually configure which tool set to use.