Tools

Tools let an Agent take actions – fetch data, call external APIs, execute code, or even use a computer. The JavaScript/TypeScript SDK supports six categories:

  1. Hosted OpenAI tools – run alongside the model on OpenAI servers. (web search, file search, code interpreter, image generation)
  2. Local built-in tools – run in your environment. (computer use, shell, apply_patch)
  3. Function tools – wrap any local function with a JSON schema so the LLM can call it.
  4. Agents as tools – expose an entire Agent as a callable tool.
  5. MCP servers – attach a Model Context Protocol server (local or remote).
  6. Experimental: Codex tool – wrap the Codex SDK as a function tool to run workspace-aware tasks.

When you use the OpenAIResponsesModel you can add the following built‑in tools:

| Tool | Type string | Purpose |
| --- | --- | --- |
| Web search | 'web_search' | Internet search. |
| File / retrieval search | 'file_search' | Query vector stores hosted on OpenAI. |
| Code Interpreter | 'code_interpreter' | Run code in a sandboxed environment. |
| Image generation | 'image_generation' | Generate images based on text. |
Hosted tools
import { Agent, webSearchTool, fileSearchTool } from '@openai/agents';

const agent = new Agent({
  name: 'Travel assistant',
  tools: [webSearchTool(), fileSearchTool('VS_ID')],
});

The exact parameter sets match the OpenAI Responses API – refer to the official documentation for advanced options like rankingOptions or semantic filters.
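
The other hosted tools follow the same pattern. As a rough sketch (assuming the codeInterpreterTool and imageGenerationTool helpers exported alongside the search tools), you attach them the same way:

Other hosted tools (sketch)
import { Agent, codeInterpreterTool, imageGenerationTool } from '@openai/agents';

// Hosted tools run on OpenAI servers; no local implementation is needed.
const analyst = new Agent({
  name: 'Data analyst',
  tools: [codeInterpreterTool(), imageGenerationTool()],
});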


Local built-in tools run in your own environment and require you to supply implementations:

  • Computer use – implement the Computer interface and pass it to computerTool().
  • Shell – implement the Shell interface and pass it to shellTool().
  • Apply patch – implement the Editor interface and pass it to applyPatchTool().

These tools execute locally and are not hosted by OpenAI. Use them when you need direct access to files, terminals, or GUI automation in your runtime. The tool calls are still requested by the OpenAI model’s responses, but your application is expected to execute them locally.

Local built-in tools
import {
  Agent,
  applyPatchTool,
  computerTool,
  shellTool,
  type Computer,
  type Editor,
  type Shell,
} from '@openai/agents';

// Stub implementations for illustration; replace each method with real
// automation, shell, and file-editing logic in your environment.
const computer: Computer = {
  environment: 'browser',
  dimensions: [1024, 768],
  screenshot: async () => '',
  click: async () => {},
  doubleClick: async () => {},
  scroll: async () => {},
  type: async () => {},
  wait: async () => {},
  move: async () => {},
  keypress: async () => {},
  drag: async () => {},
};

const shell: Shell = {
  run: async () => ({
    output: [
      {
        stdout: '',
        stderr: '',
        outcome: { type: 'exit', exitCode: 0 },
      },
    ],
  }),
};

const editor: Editor = {
  createFile: async () => ({ status: 'completed' }),
  updateFile: async () => ({ status: 'completed' }),
  deleteFile: async () => ({ status: 'completed' }),
};

const agent = new Agent({
  name: 'Local tools agent',
  tools: [
    computerTool({ computer }),
    shellTool({ shell, needsApproval: true }),
    applyPatchTool({ editor, needsApproval: true }),
  ],
});
void agent;

You can turn any function into a tool with the tool() helper.

Function tool with Zod parameters
import { tool } from '@openai/agents';
import { z } from 'zod';

const getWeatherTool = tool({
  name: 'get_weather',
  description: 'Get the weather for a given city',
  parameters: z.object({ city: z.string() }),
  async execute({ city }) {
    return `The weather in ${city} is sunny.`;
  },
});

| Field | Required | Description |
| --- | --- | --- |
| name | No | Defaults to the function name (e.g., get_weather). |
| description | Yes | Clear, human-readable description shown to the LLM. |
| parameters | Yes | Either a Zod schema or a raw JSON schema object. Zod parameters automatically enable strict mode. |
| strict | No | When true (default), the SDK returns a model error if the arguments don't validate. Set to false for fuzzy matching. |
| execute | Yes | (args, context) => string \| unknown \| Promise<...> – your business logic. Non-string outputs are serialized for the model. The optional second parameter is the RunContext. |
| errorFunction | No | Custom handler (context, error) => string for transforming internal errors into a user-visible string. |
| needsApproval | No | Require human approval before execution. See the human-in-the-loop guide. |
| isEnabled | No | Conditionally expose the tool per run; accepts a boolean or predicate. |
| inputGuardrails | No | Guardrails that run before the tool executes; can reject or throw. See Guardrails. |
| outputGuardrails | No | Guardrails that run after the tool executes; can reject or throw. See Guardrails. |
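
To make the optional fields concrete, here is a minimal sketch combining several of them. The lookup_order tool and its logic are hypothetical, and the exact arguments passed to an isEnabled predicate are described in the SDK reference:

Optional tool() fields (sketch)
import { tool } from '@openai/agents';
import { z } from 'zod';

const lookupOrderTool = tool({
  name: 'lookup_order', // hypothetical example tool
  description: 'Look up the status of an order by its ID.',
  parameters: z.object({ orderId: z.string() }),
  // Gate execution behind human approval (see the human-in-the-loop guide).
  needsApproval: true,
  // Conditionally expose the tool; a plain boolean also works here.
  isEnabled: () => true,
  // Turn internal failures into a string the model can act on.
  errorFunction: (_context, error) =>
    `lookup_order failed: ${error instanceof Error ? error.message : String(error)}`,
  async execute({ orderId }) {
    return `Order ${orderId} is in transit.`; // placeholder business logic
  },
});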

If you want to accept invalid or partial input from the model and handle validation yourself, you can disable strict mode when using a raw JSON schema:

Non-strict JSON schema tools
import { tool } from '@openai/agents';

interface LooseToolInput {
  text: string;
}

const looseTool = tool({
  description: 'Echo input; be forgiving about typos',
  strict: false,
  parameters: {
    type: 'object',
    properties: { text: { type: 'string' } },
    required: ['text'],
    additionalProperties: true,
  },
  execute: async (input) => {
    // Because strict is false, we need to do our own verification.
    if (typeof input !== 'object' || input === null || !('text' in input)) {
      return 'Invalid input. Please try again';
    }
    return (input as LooseToolInput).text;
  },
});

Sometimes you want an Agent to assist another Agent without fully handing off the conversation. Use agent.asTool():

Agents as tools
import { Agent } from '@openai/agents';

const summarizer = new Agent({
  name: 'Summarizer',
  instructions: 'Generate a concise summary of the supplied text.',
});

const summarizerTool = summarizer.asTool({
  toolName: 'summarize_text',
  toolDescription: 'Generate a concise summary of the supplied text.',
});

const mainAgent = new Agent({
  name: 'Research assistant',
  tools: [summarizerTool],
});

Under the hood the SDK:

  • Creates a function tool with a single input parameter.
  • Runs the sub‑agent with that input when the tool is called.
  • Returns either the last message or the output extracted by customOutputExtractor.

When you run an agent as a tool, the Agents SDK creates a runner with the default settings and runs the agent with it inside the function execution. If you want to set any runConfig or runOptions properties, pass them to the asTool() method to customize the runner’s behavior.

You can also set needsApproval and isEnabled on the agent tool via asTool() options to integrate with human‑in‑the‑loop flows and conditional tool availability.
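
As a sketch of those options together (the extractor logic and option values here are illustrative, not prescriptive):

Agent tool options (sketch)
import { Agent } from '@openai/agents';

const summarizer = new Agent({
  name: 'Summarizer',
  instructions: 'Generate a concise summary of the supplied text.',
});

const summarizerTool = summarizer.asTool({
  toolName: 'summarize_text',
  toolDescription: 'Generate a concise summary of the supplied text.',
  // Extract a specific value from the sub-agent run instead of its last message.
  customOutputExtractor: (result) => String(result.finalOutput ?? ''),
  // Human-in-the-loop and conditional availability, as described above.
  needsApproval: true,
  isEnabled: true,
});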

Agent tools can stream all nested run events back to your app. Choose the hook style that fits how you construct the tool:

Streaming agent tools
import { Agent } from '@openai/agents';

const billingAgent = new Agent({
  name: 'Billing Agent',
  instructions: 'Answer billing questions and compute simple charges.',
});

const billingTool = billingAgent.asTool({
  toolName: 'billing_agent',
  toolDescription: 'Handles customer billing questions.',
  // onStream: simplest catch-all when you define the tool inline.
  onStream: (event) => {
    console.log(`[onStream] ${event.event.type}`, event);
  },
});

// on(eventName) lets you subscribe selectively (or use '*' for all).
billingTool.on('run_item_stream_event', (event) => {
  console.log('[on run_item_stream_event]', event);
});
billingTool.on('raw_model_stream_event', (event) => {
  console.log('[on raw_model_stream_event]', event);
});

const orchestrator = new Agent({
  name: 'Support Orchestrator',
  instructions: 'Delegate billing questions to the billing agent tool.',
  tools: [billingTool],
});

  • Event types match RunStreamEvent['type']: raw_model_stream_event, run_item_stream_event, agent_updated_stream_event.
  • onStream is the simplest “catch-all” and works well when you declare the tool inline (tools: [agent.asTool({ onStream })]). Use it if you do not need per-event routing.
  • on(eventName, handler) lets you subscribe selectively (or with '*') and is best when you need finer-grained handling or want to attach listeners after creation.
  • If you provide either onStream or any on(...) handler, the agent-as-tool will run in streaming mode automatically; without them it stays on the non-streaming path.
  • Handlers are invoked in parallel so a slow onStream callback will not block on(...) handlers (and vice versa).
  • toolCallId is provided when the tool was invoked via a model tool call; direct invoke() calls or provider quirks may omit it.

You can expose tools via Model Context Protocol (MCP) servers and attach them to an agent. For instance, you can use MCPServerStdio to spawn and connect to a stdio MCP server:

Local MCP server
import { Agent, MCPServerStdio } from '@openai/agents';

const server = new MCPServerStdio({
  fullCommand: 'npx -y @modelcontextprotocol/server-filesystem ./sample_files',
});
await server.connect();

const agent = new Agent({
  name: 'Assistant',
  mcpServers: [server],
});

See filesystem-example.ts for a complete example. If you’re looking for a comprehensive guide to MCP server tool integration, refer to the MCP guide for details.
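
For remote servers you can use an HTTP-based transport instead of stdio. A minimal sketch, assuming the MCPServerStreamableHttp export and a hypothetical endpoint URL:

Remote MCP server (sketch)
import { Agent, MCPServerStreamableHttp } from '@openai/agents';

// Hypothetical endpoint; substitute your own server URL.
const remoteServer = new MCPServerStreamableHttp({
  url: 'https://example.com/mcp',
  name: 'Example MCP server',
});
await remoteServer.connect();

const agent = new Agent({
  name: 'Assistant',
  mcpServers: [remoteServer],
});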


@openai/agents-extensions/experimental/codex provides codexTool(), a function tool that routes model tool calls to the Codex SDK so the agent can run workspace-scoped tasks (shell, file edits, MCP tools) autonomously. This surface is experimental and may change.

Quick start:

Experimental Codex tool
import { Agent } from '@openai/agents';
import { codexTool } from '@openai/agents-extensions/experimental/codex';

export const codexAgent = new Agent({
  name: 'Codex Agent',
  instructions:
    'Use the codex tool to inspect the workspace and answer the question. When skill names, which usually start with `$`, are mentioned, you must rely on the codex tool to use the skill and answer the question.',
  tools: [
    codexTool({
      sandboxMode: 'workspace-write',
      workingDirectory: '/path/to/repo',
      defaultThreadOptions: {
        model: 'gpt-5.2-codex',
        networkAccessEnabled: true,
        webSearchEnabled: false,
      },
      persistSession: true,
    }),
  ],
});

What to know:

  • Auth: supply CODEX_API_KEY (preferred) or OPENAI_API_KEY, or pass codexOptions.apiKey.
  • Inputs: the input schema is strict; each call must contain at least one { type: 'text', text } or { type: 'local_image', path } item.
  • Safety: pair sandboxMode with workingDirectory; set skipGitRepoCheck if the directory is not a Git repo.
  • Behavior: persistSession: true reuses a single Codex thread and returns its threadId; you can surface it for resumable work.
  • Streaming: onStream mirrors Codex events (reasoning, command execution, MCP tool calls, file changes, web search) so you can log or trace progress.
  • Outputs: tool result includes response, usage, and threadId, and Codex token usage is recorded in RunContext.
  • Structure: outputSchema enforces structured Codex responses per turn when you need typed outputs.
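
Running the agent works like any other. A minimal usage sketch with run() from the core SDK (the prompt and the use of finalOutput here are illustrative):

Running the Codex agent (sketch)
import { run } from '@openai/agents';
// codexAgent is the agent defined in the quick start above.
const result = await run(
  codexAgent,
  'Inspect the workspace and summarize the repository layout.',
);
console.log(result.finalOutput);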

Refer to the Agents guide for controlling when and how a model must use tools (tool_choice, toolUseBehavior, etc.).


Best practices

  • Short, explicit descriptions – describe what the tool does and when to use it.
  • Validate inputs – use Zod schemas for strict JSON validation where possible.
  • Avoid side-effects in error handlers – errorFunction should return a helpful string, not throw.
  • One responsibility per tool – small, composable tools lead to better model reasoning.

Next steps

  • Learn about forcing tool use.
  • Add guardrails to validate tool inputs or outputs.
  • Dive into the TypeDoc reference for tool() and the various hosted tool types.