Agents
Agents are the main building block of the OpenAI Agents SDK. An Agent is a Large Language Model (LLM) that has been configured with:
- Instructions – the system prompt that tells the model who it is and how it should respond.
- Model – which OpenAI model to call, plus any optional model tuning parameters.
- Tools – a list of functions or APIs the LLM can invoke to accomplish a task.
```typescript
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Haiku Agent',
  instructions: 'Always respond in haiku form.',
  model: 'gpt-5.4', // optional – falls back to the default model
});
```

Use this page when you want to define or customize a single Agent. If you are deciding how several agents should collaborate, read Agent orchestration.
Choose the next guide
Use this page as the hub for agent definition. Jump out to the adjacent guide that matches the next decision you need to make.
| If you want to… | Read next |
|---|---|
| Choose a model or configure stored prompts | Models |
| Add capabilities to the agent | Tools |
| Decide between managers and handoffs | Agent orchestration |
| Configure handoff behavior | Handoffs |
| Run turns, stream events, or manage state | Running agents |
| Inspect final output, run items, or resume | Results |
The rest of this page walks through every Agent feature in more detail.
Agent fundamentals
Basic configuration
The Agent constructor takes a single configuration object. The most commonly used properties are shown below.
| Property | Required | Description |
|---|---|---|
| name | yes | A short human‑readable identifier. |
| instructions | yes | System prompt (string or function – see Dynamic instructions). |
| prompt | no | OpenAI Responses API prompt configuration. Accepts a static prompt object or a function. See Prompt. |
| handoffDescription | no | Short description used when this agent is offered as a handoff tool. |
| handoffs | no | Delegate the conversation to specialist agents. See Composition patterns and the Handoffs guide. |
| model | no | Model name or a custom Model implementation. |
| modelSettings | no | Tuning parameters (temperature, top_p, etc.). See Models. If the properties you need aren’t at the top level, you can include them under providerData. |
| tools | no | Array of Tool instances the model can call. See Tools. |
| mcpServers | no | MCP-backed tools for the agent. See the MCP guide. |
| inputGuardrails | no | Guardrails applied to the first user input for this agent chain. See Guardrails. |
| outputGuardrails | no | Guardrails applied to the final output for this agent. See Guardrails. |
| outputType | no | Return structured output instead of plain text. See Output types and Results. |
| toolUseBehavior | no | Control whether function-tool results loop back to the model or finish the run. See Forcing tool use. |
| resetToolChoice | no | Reset toolChoice to the default after a tool call (default: true) to prevent tool-use loops. See Forcing tool use. |
| handoffOutputTypeWarningEnabled | no | Emit a warning when handoff output types differ (default: true). See Results. |
```typescript
import { Agent, tool } from '@openai/agents';
import { z } from 'zod';

const getWeather = tool({
  name: 'get_weather',
  description: 'Return the weather for a given city.',
  parameters: z.object({ city: z.string() }),
  async execute({ city }) {
    return `The weather in ${city} is sunny.`;
  },
});

const agent = new Agent({
  name: 'Weather bot',
  instructions: 'You are a helpful weather bot.',
  model: 'gpt-4.1',
  tools: [getWeather],
});
```

Context
Agents are generic on their context type – i.e. Agent<TContext, TOutput>. The context is a dependency‑injection object that you create and pass to Runner.run(). It is forwarded to every tool, guardrail, handoff, etc. and is useful for storing state or providing shared services (database connections, user metadata, feature flags, …).
```typescript
import { Agent } from '@openai/agents';

interface Purchase {
  id: string;
  uid: string;
  deliveryStatus: string;
}

interface UserContext {
  uid: string;
  isProUser: boolean;
  // this function can be used within tools
  fetchPurchases(): Promise<Purchase[]>;
}

const agent = new Agent<UserContext>({
  name: 'Personal shopper',
  instructions: 'Recommend products the user will love.',
});

// Later
import { run } from '@openai/agents';

const result = await run(agent, 'Find me a new pair of running shoes', {
  context: { uid: 'abc', isProUser: true, fetchPurchases: async () => [] },
});
```

Output types
By default, an Agent returns plain text (string). If you want the model to return a structured object you can specify the outputType property. The SDK accepts:
- A Zod schema (z.object({...})).
- Any JSON‑schema‑compatible object.
```typescript
import { Agent } from '@openai/agents';
import { z } from 'zod';

const CalendarEvent = z.object({
  name: z.string(),
  date: z.string(),
  participants: z.array(z.string()),
});

const extractor = new Agent({
  name: 'Calendar extractor',
  instructions: 'Extract calendar events from the supplied text.',
  outputType: CalendarEvent,
});
```

When outputType is provided, the SDK automatically uses structured outputs instead of plain text.
OpenAI platform mapping
Some agent concepts map directly to OpenAI platform concepts, while others are configured when you run the agent rather than when you define it.
| SDK concept | OpenAI guide | When it matters |
|---|---|---|
| outputType | Structured Outputs | The agent should return typed JSON or a Zod-validated object instead of text. |
| tools / hosted tools | Tools guide | The model should search, retrieve, execute code, or call your functions/tools. |
| conversationId / previousResponseId | Conversation state | You want OpenAI to persist or chain conversation state between turns. |
conversationId and previousResponseId are run-time controls, not Agent constructor fields. See the Running agents guide when you need those entry points.
Composition patterns
Two SDK entry points show up most often when an agent participates in a larger workflow:
- Manager (agents as tools) – a central agent owns the conversation and invokes specialized agents that are exposed as tools.
- Handoffs – the initial agent delegates the entire conversation to a specialist once it has identified the user’s request.
These approaches are complementary. Managers give you a single place to enforce guardrails or rate limits, while handoffs let each agent focus on a single task without retaining control of the conversation. For the design tradeoffs and when to choose each pattern, see Agent orchestration.
Manager (agents as tools)
In this pattern the manager never hands over control – the LLM uses the tools and the manager summarizes the final answer. Read more in the tools guide.
```typescript
import { Agent } from '@openai/agents';

const bookingAgent = new Agent({
  name: 'Booking expert',
  instructions: 'Answer booking questions and modify reservations.',
});

const refundAgent = new Agent({
  name: 'Refund expert',
  instructions: 'Help customers process refunds and credits.',
});

const customerFacingAgent = new Agent({
  name: 'Customer-facing agent',
  instructions:
    'Talk to the user directly. When they need booking or refund help, call the matching tool.',
  tools: [
    bookingAgent.asTool({
      toolName: 'booking_expert',
      toolDescription: 'Handles booking questions and requests.',
    }),
    refundAgent.asTool({
      toolName: 'refund_expert',
      toolDescription: 'Handles refund questions and requests.',
    }),
  ],
});
```

Handoffs
With handoffs the triage agent routes requests, but once a handoff occurs the specialist agent owns the conversation until it produces a final output. This keeps prompts short and lets you reason about each agent independently. Learn more in the handoffs guide.
```typescript
import { Agent } from '@openai/agents';

const bookingAgent = new Agent({
  name: 'Booking Agent',
  instructions: 'Help users with booking requests.',
});

const refundAgent = new Agent({
  name: 'Refund Agent',
  instructions: 'Process refund requests politely and efficiently.',
});

// Use the Agent.create method to ensure the finalOutput type considers handoffs
const triageAgent = Agent.create({
  name: 'Triage Agent',
  instructions: `Help the user with their questions.
If the user asks about booking, hand off to the booking agent.
If the user asks about refunds, hand off to the refund agent.`.trimStart(),
  handoffs: [bookingAgent, refundAgent],
});
```

If your handoff targets can return different output types, prefer Agent.create(...) over new Agent(...). That lets TypeScript infer the union of possible finalOutput shapes across the handoff graph and avoids the runtime warning controlled by handoffOutputTypeWarningEnabled. See the results guide for an end-to-end example.
Advanced configuration and runtime controls
Dynamic instructions
instructions can be a function instead of a string. The function receives the current RunContext and the Agent instance and can return a string or a Promise<string>.
```typescript
import { Agent, RunContext } from '@openai/agents';

interface UserContext {
  name: string;
}

function buildInstructions(runContext: RunContext<UserContext>) {
  return `The user's name is ${runContext.context.name}. Be extra friendly!`;
}

const agent = new Agent<UserContext>({
  name: 'Personalized helper',
  instructions: buildInstructions,
});
```

Both synchronous and async functions are supported.
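The async variant looks the same, just returning a promise. A minimal sketch (the awaited "lookup" below is a stand-in for real async work such as a database read, not an SDK API):

```typescript
// Sketch of an async instructions function. The awaited value is a
// placeholder for any asynchronous work (database read, feature flag, etc.).
interface GreetContext {
  name: string;
}

async function buildAsyncInstructions(runContext: { context: GreetContext }) {
  // Pretend this comes from an async service call.
  const greeting = await Promise.resolve(
    `The user's name is ${runContext.context.name}.`,
  );
  return `${greeting} Be extra friendly!`;
}
```

Pass it to instructions exactly like the synchronous version; the SDK resolves the promise before the model call.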
Dynamic prompts
prompt supports the same callback shape as instructions, but returns a prompt configuration object instead of a string. This is useful when the prompt ID, version, or variables depend on the current run context.
```typescript
import { Agent, RunContext } from '@openai/agents';

interface PromptContext {
  customerTier: 'free' | 'pro';
}

function buildPrompt(runContext: RunContext<PromptContext>) {
  return {
    promptId: 'pmpt_support_agent',
    version: '7',
    variables: {
      customer_tier: runContext.context.customerTier,
    },
  };
}

const agent = new Agent<PromptContext>({
  name: 'Prompt-backed helper',
  prompt: buildPrompt,
});
```

This is only supported when you use the OpenAI Responses API. Both synchronous and async functions are supported.
Lifecycle hooks
For advanced use cases you can observe the Agent lifecycle by listening on events.
Agent instances emit lifecycle events for that specific agent instance, while Runner emits the same event names as a single stream across the whole run. This is useful for multi-agent workflows where you want one place to observe handoffs and tool calls.
The shared event names are:
| Event | Agent hook arguments | Runner hook arguments |
|---|---|---|
| agent_start | (context, agent, turnInput?) | (context, agent, turnInput?) |
| agent_end | (context, output) | (context, agent, output) |
| agent_handoff | (context, nextAgent) | (context, fromAgent, toAgent) |
| agent_tool_start | (context, tool, { toolCall }) | (context, agent, tool, { toolCall }) |
| agent_tool_end | (context, tool, result, { toolCall }) | (context, agent, tool, result, { toolCall }) |
```typescript
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Verbose agent',
  instructions: 'Explain things thoroughly.',
});

agent.on('agent_start', (ctx, agent) => {
  console.log(`[${agent.name}] started`);
});
agent.on('agent_end', (ctx, output) => {
  console.log(`[agent] produced:`, output);
});
```

Guardrails
Guardrails allow you to validate or transform user input and agent output. They are configured via the inputGuardrails and outputGuardrails arrays. See the guardrails guide for details.
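As a rough sketch of the shape involved, an input guardrail is an object with a name and an execute function that reports whether its tripwire fired. The { tripwireTriggered, outputInfo } result shape below follows the contract described in the guardrails guide, but treat the exact field names as an assumption and check that guide for the authoritative types; the topic check itself is invented for illustration:

```typescript
// Hypothetical input guardrail: trip when the input mentions a banned topic.
const bannedTopics = ['medical advice'];

const topicGuardrail = {
  name: 'topic-filter',
  async execute({ input }: { input: string }) {
    const flagged = bannedTopics.some((topic) =>
      input.toLowerCase().includes(topic),
    );
    // tripwireTriggered aborts the run; outputInfo is free-form metadata.
    return { tripwireTriggered: flagged, outputInfo: { flagged } };
  },
};

// Attached via: new Agent({ /* ... */ inputGuardrails: [topicGuardrail] })
```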
Cloning / copying agents
Need a slightly modified version of an existing agent? Use the clone() method, which returns an entirely new Agent instance.
```typescript
import { Agent } from '@openai/agents';

const pirateAgent = new Agent({
  name: 'Pirate',
  instructions: 'Respond like a pirate – lots of “Arrr!”',
  model: 'gpt-5.4',
});

const robotAgent = pirateAgent.clone({
  name: 'Robot',
  instructions: 'Respond like a robot – be precise and factual.',
});
```

Forcing tool use
Supplying tools doesn’t guarantee the LLM will call one. You can force tool use with modelSettings.toolChoice:
- 'auto' (default) – the LLM decides whether to use a tool.
- 'required' – the LLM must call a tool (it can choose which one).
- 'none' – the LLM must not call a tool.
- A specific tool name, e.g. 'calculator' – the LLM must call that particular tool.
When the available tool is computerTool() on OpenAI Responses, toolChoice: 'computer' is special: it forces the GA built-in computer tool instead of treating 'computer' as a plain function name. The SDK also accepts preview-compatible computer selectors for older integrations, but new code should prefer 'computer'. If no computer tool is available, the string behaves like any other function tool name.
```typescript
import { Agent, tool } from '@openai/agents';
import { z } from 'zod';

const calculatorTool = tool({
  name: 'Calculator',
  description: 'Use this tool to answer questions about math problems.',
  parameters: z.object({ question: z.string() }),
  execute: async (input) => {
    throw new Error('TODO: implement this');
  },
});

const agent = new Agent({
  name: 'Strict tool user',
  instructions: 'Always answer using the calculator tool.',
  tools: [calculatorTool],
  modelSettings: { toolChoice: 'required' },
});
```

When you use deferred Responses tools such as toolNamespace(), function tools with deferLoading: true, or hosted MCP tools with deferLoading: true, keep modelSettings.toolChoice on 'auto'. The SDK rejects forcing a deferred tool or the built-in tool_search helper by name because the model needs to decide when to load those definitions. See the Tools guide for the full tool-search setup.
Preventing infinite loops
After a tool call the SDK automatically resets toolChoice back to 'auto'. This prevents the model from entering an infinite loop where it repeatedly tries to call the tool. You can override this behavior via the resetToolChoice flag or by configuring toolUseBehavior:
- 'run_llm_again' (default) – run the LLM again with the tool result.
- 'stop_on_first_tool' – treat the first tool result as the final answer.
- { stopAtToolNames: ['my_tool'] } – stop when any of the listed tools is called.
- (context, toolResults) => ... – custom function returning whether the run should finish.
```typescript
const agent = new Agent({
  // ...
  toolUseBehavior: 'stop_on_first_tool',
});
```

Note: toolUseBehavior only applies to function tools. Hosted tools always return to the model for processing.
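The custom-function variant can be sketched as a plain predicate over the turn’s tool results. The result field names below (isFinalOutput, finalOutput) and the tool-result shape are assumptions made for illustration; verify them against the SDK’s TypeDoc types before relying on them:

```typescript
// Simplified stand-in for the SDK's tool-result type.
type SketchToolResult = { toolName: string; output: string };

// Shape assumed for the value a custom toolUseBehavior returns.
interface ToolsToFinalOutputSketch {
  isFinalOutput: boolean;
  finalOutput?: string;
}

// Hypothetical custom toolUseBehavior: finish the run once the calculator
// tool has produced output; otherwise loop the results back to the model.
function stopWhenCalculatorAnswers(
  _context: unknown,
  toolResults: SketchToolResult[],
): ToolsToFinalOutputSketch {
  const hit = toolResults.find((r) => r.toolName === 'calculator');
  return hit
    ? { isFinalOutput: true, finalOutput: hit.output }
    : { isFinalOutput: false };
}

// Attached via: new Agent({ /* ... */ toolUseBehavior: stopWhenCalculatorAnswers })
```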
Related guides
- Models for model selection, stored prompts, and provider configuration.
- Tools for function tools, hosted tools, MCP, and agent.asTool().
- Agent orchestration for choosing between managers, handoffs, and code-driven orchestration.
- Handoffs for configuring specialist delegation.
- Running agents for executing turns, streaming, and conversation state.
- Results for finalOutput, run items, and resume state.
- Explore the full TypeDoc reference under @openai/agents in the sidebar.