Agent

The class representing an AI agent configured with instructions, tools, guardrails, handoffs and more.

We strongly recommend passing instructions, which is the “system prompt” for the agent. In addition, you can pass handoffDescription, which is a human-readable description of the agent, used when the agent is used inside tools/handoffs.

Agents are generic on the context type. The context is a (mutable) object you create. It is passed to tool functions, handoffs, guardrails, etc.
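
For example, a minimal sketch of an agent with a custom context type (the SupportContext shape is an illustrative assumption; the import path follows the @openai/agents package naming):

```ts
import { Agent } from '@openai/agents';

// Hypothetical context type; yours can be any mutable object.
interface SupportContext {
  userId: string;
  isPremium: boolean;
}

const supportAgent = new Agent<SupportContext>({
  name: 'Support agent',
  instructions: 'Help the user with billing questions.',
  handoffDescription: 'Handles billing and account questions.',
});
```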

Type Parameter Default type

TContext

UnknownContext

TOutput extends AgentOutputType

TextOutput

new Agent<TContext, TOutput>(config): Agent<TContext, TOutput>
Parameter Type Description

config

{
  handoffDescription: string;
  handoffOutputTypeWarningEnabled: boolean;
  handoffs: (Agent<any, any> | Handoff<any, TOutput>)[];
  inputGuardrails: InputGuardrail[];
  instructions: string | (runContext, agent) => string | Promise<string>;
  mcpServers: MCPServer[];
  model: string | Model;
  modelSettings: ModelSettings;
  name: string;
  outputGuardrails: OutputGuardrail<TOutput>[];
  outputType: TOutput;
  resetToolChoice: boolean;
  tools: Tool<TContext>[];
  toolUseBehavior: ToolUseBehavior;
}

config.handoffDescription?

string

A description of the agent. This is used when the agent is used as a handoff, so that an LLM knows what it does and when to invoke it.

config.handoffOutputTypeWarningEnabled?

boolean

Whether to enable a warning log when handoff agents with multiple different output types are detected.

config.handoffs?

(Agent<any, any> | Handoff<any, TOutput>)[]

Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs, and the agent can choose to delegate to them if relevant. Allows for separation of concerns and modularity.

config.inputGuardrails?

InputGuardrail[]

A list of checks that run in parallel to the agent’s execution, before generating a response. Runs only if the agent is the first agent in the chain.
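
A minimal sketch of an input guardrail, assuming the InputGuardrail shape from the SDK’s guardrail examples (a name plus an execute function that returns a tripwire result); the check itself is a placeholder:

```ts
import { Agent, InputGuardrail } from '@openai/agents';

const lengthGuardrail: InputGuardrail = {
  name: 'Input length check', // hypothetical guardrail
  execute: async ({ input }) => {
    const text = typeof input === 'string' ? input : JSON.stringify(input);
    const tooLong = text.length > 10_000; // placeholder check
    return { tripwireTriggered: tooLong, outputInfo: { length: text.length } };
  },
};

const agent = new Agent({
  name: 'Assistant',
  instructions: 'Answer user questions.',
  inputGuardrails: [lengthGuardrail],
});
```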

config.instructions?

string | (runContext, agent) => string | Promise<string>

The instructions for the agent. Will be used as the “system prompt” when this agent is invoked. Describes what the agent should do, and how it responds.

Can either be a string, or a function that dynamically generates instructions for the agent. If you provide a function, it will be called with the run context and the agent instance. It must return a string, or a Promise that resolves to a string.
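
A sketch of the function form, assuming runContext.context carries the user-provided context object:

```ts
const greeter = new Agent<{ userName: string }>({
  name: 'Greeter',
  // Called on each run with the run context and this agent instance.
  instructions: (runContext, agent) =>
    `You are ${agent.name}. Address the user as ${runContext.context.userName}.`,
});
```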

config.mcpServers?

MCPServer[]

A list of Model Context Protocol servers the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.

NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call server.connect() before passing it to the agent, and server.cleanup() when the server is no longer needed.
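
A sketch of that lifecycle; MCPServerStdio and its constructor options are assumptions based on the SDK’s MCP examples, while connect() and cleanup() follow the note above:

```ts
import { Agent, MCPServerStdio } from '@openai/agents';

const server = new MCPServerStdio({
  name: 'Filesystem server', // hypothetical server
  fullCommand: 'npx -y @modelcontextprotocol/server-filesystem ./docs',
});

await server.connect(); // connect before handing the server to the agent
try {
  const agent = new Agent({
    name: 'Docs assistant',
    instructions: 'Answer questions using the filesystem tools.',
    mcpServers: [server],
  });
  // ... run the agent ...
} finally {
  await server.cleanup(); // release the server once you are done
}
```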

config.model?

string | Model

The model implementation to use when invoking the LLM. By default, if not set, the agent will use the default model configured in modelSettings.defaultModel.

config.modelSettings?

ModelSettings

Configures model-specific tuning parameters (e.g. temperature, top_p, etc.)

config.name

string

The name of the agent.

config.outputGuardrails?

OutputGuardrail<TOutput>[]

A list of checks that run on the final output of the agent, after generating a response. Runs only if the agent produces a final output.

config.outputType?

TOutput

The type of the output object. If not provided, the output will be a string.
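
A sketch assuming outputType accepts a Zod schema, as in the SDK’s structured-output examples:

```ts
import { z } from 'zod';
import { Agent } from '@openai/agents';

const weatherAgent = new Agent({
  name: 'Weather reporter',
  instructions: 'Report the city and a one-line forecast.',
  // The final output is parsed into this shape instead of plain text.
  outputType: z.object({
    city: z.string(),
    forecast: z.string(),
  }),
});
```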

config.resetToolChoice?

boolean

Whether to reset the tool choice to the default value after a tool has been called. Defaults to true. This ensures that the agent doesn’t enter an infinite loop of tool usage.

config.tools?

Tool<TContext>[]

A list of tools the agent can use.

config.toolUseBehavior?

ToolUseBehavior

This lets you configure how tool use is handled.

  • run_llm_again: The default behavior. Tools are run, and then the LLM receives the results and gets to respond.
  • stop_on_first_tool: The output of the first tool call is used as the final output. This means that the LLM does not process the result of the tool call.
  • A list of tool names: The agent will stop running if any of the tools in the list are called. The final output will be the output of the first matching tool call. The LLM does not process the result of the tool call.
  • A function: If you pass a function, it will be called with the run context and the list of tool results. It must return a ToolsToFinalOutputResult, which determines whether the tool call resulted in a final output.

NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search, web search, etc., are always processed by the LLM.
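
For instance, a sketch of the stop_on_first_tool behavior, assuming the SDK’s tool() helper:

```ts
import { Agent, tool } from '@openai/agents';
import { z } from 'zod';

const getWeather = tool({
  name: 'get_weather',
  description: 'Return the weather for a city.',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `Sunny in ${city}`, // placeholder implementation
});

// The first tool result becomes the final output; the LLM never sees it.
const weatherAgent = new Agent({
  name: 'Weather agent',
  instructions: 'Use the weather tool to answer.',
  tools: [getWeather],
  toolUseBehavior: 'stop_on_first_tool',
});
```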

Agent<TContext, TOutput>

AgentHooks.constructor

handoffDescription: string;

A description of the agent. This is used when the agent is used as a handoff, so that an LLM knows what it does and when to invoke it.

AgentConfiguration.handoffDescription


handoffs: (
| Agent<any, TOutput>
| Handoff<any, TOutput>)[];

Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs, and the agent can choose to delegate to them if relevant. Allows for separation of concerns and modularity.

AgentConfiguration.handoffs


inputGuardrails: InputGuardrail[];

A list of checks that run in parallel to the agent’s execution, before generating a response. Runs only if the agent is the first agent in the chain.

AgentConfiguration.inputGuardrails


instructions: string | (runContext, agent) => string | Promise<string>;

The instructions for the agent. Will be used as the “system prompt” when this agent is invoked. Describes what the agent should do, and how it responds.

Can either be a string, or a function that dynamically generates instructions for the agent. If you provide a function, it will be called with the run context and the agent instance. It must return a string, or a Promise that resolves to a string.

AgentConfiguration.instructions


mcpServers: MCPServer[];

A list of Model Context Protocol servers the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.

NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call server.connect() before passing it to the agent, and server.cleanup() when the server is no longer needed.

AgentConfiguration.mcpServers


model:
| string
| Model;

The model implementation to use when invoking the LLM. By default, if not set, the agent will use the default model configured in modelSettings.defaultModel.

AgentConfiguration.model


modelSettings: ModelSettings;

Configures model-specific tuning parameters (e.g. temperature, top_p, etc.)

AgentConfiguration.modelSettings


name: string;

The name of the agent.

AgentConfiguration.name


outputGuardrails: OutputGuardrail<AgentOutputType<unknown>>[];

A list of checks that run on the final output of the agent, after generating a response. Runs only if the agent produces a final output.

AgentConfiguration.outputGuardrails


outputType: TOutput;

The type of the output object. If not provided, the output will be a string.

AgentConfiguration.outputType


resetToolChoice: boolean;

Whether to reset the tool choice to the default value after a tool has been called. Defaults to true. This ensures that the agent doesn’t enter an infinite loop of tool usage.

AgentConfiguration.resetToolChoice


tools: Tool<TContext>[];

A list of tools the agent can use.

AgentConfiguration.tools


toolUseBehavior: ToolUseBehavior;

This lets you configure how tool use is handled.

  • run_llm_again: The default behavior. Tools are run, and then the LLM receives the results and gets to respond.
  • stop_on_first_tool: The output of the first tool call is used as the final output. This means that the LLM does not process the result of the tool call.
  • A list of tool names: The agent will stop running if any of the tools in the list are called. The final output will be the output of the first matching tool call. The LLM does not process the result of the tool call.
  • A function: If you pass a function, it will be called with the run context and the list of tool results. It must return a ToolsToFinalOutputResult, which determines whether the tool call resulted in a final output.

NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search, web search, etc., are always processed by the LLM.

AgentConfiguration.toolUseBehavior

get outputSchemaName(): string

Output schema name

string

asTool(options): FunctionTool

Transform this agent into a tool, callable by other agents.

This is different from handoffs in two ways:

  1. In handoffs, the new agent receives the conversation history. In this tool, the new agent receives generated input.
  2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is called as a tool, and the conversation is continued by the original agent.
Parameter Type Description

options

{ customOutputExtractor: (output) => string | Promise<string>; toolDescription: string; toolName: string; }

Options for the tool.

options.customOutputExtractor?

(output) => string | Promise<string>

A function that extracts the output text from the agent. If not provided, the last message from the agent will be used.

options.toolDescription?

string

The description of the tool, which should indicate what the tool does and when to use it.

options.toolName?

string

The name of the tool. If not provided, the name of the agent will be used.

FunctionTool

A tool that runs the agent and returns the output text.
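
A sketch of the agent-as-tool pattern (the agent definitions are illustrative):

```ts
const translator = new Agent({
  name: 'Translator',
  instructions: 'Translate the user message into French.',
});

const orchestrator = new Agent({
  name: 'Orchestrator',
  instructions: 'Use your tools to fulfill the request.',
  tools: [
    translator.asTool({
      toolName: 'translate_to_french', // optional; defaults to the agent name
      toolDescription: 'Translate text into French.',
    }),
  ],
});
```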


clone(config): Agent<TContext, TOutput>

Makes a copy of the agent, with the given arguments changed. For example, you could do:

const newAgent = agent.clone({ instructions: 'New instructions' });
Parameter Type Description

config

Partial<AgentConfiguration<TContext, TOutput>>

A partial configuration to change.

Agent<TContext, TOutput>

A new agent with the given changes.


emit<K>(type, ...args): boolean
Type Parameter

K extends keyof AgentHookEvents<TContext, TOutput>

Parameter Type

type

K

...args

AgentHookEvents<TContext, TOutput>[K]

boolean

AgentHooks.emit


getAllTools(): Promise<Tool<TContext>[]>

All agent tools, including the MCP and function tools.

Promise<Tool<TContext>[]>

all configured tools


getMcpTools(): Promise<Tool<TContext>[]>

Fetches the available tools from the MCP servers.

Promise<Tool<TContext>[]>

the MCP powered tools


getSystemPrompt(runContext): Promise<undefined | string>

Returns the system prompt for the agent.

If the agent has a function as its instructions, this function will be called with the runContext and the agent instance.

Parameter Type

runContext

RunContext<TContext>

Promise<undefined | string>


off<K>(type, listener): EventEmitter<AgentHookEvents<TContext, TOutput>>
Type Parameter

K extends keyof AgentHookEvents<TContext, TOutput>

Parameter Type

type

K

listener

(...args) => void

EventEmitter<AgentHookEvents<TContext, TOutput>>

AgentHooks.off


on<K>(type, listener): EventEmitter<AgentHookEvents<TContext, TOutput>>
Type Parameter

K extends keyof AgentHookEvents<TContext, TOutput>

Parameter Type

type

K

listener

(...args) => void

EventEmitter<AgentHookEvents<TContext, TOutput>>

AgentHooks.on


once<K>(type, listener): EventEmitter<AgentHookEvents<TContext, TOutput>>
Type Parameter

K extends keyof AgentHookEvents<TContext, TOutput>

Parameter Type

type

K

listener

(...args) => void

EventEmitter<AgentHookEvents<TContext, TOutput>>

AgentHooks.once
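
A sketch of subscribing to a lifecycle event; the 'agent_start' event name and listener signature are assumptions based on the SDK’s lifecycle-hook examples:

```ts
agent.on('agent_start', (context, agent) => {
  // Assumed listener arguments: the run context wrapper plus the agent instance.
  console.log(`${agent.name} started`);
});
```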


processFinalOutput(output): ResolvedAgentOutput<TOutput>

Processes the final output of the agent.

Parameter Type Description

output

string

The output of the agent.

ResolvedAgentOutput<TOutput>

The parsed output.


toJSON(): object

Returns a JSON representation of the agent, which is serializable.

object

A JSON object containing the agent’s name.

name: string;

static create<TOutput, Handoffs>(config): Agent<unknown, TOutput | HandoffsOutputUnion<Handoffs>>

Create an Agent with handoffs and automatically infer the union type for TOutput from the handoff agents’ output types.

Type Parameter Default type

TOutput extends AgentOutputType<unknown>

"text"

Handoffs extends readonly (Agent<any, any> | Handoff<any, any>)[]

[]

Parameter Type

config

AgentConfigWithHandoffs<TOutput, Handoffs>

Agent<unknown, TOutput | HandoffsOutputUnion<Handoffs>>
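
A sketch of create with a handoff whose output type is inferred into the union (the agents and the Zod schema are illustrative):

```ts
import { Agent } from '@openai/agents';
import { z } from 'zod';

const refundAgent = new Agent({
  name: 'Refund agent',
  instructions: 'Handle refund requests.',
  outputType: z.object({ refundApproved: z.boolean() }),
});

// TOutput is inferred as text plus the refund agent's output shape.
const triageAgent = Agent.create({
  name: 'Triage agent',
  instructions: 'Route the user to the right specialist.',
  handoffs: [refundAgent],
});
```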