Tools
MCPToolApprovalFunction
module-attribute
MCPToolApprovalFunction = Callable[
[MCPToolApprovalRequest],
MaybeAwaitable[MCPToolApprovalFunctionResult],
]
A function that approves or rejects an MCP tool call.
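A minimal sketch of such an approval function, using an allowlist policy. The attribute path to the tool name (`request.data.name`) and the result keys (`"approve"`, `"reason"`) are assumptions for illustration; check MCPToolApprovalRequest and MCPToolApprovalFunctionResult in your SDK version.

```python
from types import SimpleNamespace

# Tools the agent may call without a human in the loop.
ALLOWED_MCP_TOOLS = {"search_docs", "read_file"}

def on_mcp_approval(request) -> dict:
    """Allowlist-based MCPToolApprovalFunction sketch.

    ASSUMPTION: the tool name lives at request.data.name and the result is a
    mapping with "approve" and optional "reason" keys.
    """
    name = request.data.name
    if name in ALLOWED_MCP_TOOLS:
        return {"approve": True}
    return {"approve": False, "reason": f"{name!r} is not on the allowlist"}

# Stub request for demonstration; the real object is supplied by the runner.
demo_request = SimpleNamespace(data=SimpleNamespace(name="search_docs"))
decision = on_mcp_approval(demo_request)
```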
ShellApprovalFunction
module-attribute
ShellApprovalFunction = Callable[
[RunContextWrapper[Any], "ShellActionRequest", str],
MaybeAwaitable[bool],
]
A function that determines whether a shell action requires approval. Takes (run_context, action, call_id) and returns whether approval is needed.
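A sketch of a policy that flags destructive-looking commands for approval. The `action.commands` field (a list of command strings) is an assumption about ShellActionRequest; verify the real field names in your SDK version.

```python
from types import SimpleNamespace

DESTRUCTIVE_PREFIXES = ("rm", "dd", "mkfs", "shutdown")

def shell_needs_approval(run_context, action, call_id: str) -> bool:
    """ShellApprovalFunction sketch: require approval for risky commands.

    ASSUMPTION: action.commands holds the command strings. run_context and
    call_id are unused by this simple policy.
    """
    commands = getattr(action, "commands", None) or []
    return any(
        cmd.strip().split()[0] in DESTRUCTIVE_PREFIXES
        for cmd in commands
        if cmd.strip()
    )

demo_action = SimpleNamespace(commands=["rm -rf /tmp/scratch"])
requires_approval = shell_needs_approval(None, demo_action, "call_123")
```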
ShellOnApprovalFunction
module-attribute
ShellOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ShellOnApprovalFunctionResult],
]
A function that auto-approves or rejects a shell tool call when approval is needed. Takes (run_context, approval_item) and returns the approval decision.
ApplyPatchApprovalFunction
module-attribute
ApplyPatchApprovalFunction = Callable[
[RunContextWrapper[Any], ApplyPatchOperation, str],
MaybeAwaitable[bool],
]
A function that determines whether an apply_patch operation requires approval. Takes (run_context, operation, call_id) and returns whether approval is needed.
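A sketch of a gating policy for patch operations: deletions and writes outside the source tree pause for review. The operation's `type` and `path` attributes are assumptions about ApplyPatchOperation; check the real dataclass before relying on them.

```python
from types import SimpleNamespace

def apply_patch_needs_approval(run_context, operation, call_id: str) -> bool:
    """ApplyPatchApprovalFunction sketch: deletions always need sign-off.

    ASSUMPTION: operation has `type` and `path` attributes; the value
    "delete_file" is an illustrative operation type, not a confirmed one.
    """
    op_type = getattr(operation, "type", "")
    path = getattr(operation, "path", "")
    return op_type == "delete_file" or not path.startswith("src/")

demo_op = SimpleNamespace(type="update_file", path="src/app.py")
needs_review = apply_patch_needs_approval(None, demo_op, "call_1")
```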
ApplyPatchOnApprovalFunction
module-attribute
ApplyPatchOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ApplyPatchOnApprovalFunctionResult],
]
A function that auto-approves or rejects an apply_patch tool call when approval is needed. Takes (run_context, approval_item) and returns the approval decision.
LocalShellExecutor
module-attribute
LocalShellExecutor = Callable[
[LocalShellCommandRequest], MaybeAwaitable[str]
]
A function that executes a command on a shell.
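A minimal executor sketch built on `subprocess`. The `request.data.action.command` path (an argv list) is an assumption about LocalShellCommandRequest; verify the field names in your SDK version before relying on this.

```python
import subprocess
from types import SimpleNamespace

def run_local_shell(request) -> str:
    """LocalShellExecutor sketch: run the requested argv and return its text.

    ASSUMPTION: request.data.action.command is a list of argv strings.
    """
    completed = subprocess.run(
        request.data.action.command,
        capture_output=True,
        text=True,
        timeout=30,
    )
    # Return combined stdout/stderr so the model sees errors too.
    return completed.stdout + completed.stderr

# Stub request for demonstration; the real object is supplied by the runner.
demo = SimpleNamespace(
    data=SimpleNamespace(action=SimpleNamespace(command=["echo", "hello"]))
)
output = run_local_shell(demo)
```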
ShellToolContainerSkill
module-attribute
ShellToolContainerSkill = Union[
ShellToolSkillReference, ShellToolInlineSkill
]
Container skill configuration.
ShellToolContainerNetworkPolicy
module-attribute
ShellToolContainerNetworkPolicy = Union[
ShellToolContainerNetworkPolicyAllowlist,
ShellToolContainerNetworkPolicyDisabled,
]
Network policy configuration for hosted shell containers.
ShellToolHostedEnvironment
module-attribute
ShellToolHostedEnvironment = Union[
ShellToolContainerAutoEnvironment,
ShellToolContainerReferenceEnvironment,
]
Hosted shell environment variants.
ShellToolEnvironment
module-attribute
ShellToolEnvironment = Union[
ShellToolLocalEnvironment, ShellToolHostedEnvironment
]
All supported shell environments.
ShellExecutor
module-attribute
ShellExecutor = Callable[
[ShellCommandRequest],
MaybeAwaitable[Union[str, ShellResult]],
]
Executes a shell command sequence and returns either text or structured output.
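A sketch of a text-returning executor that runs each command in sequence. A real executor may instead return a structured ShellResult; the `request.data.action.commands` path is an assumption about ShellCommandRequest.

```python
import subprocess
from types import SimpleNamespace

def execute_shell(request) -> str:
    """ShellExecutor sketch: run each command in order, concatenate output.

    ASSUMPTION: request.data.action.commands is a list of shell command
    strings.
    """
    chunks = []
    for cmd in request.data.action.commands:
        completed = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=60
        )
        chunks.append(completed.stdout + completed.stderr)
    return "".join(chunks)

demo = SimpleNamespace(
    data=SimpleNamespace(action=SimpleNamespace(commands=["echo one", "echo two"]))
)
result_text = execute_shell(demo)
```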
Tool
module-attribute
Tool = Union[
FunctionTool,
FileSearchTool,
WebSearchTool,
ComputerTool[Any],
HostedMCPTool,
ShellTool,
ApplyPatchTool,
LocalShellTool,
ImageGenerationTool,
CodeInterpreterTool,
]
A tool that can be used in an agent.
ToolOutputText
ToolOutputTextDict
ToolOutputImage
Bases: BaseModel
Represents a tool output that should be sent to the model as an image.
You can provide either an image_url (URL or data URL) or a file_id for previously uploaded
content. The optional detail can control vision detail.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> ToolOutputImage
Validate that at least one of image_url or file_id is provided.
Source code in src/agents/tool.py
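To illustrate the contract, here is a plain-dict sketch of building an image payload with the same at-least-one-field validation. The serialized key names are assumptions; in practice you would return a ToolOutputImage model instance from your tool.

```python
def image_tool_output(image_url=None, file_id=None, detail="auto") -> dict:
    """Build a ToolOutputImage-like payload from a function tool.

    Mirrors check_at_least_one_required_field: at least one of image_url or
    file_id must be set. ASSUMPTION: key names match the documented fields.
    """
    if image_url is None and file_id is None:
        raise ValueError("Provide at least one of image_url or file_id")
    payload = {"detail": detail}
    if image_url is not None:
        payload["image_url"] = image_url
    if file_id is not None:
        payload["file_id"] = file_id
    return payload

chart = image_tool_output(image_url="https://example.com/chart.png", detail="high")
```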
ToolOutputImageDict
Bases: TypedDict
TypedDict variant for image tool outputs.
Source code in src/agents/tool.py
ToolOutputFileContent
Bases: BaseModel
Represents a tool output that should be sent to the model as a file.
Provide one of file_data (base64), file_url, or file_id. You may also
provide an optional filename when using file_data to hint at the file name.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> (
ToolOutputFileContent
)
Validate that at least one of file_data, file_url, or file_id is provided.
Source code in src/agents/tool.py
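A plain-dict sketch of the file_data (base64) plus filename combination described above. The serialized key names are assumptions; in practice you would return a ToolOutputFileContent model instance from your tool.

```python
import base64

def file_tool_output(data: bytes, filename: str) -> dict:
    """Build a ToolOutputFileContent-like payload from raw bytes.

    ASSUMPTION: key names match the documented fields (file_data, filename).
    """
    return {
        "file_data": base64.b64encode(data).decode("ascii"),
        "filename": filename,
    }

report = file_tool_output(b"quarterly numbers", "report.txt")
```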
ToolOutputFileContentDict
Bases: TypedDict
TypedDict variant for file content tool outputs.
Source code in src/agents/tool.py
ComputerCreate
Bases: Protocol[ComputerT_co]
Initializes a computer for the current run context.
Source code in src/agents/tool.py
ComputerDispose
Bases: Protocol[ComputerT_contra]
Cleans up a computer initialized for a run context.
Source code in src/agents/tool.py
ComputerProvider
dataclass
Bases: Generic[ComputerT]
Configures create/dispose hooks for per-run computer lifecycle management.
Source code in src/agents/tool.py
FunctionToolResult
dataclass
Source code in src/agents/tool.py
run_item
instance-attribute
run_item: RunItem | None
The run item that was produced as a result of the tool call.
This can be None when the tool run is interrupted and no output item should be emitted yet.
interruptions
class-attribute
instance-attribute
interruptions: list[ToolApprovalItem] = field(
default_factory=list
)
Interruptions from nested agent runs (for agent-as-tool).
FunctionTool
dataclass
A tool that wraps a function. In most cases, you should use the function_tool helpers to
create a FunctionTool, as they let you easily wrap a Python function.
Source code in src/agents/tool.py
name
instance-attribute
The name of the tool, as shown to the LLM. Generally the name of the function.
params_json_schema
instance-attribute
The JSON schema for the tool's parameters.
on_invoke_tool
instance-attribute
on_invoke_tool: Callable[
[ToolContext[Any], str], Awaitable[Any]
]
A function that invokes the tool with the given context and parameters. The params passed are:
1. The tool run context.
2. The arguments from the LLM, as a JSON string.
You must return one of the structured tool output types (e.g. ToolOutputText, ToolOutputImage,
ToolOutputFileContent), a string representation of the tool output, a list of them,
or something we can call str() on.
In case of errors, you can either raise an Exception (which will cause the run to fail) or
return a string error message (which will be sent back to the LLM).
strict_json_schema
class-attribute
instance-attribute
Whether the JSON schema is in strict mode. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input.
is_enabled
class-attribute
instance-attribute
is_enabled: (
bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
]
) = True
Whether the tool is enabled. Either a bool or a Callable that takes the run context and agent and returns whether the tool is enabled. You can use this to dynamically enable/disable a tool based on your context/state.
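A sketch of a callable form of is_enabled that gates a tool on application state. The dict-shaped `run_context.context` payload (with a "beta" flag) is an assumption about your application's context type, not an SDK field.

```python
from types import SimpleNamespace

def beta_only(run_context, agent) -> bool:
    """is_enabled sketch: expose the tool only to users in the beta cohort.

    ASSUMPTION: run_context.context is a dict carrying a "beta" flag.
    """
    ctx = getattr(run_context, "context", None) or {}
    return bool(ctx.get("beta", False))

# Stub run context for demonstration.
beta_ctx = SimpleNamespace(context={"beta": True})
enabled = beta_only(beta_ctx, agent=None)
```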
tool_input_guardrails
class-attribute
instance-attribute
tool_input_guardrails: (
list[ToolInputGuardrail[Any]] | None
) = None
Optional list of input guardrails to run before invoking this tool.
tool_output_guardrails
class-attribute
instance-attribute
tool_output_guardrails: (
list[ToolOutputGuardrail[Any]] | None
) = None
Optional list of output guardrails to run after invoking this tool.
needs_approval
class-attribute
instance-attribute
needs_approval: (
bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
]
) = False
Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval.
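A sketch of the callable form, pausing the run only for calls above a threshold. The "amount" parameter name belongs to the hypothetical tool being guarded, not to the SDK.

```python
import asyncio

async def transfer_needs_approval(
    run_context, tool_parameters: dict, call_id: str
) -> bool:
    """needs_approval sketch: only large transfers pause the run for a human.

    ASSUMPTION: the guarded tool takes an "amount" parameter.
    """
    return tool_parameters.get("amount", 0) > 1_000

small = asyncio.run(transfer_needs_approval(None, {"amount": 50}, "call_1"))
large = asyncio.run(transfer_needs_approval(None, {"amount": 50_000}, "call_2"))
```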
timeout_seconds
class-attribute
instance-attribute
Optional timeout (seconds) for each tool invocation.
timeout_behavior
class-attribute
instance-attribute
How to handle timeout events.
- "error_as_result": return a model-visible timeout error string.
- "raise_exception": raise a ToolTimeoutError and fail the run.
FileSearchTool
dataclass
A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
vector_store_ids
instance-attribute
The IDs of the vector stores to search.
max_num_results
class-attribute
instance-attribute
The maximum number of results to return.
include_search_results
class-attribute
instance-attribute
Whether to include the search results in the output produced by the LLM.
ranking_options
class-attribute
instance-attribute
Ranking options for search.
WebSearchTool
dataclass
A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
user_location
class-attribute
instance-attribute
Optional location for the search. Lets you customize results to be relevant to a location.
filters
class-attribute
instance-attribute
A filter to apply based on file attributes.
ComputerTool
dataclass
Bases: Generic[ComputerT]
A hosted tool that lets the LLM control a computer.
Source code in src/agents/tool.py
computer
instance-attribute
The computer implementation, or a factory that produces a computer per run.
on_safety_check
class-attribute
instance-attribute
on_safety_check: (
Callable[
[ComputerToolSafetyCheckData], MaybeAwaitable[bool]
]
| None
) = None
Optional callback to acknowledge computer tool safety checks.
ComputerToolSafetyCheckData
dataclass
Information about a computer tool safety check.
Source code in src/agents/tool.py
MCPToolApprovalRequest
dataclass
A request to approve a tool call.
Source code in src/agents/tool.py
MCPToolApprovalFunctionResult
Bases: TypedDict
The result of an MCP tool approval function.
Source code in src/agents/tool.py
ShellOnApprovalFunctionResult
Bases: TypedDict
The result of a shell tool on_approval callback.
Source code in src/agents/tool.py
ApplyPatchOnApprovalFunctionResult
Bases: TypedDict
The result of an apply_patch tool on_approval callback.
Source code in src/agents/tool.py
HostedMCPTool
dataclass
A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and
call tools, without requiring a round trip back to your code.
If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible
environment, or you just prefer to run tool calls locally, then you can instead use the servers
in agents.mcp and pass Agent(mcp_servers=[...]) to the agent.
Source code in src/agents/tool.py
tool_config
instance-attribute
The MCP tool config, which includes the server URL and other settings.
on_approval_request
class-attribute
instance-attribute
on_approval_request: MCPToolApprovalFunction | None = None
An optional function that will be called if approval is requested for an MCP tool. If not
provided, you will need to manually add approvals/rejections to the input and call
Runner.run(...) again.
CodeInterpreterTool
dataclass
A tool that allows the LLM to execute code in a sandboxed environment.
Source code in src/agents/tool.py
ImageGenerationTool
dataclass
A tool that allows the LLM to generate images.
Source code in src/agents/tool.py
LocalShellCommandRequest
dataclass
A request to execute a command on a shell.
Source code in src/agents/tool.py
LocalShellTool
dataclass
A tool that allows the LLM to execute commands on a shell.
For more details, see: https://platform.openai.com/docs/guides/tools-local-shell
Source code in src/agents/tool.py
executor
instance-attribute
executor: LocalShellExecutor
A function that executes a command on a shell.
ShellToolLocalSkill
ShellToolSkillReference
ShellToolInlineSkillSource
ShellToolInlineSkill
ShellToolContainerNetworkPolicyDomainSecret
ShellToolContainerNetworkPolicyAllowlist
Bases: TypedDict
Allowlist network policy for hosted containers.
Source code in src/agents/tool.py
ShellToolContainerNetworkPolicyDisabled
ShellToolLocalEnvironment
ShellToolContainerAutoEnvironment
Bases: TypedDict
Auto-provisioned hosted container environment.
Source code in src/agents/tool.py
ShellToolContainerReferenceEnvironment
ShellCallOutcome
dataclass
ShellCommandOutput
dataclass
Structured output for a single shell command execution.
Source code in src/agents/tool.py
ShellResult
dataclass
ShellActionRequest
dataclass
ShellCallData
dataclass
Normalized shell call data provided to shell executors.
Source code in src/agents/tool.py
ShellCommandRequest
dataclass
ShellTool
dataclass
Next-generation shell tool. LocalShellTool will be deprecated in favor of this.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ShellApprovalFunction = False
Whether the shell tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, action, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ShellOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
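A sketch of such a handler that auto-approves commands confined to a scratch directory. The `approval_item.arguments` attribute and the result keys (`"approve"`, `"reason"`) are assumptions; check ToolApprovalItem and ShellOnApprovalFunctionResult in your SDK version.

```python
from types import SimpleNamespace

def auto_review_shell(run_context, approval_item) -> dict:
    """ShellOnApprovalFunction sketch: approve scratch-dir commands, reject
    everything else.

    ASSUMPTION: approval_item.arguments holds the command text and the result
    is a mapping with "approve" and optional "reason" keys.
    """
    arguments = getattr(approval_item, "arguments", "") or ""
    if "/tmp/scratch" in arguments:
        return {"approve": True}
    return {"approve": False, "reason": "command leaves the scratch directory"}

demo_item = SimpleNamespace(arguments="ls /tmp/scratch")
review = auto_review_shell(None, demo_item)
```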
environment
class-attribute
instance-attribute
environment: ShellToolEnvironment | None = None
Execution environment for shell commands.
If omitted, local mode is used.
__post_init__
Validate shell tool configuration and normalize environment fields.
Source code in src/agents/tool.py
ApplyPatchTool
dataclass
Hosted apply_patch tool. Lets the model request file mutations via unified diffs.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ApplyPatchApprovalFunction = False
Whether the apply_patch tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, operation, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ApplyPatchOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
resolve_computer
async
resolve_computer(
*,
tool: ComputerTool[Any],
run_context: RunContextWrapper[Any],
) -> ComputerLike
Resolve a computer for a given run context, initializing it if needed.
Source code in src/agents/tool.py
dispose_resolved_computers
async
dispose_resolved_computers(
*, run_context: RunContextWrapper[Any]
) -> None
Dispose any computer instances created for the provided run context.
Source code in src/agents/tool.py
default_tool_error_function
default_tool_error_function(
ctx: RunContextWrapper[Any], error: Exception
) -> str
The default tool error function, which just returns a generic error message.
Source code in src/agents/tool.py
default_tool_timeout_error_message
Build the default message returned to the model when a tool times out.
invoke_function_tool
async
invoke_function_tool(
*,
function_tool: FunctionTool,
context: ToolContext[Any],
arguments: str,
) -> Any
Invoke a function tool, enforcing timeout configuration when provided.
Source code in src/agents/tool.py
function_tool
function_tool(
func: ToolFunction[...],
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
) -> FunctionTool
function_tool(
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
func: ToolFunction[...] | None = None,
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction
| None
| object = _UNSET_FAILURE_ERROR_FUNCTION,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
timeout: float | None = None,
timeout_behavior: ToolTimeoutBehavior = "error_as_result",
timeout_error_function: ToolErrorFunction | None = None,
) -> (
FunctionTool
| Callable[[ToolFunction[...]], FunctionTool]
)
Decorator to create a FunctionTool from a function. By default, we will:
1. Parse the function signature to create a JSON schema for the tool's parameters.
2. Use the function's docstring to populate the tool's description.
3. Use the function's docstring to populate argument descriptions.
The docstring style is detected automatically, but you can override it.
If the function takes a RunContextWrapper as the first argument, it must match the
context type of the agent that uses the tool.
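To make step 1 concrete, here is a toy sketch of deriving a strict JSON schema from a function signature. The real SDK implementation also folds in docstring descriptions, defaults, and much richer type handling; this maps only a few basic annotations.

```python
import inspect

def params_json_schema(func) -> dict:
    """Toy sketch of function_tool's signature parsing (step 1 above).

    ASSUMPTION: parameters use simple scalar annotations; anything else is
    treated as a string here, unlike the real implementation.
    """
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    properties: dict = {}
    required: list = []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        # Parameters without defaults become required (mirrors strict mode).
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "object",
        "properties": properties,
        "required": required,
        "additionalProperties": False,
    }

def get_weather(city: str, units: str = "celsius") -> str:
    """Fetch the weather for a city."""
    return f"21 degrees in {city}"

schema = params_json_schema(get_weather)
```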
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | ToolFunction[...] \| None | The function to wrap. | None |
| name_override | str \| None | If provided, use this name for the tool instead of the function's name. | None |
| description_override | str \| None | If provided, use this description for the tool instead of the function's docstring. | None |
| docstring_style | DocstringStyle \| None | If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style. | None |
| use_docstring_info | bool | If True, use the function's docstring to populate the tool's description and argument descriptions. | True |
| failure_error_function | ToolErrorFunction \| None \| object | If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, then no error message will be sent and instead an Exception will be raised. | _UNSET_FAILURE_ERROR_FUNCTION |
| strict_mode | bool | Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, it allows non-strict JSON schemas. For example, if a parameter has a default value, it will be optional, additional properties are allowed, etc. See here for more: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas | True |
| is_enabled | bool \| Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]] | Whether the tool is enabled. Can be a bool or a callable that takes the run context and agent and returns whether the tool is enabled. Disabled tools are hidden from the LLM at runtime. | True |
| needs_approval | bool \| Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]] | Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval. | False |
| tool_input_guardrails | list[ToolInputGuardrail[Any]] \| None | Optional list of guardrails to run before invoking the tool. | None |
| tool_output_guardrails | list[ToolOutputGuardrail[Any]] \| None | Optional list of guardrails to run after the tool returns. | None |
| timeout | float \| None | Optional timeout in seconds for each tool call. | None |
| timeout_behavior | ToolTimeoutBehavior | Timeout handling mode. "error_as_result" returns a model-visible message, while "raise_exception" raises ToolTimeoutError and fails the run. | 'error_as_result' |
| timeout_error_function | ToolErrorFunction \| None | Optional formatter used for timeout messages when timeout_behavior="error_as_result". | None |
Source code in src/agents/tool.py