Tools
MCPToolApprovalFunction
module-attribute
MCPToolApprovalFunction = Callable[
[MCPToolApprovalRequest],
MaybeAwaitable[MCPToolApprovalFunctionResult],
]
A function that approves or rejects an MCP tool call.
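For example, an allowlist-based approval callback might look like the sketch below. The `request.data.name` attribute path and the `approve`/`reason` result keys are assumed shapes for illustration, not a confirmed contract:

```python
from types import SimpleNamespace

# Tools the callback will approve without human review (illustrative).
ALLOWED_MCP_TOOLS = {"read_file", "list_directory"}

def approve_mcp_tool(request):
    # `request.data.name` and the approve/reason result keys are assumptions
    # for this sketch; check MCPToolApprovalRequest for the actual fields.
    tool_name = getattr(request.data, "name", "")
    if tool_name in ALLOWED_MCP_TOOLS:
        return {"approve": True}
    return {"approve": False, "reason": f"tool {tool_name!r} is not allowlisted"}
```

The function may also be `async`; the `MaybeAwaitable` return type means the SDK accepts either form.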
ShellApprovalFunction
module-attribute
ShellApprovalFunction = Callable[
[RunContextWrapper[Any], "ShellActionRequest", str],
MaybeAwaitable[bool],
]
A function that determines whether a shell action requires approval. Takes (run_context, action, call_id) and returns whether approval is needed.
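A minimal sketch of such a predicate, requiring approval only for commands whose first word looks destructive (the `commands` attribute on the action is an assumption for this sketch):

```python
# Command prefixes that should trigger an approval interruption (illustrative).
DANGEROUS_PREFIXES = ("rm", "mv", "dd", "chmod", "chown")

def shell_needs_approval(run_context, action, call_id) -> bool:
    # `action.commands` is an assumed attribute; consult ShellActionRequest
    # for the real shape.
    commands = getattr(action, "commands", [])
    return any(
        cmd.strip().split()[0] in DANGEROUS_PREFIXES
        for cmd in commands
        if cmd.strip()
    )
```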
ShellOnApprovalFunction
module-attribute
ShellOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ShellOnApprovalFunctionResult],
]
A function that auto-approves or rejects a shell tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.
ApplyPatchApprovalFunction
module-attribute
ApplyPatchApprovalFunction = Callable[
[RunContextWrapper[Any], ApplyPatchOperation, str],
MaybeAwaitable[bool],
]
A function that determines whether an apply_patch operation requires approval. Takes (run_context, operation, call_id) and returns whether approval is needed.
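As a sketch, a predicate that always escalates file deletions while letting other patch operations through (the `type` attribute and the `"delete_file"` value are assumptions about `ApplyPatchOperation`):

```python
def patch_needs_approval(run_context, operation, call_id) -> bool:
    # Deleting files always requires sign-off in this sketch; the `type`
    # attribute on ApplyPatchOperation is an assumed field name.
    return getattr(operation, "type", "") == "delete_file"
```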
ApplyPatchOnApprovalFunction
module-attribute
ApplyPatchOnApprovalFunction = Callable[
[RunContextWrapper[Any], "ToolApprovalItem"],
MaybeAwaitable[ApplyPatchOnApprovalFunctionResult],
]
A function that auto-approves or rejects an apply_patch tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.
LocalShellExecutor
module-attribute
LocalShellExecutor = Callable[
[LocalShellCommandRequest], MaybeAwaitable[str]
]
A function that executes a command on a shell.
ShellExecutor
module-attribute
ShellExecutor = Callable[
[ShellCommandRequest],
MaybeAwaitable[Union[str, ShellResult]],
]
Executes a shell command sequence and returns either text or structured output.
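A minimal local executor sketch that runs each command with `subprocess` and returns concatenated text output. The `request.data.action.commands` attribute path mirrors the idea of normalized `ShellCallData`, but is an assumption here:

```python
import subprocess

def run_commands(request) -> str:
    # The `data.action.commands` path is an assumed layout for this sketch;
    # see ShellCommandRequest / ShellCallData for the actual structure.
    commands = getattr(request.data.action, "commands", [])
    chunks = []
    for cmd in commands:
        proc = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=30
        )
        chunks.append(proc.stdout + proc.stderr)
    return "\n".join(chunks)
```

Returning a `ShellResult` instead of a string would let you attach structured per-command outcomes.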
Tool
module-attribute
Tool = Union[
FunctionTool,
FileSearchTool,
WebSearchTool,
ComputerTool[Any],
HostedMCPTool,
ShellTool,
ApplyPatchTool,
LocalShellTool,
ImageGenerationTool,
CodeInterpreterTool,
]
A tool that can be used in an agent.
ToolOutputText
ToolOutputTextDict
ToolOutputImage
Bases: BaseModel
Represents a tool output that should be sent to the model as an image.
You can provide either an image_url (URL or data URL) or a file_id for previously uploaded
content. The optional detail can control vision detail.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> ToolOutputImage
Validate that at least one of image_url or file_id is provided.
Source code in src/agents/tool.py
ToolOutputImageDict
Bases: TypedDict
TypedDict variant for image tool outputs.
Source code in src/agents/tool.py
ToolOutputFileContent
Bases: BaseModel
Represents a tool output that should be sent to the model as a file.
Provide one of file_data (base64), file_url, or file_id. You may also
provide an optional filename when using file_data to hint at the file name.
Source code in src/agents/tool.py
check_at_least_one_required_field
check_at_least_one_required_field() -> ToolOutputFileContent
Validate that at least one of file_data, file_url, or file_id is provided.
Source code in src/agents/tool.py
ToolOutputFileContentDict
Bases: TypedDict
TypedDict variant for file content tool outputs.
Source code in src/agents/tool.py
ComputerCreate
Bases: Protocol[ComputerT_co]
Initializes a computer for the current run context.
Source code in src/agents/tool.py
ComputerDispose
Bases: Protocol[ComputerT_contra]
Cleans up a computer initialized for a run context.
Source code in src/agents/tool.py
ComputerProvider
dataclass
Bases: Generic[ComputerT]
Configures create/dispose hooks for per-run computer lifecycle management.
Source code in src/agents/tool.py
FunctionToolResult
dataclass
Source code in src/agents/tool.py
run_item
instance-attribute
run_item: RunItem | None
The run item that was produced as a result of the tool call.
This can be None when the tool run is interrupted and no output item should be emitted yet.
interruptions
class-attribute
instance-attribute
interruptions: list[ToolApprovalItem] = field(
default_factory=list
)
Interruptions from nested agent runs (for agent-as-tool).
FunctionTool
dataclass
A tool that wraps a function. In most cases, you should use the function_tool helpers to
create a FunctionTool, as they let you easily wrap a Python function.
Source code in src/agents/tool.py
name
instance-attribute
The name of the tool, as shown to the LLM. Generally the name of the function.
params_json_schema
instance-attribute
The JSON schema for the tool's parameters.
on_invoke_tool
instance-attribute
on_invoke_tool: Callable[
[ToolContext[Any], str], Awaitable[Any]
]
A function that invokes the tool with the given context and parameters. The params passed are: 1. The tool run context. 2. The arguments from the LLM, as a JSON string.
You must return one of the structured tool output types (e.g. ToolOutputText, ToolOutputImage,
ToolOutputFileContent), a string representation of the tool output, a list of them,
or something we can call str() on.
In case of errors, you can either raise an Exception (which will cause the run to fail) or
return a string error message (which will be sent back to the LLM).
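For example, a handler that parses the JSON arguments string and returns the error as a string rather than raising might look like this sketch (the context parameter is unused here; the argument names `a` and `b` are illustrative):

```python
import json

async def on_invoke_add(ctx, args_json: str) -> str:
    # The LLM's arguments arrive as a JSON string; parse them first.
    try:
        args = json.loads(args_json)
        return str(args["a"] + args["b"])
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        # Returning a string here sends the error back to the LLM
        # instead of failing the whole run.
        return f"invalid arguments: {exc}"
```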
strict_json_schema
class-attribute
instance-attribute
Whether the JSON schema is in strict mode. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input.
is_enabled
class-attribute
instance-attribute
is_enabled: (
bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
]
) = True
Whether the tool is enabled. Either a bool or a Callable that takes the run context and agent and returns whether the tool is enabled. You can use this to dynamically enable/disable a tool based on your context/state.
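For example, a callable that exposes the tool only to admin users might look like this sketch (the `.context.is_admin` shape is an assumption about your own context object, not an SDK field):

```python
def admin_only(run_context, agent) -> bool:
    # `run_context.context` holds your user-defined context; `is_admin`
    # is an assumed attribute on it for this sketch.
    return bool(getattr(run_context.context, "is_admin", False))
```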
tool_input_guardrails
class-attribute
instance-attribute
tool_input_guardrails: (
list[ToolInputGuardrail[Any]] | None
) = None
Optional list of input guardrails to run before invoking this tool.
tool_output_guardrails
class-attribute
instance-attribute
tool_output_guardrails: (
list[ToolOutputGuardrail[Any]] | None
) = None
Optional list of output guardrails to run after invoking this tool.
needs_approval
class-attribute
instance-attribute
needs_approval: (
bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
]
) = False
Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval.
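A per-call predicate might gate approval on the arguments themselves, as in this sketch for a hypothetical transfer tool (the `"amount"` parameter name and the threshold are illustrative):

```python
async def transfer_needs_approval(run_context, tool_parameters, call_id) -> bool:
    # Interrupt the run only for large transfers; small ones proceed
    # automatically. The "amount" key is an illustrative parameter name.
    return tool_parameters.get("amount", 0) > 1000
```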
FileSearchTool
dataclass
A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
vector_store_ids
instance-attribute
The IDs of the vector stores to search.
max_num_results
class-attribute
instance-attribute
The maximum number of results to return.
include_search_results
class-attribute
instance-attribute
Whether to include the search results in the output produced by the LLM.
ranking_options
class-attribute
instance-attribute
Ranking options for search.
WebSearchTool
dataclass
A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
user_location
class-attribute
instance-attribute
Optional location for the search. Lets you customize results to be relevant to a location.
filters
class-attribute
instance-attribute
A filter to apply based on file attributes.
ComputerTool
dataclass
Bases: Generic[ComputerT]
A hosted tool that lets the LLM control a computer.
Source code in src/agents/tool.py
computer
instance-attribute
The computer implementation, or a factory that produces a computer per run.
on_safety_check
class-attribute
instance-attribute
on_safety_check: (
Callable[
[ComputerToolSafetyCheckData], MaybeAwaitable[bool]
]
| None
) = None
Optional callback to acknowledge computer tool safety checks.
ComputerToolSafetyCheckData
dataclass
Information about a computer tool safety check.
Source code in src/agents/tool.py
MCPToolApprovalRequest
dataclass
A request to approve an MCP tool call.
Source code in src/agents/tool.py
MCPToolApprovalFunctionResult
Bases: TypedDict
The result of an MCP tool approval function.
Source code in src/agents/tool.py
ShellOnApprovalFunctionResult
Bases: TypedDict
The result of a shell tool on_approval callback.
Source code in src/agents/tool.py
ApplyPatchOnApprovalFunctionResult
Bases: TypedDict
The result of an apply_patch tool on_approval callback.
Source code in src/agents/tool.py
HostedMCPTool
dataclass
A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and
call tools, without requiring a round trip back to your code.
If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible
environment, or you just prefer to run tool calls locally, then you can instead use the servers
in agents.mcp and pass Agent(mcp_servers=[...]) to the agent.
Source code in src/agents/tool.py
tool_config
instance-attribute
The MCP tool config, which includes the server URL and other settings.
on_approval_request
class-attribute
instance-attribute
on_approval_request: MCPToolApprovalFunction | None = None
An optional function that will be called if approval is requested for an MCP tool. If not
provided, you will need to manually add approvals/rejections to the input and call
Runner.run(...) again.
CodeInterpreterTool
dataclass
A tool that allows the LLM to execute code in a sandboxed environment.
Source code in src/agents/tool.py
ImageGenerationTool
dataclass
A tool that allows the LLM to generate images.
Source code in src/agents/tool.py
LocalShellCommandRequest
dataclass
A request to execute a command on a shell.
Source code in src/agents/tool.py
LocalShellTool
dataclass
A tool that allows the LLM to execute commands on a shell.
For more details, see: https://platform.openai.com/docs/guides/tools-local-shell
Source code in src/agents/tool.py
executor
instance-attribute
executor: LocalShellExecutor
A function that executes a command on a shell.
ShellCallOutcome
dataclass
ShellCommandOutput
dataclass
Structured output for a single shell command execution.
Source code in src/agents/tool.py
ShellResult
dataclass
ShellActionRequest
dataclass
ShellCallData
dataclass
Normalized shell call data provided to shell executors.
Source code in src/agents/tool.py
ShellCommandRequest
dataclass
ShellTool
dataclass
Next-generation shell tool. LocalShellTool will be deprecated in favor of this.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ShellApprovalFunction = False
Whether the shell tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, action, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ShellOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
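As a sketch, an on_approval handler could auto-approve read-only commands and reject everything else. The `commands` attribute on the approval item and the `approve`/`reason` result keys are assumed shapes for illustration:

```python
# Command prefixes considered safe to auto-approve (illustrative).
READ_ONLY = ("ls", "cat", "grep", "head", "tail")

def review_shell_call(run_context, approval_item):
    # `approval_item.commands` and the approve/reason keys are assumptions
    # for this sketch; check ToolApprovalItem and
    # ShellOnApprovalFunctionResult for the actual shapes.
    commands = getattr(approval_item, "commands", [])
    if all(cmd.strip().split()[0] in READ_ONLY for cmd in commands if cmd.strip()):
        return {"approve": True}
    return {"approve": False, "reason": "only read-only commands are auto-approved"}
```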
ApplyPatchTool
dataclass
Hosted apply_patch tool. Lets the model request file mutations via unified diffs.
Source code in src/agents/tool.py
needs_approval
class-attribute
instance-attribute
needs_approval: bool | ApplyPatchApprovalFunction = False
Whether the apply_patch tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, operation, call_id) and returns whether this specific call needs approval.
on_approval
class-attribute
instance-attribute
on_approval: ApplyPatchOnApprovalFunction | None = None
Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
resolve_computer
async
resolve_computer(
*,
tool: ComputerTool[Any],
run_context: RunContextWrapper[Any],
) -> ComputerLike
Resolve a computer for a given run context, initializing it if needed.
Source code in src/agents/tool.py
dispose_resolved_computers
async
dispose_resolved_computers(
*, run_context: RunContextWrapper[Any]
) -> None
Dispose any computer instances created for the provided run context.
Source code in src/agents/tool.py
default_tool_error_function
default_tool_error_function(
ctx: RunContextWrapper[Any], error: Exception
) -> str
The default tool error function, which just returns a generic error message.
Source code in src/agents/tool.py
function_tool
function_tool(
func: ToolFunction[...],
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
) -> FunctionTool
function_tool(
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction | None = None,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
func: ToolFunction[...] | None = None,
*,
name_override: str | None = None,
description_override: str | None = None,
docstring_style: DocstringStyle | None = None,
use_docstring_info: bool = True,
failure_error_function: ToolErrorFunction
| None = default_tool_error_function,
strict_mode: bool = True,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase],
MaybeAwaitable[bool],
] = True,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
tool_input_guardrails: list[ToolInputGuardrail[Any]]
| None = None,
tool_output_guardrails: list[ToolOutputGuardrail[Any]]
| None = None,
) -> (
FunctionTool
| Callable[[ToolFunction[...]], FunctionTool]
)
Decorator to create a FunctionTool from a function. By default, we will: 1. Parse the function signature to create a JSON schema for the tool's parameters. 2. Use the function's docstring to populate the tool's description. 3. Use the function's docstring to populate argument descriptions. The docstring style is detected automatically, but you can override it.
If the function takes a RunContextWrapper as the first argument, it must match the
context type of the agent that uses the tool.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `ToolFunction[...] \| None` | The function to wrap. | `None` |
| `name_override` | `str \| None` | If provided, use this name for the tool instead of the function's name. | `None` |
| `description_override` | `str \| None` | If provided, use this description for the tool instead of the function's docstring. | `None` |
| `docstring_style` | `DocstringStyle \| None` | If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style. | `None` |
| `use_docstring_info` | `bool` | If True, use the function's docstring to populate the tool's description and argument descriptions. | `True` |
| `failure_error_function` | `ToolErrorFunction \| None` | If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, then no error message will be sent and instead an Exception will be raised. | `default_tool_error_function` |
| `strict_mode` | `bool` | Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, it allows non-strict JSON schemas. For example, if a parameter has a default value, it will be optional, additional properties are allowed, etc. See here for more: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas | `True` |
| `is_enabled` | `bool \| Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]]` | Whether the tool is enabled. Can be a bool or a callable that takes the run context and agent and returns whether the tool is enabled. Disabled tools are hidden from the LLM at runtime. | `True` |
| `needs_approval` | `bool \| Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]]` | Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval. | `False` |
| `tool_input_guardrails` | `list[ToolInputGuardrail[Any]] \| None` | Optional list of guardrails to run before invoking the tool. | `None` |
| `tool_output_guardrails` | `list[ToolOutputGuardrail[Any]] \| None` | Optional list of guardrails to run after the tool returns. | `None` |
Source code in src/agents/tool.py