Tools

MCPToolApprovalFunction module-attribute

MCPToolApprovalFunction = Callable[
    [MCPToolApprovalRequest],
    MaybeAwaitable[MCPToolApprovalFunctionResult],
]

A function that approves or rejects a tool call.
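As a hedged sketch of the shape such a callback takes: the request's `data` field carries the MCP approval request, and returning a plain dict satisfies the MCPToolApprovalFunctionResult TypedDict at runtime. The `delete_` prefix policy is purely illustrative.

```python
# Minimal MCPToolApprovalFunction sketch. The hypothetical policy rejects
# tools whose names look destructive; everything else is approved.
def approve_mcp_call(request) -> dict:
    tool_name = getattr(request.data, "name", "")
    if tool_name.startswith("delete_"):
        # A rejection may carry an optional reason, sent back with the result.
        return {"approve": False, "reason": "destructive tools are blocked"}
    return {"approve": True}
```

The function may also be `async`; the `MaybeAwaitable` return type accepts either form.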

ShellApprovalFunction module-attribute

ShellApprovalFunction = Callable[
    [RunContextWrapper[Any], "ShellActionRequest", str],
    MaybeAwaitable[bool],
]

A function that determines whether a shell action requires approval. Takes (run_context, action, call_id) and returns whether approval is needed.
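For example, a predicate can inspect the action's `commands` list and gate only commands that look risky. The marker list below is a hypothetical policy, not part of the SDK.

```python
# ShellApprovalFunction sketch: return True when the shell action should be
# interrupted for approval. `action` is a ShellActionRequest whose `commands`
# field holds the command strings.
def shell_needs_approval(run_context, action, call_id) -> bool:
    risky_markers = ("sudo", "rm -rf")
    return any(marker in cmd for cmd in action.commands for marker in risky_markers)
```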

ShellOnApprovalFunction module-attribute

ShellOnApprovalFunction = Callable[
    [RunContextWrapper[Any], "ToolApprovalItem"],
    MaybeAwaitable[ShellOnApprovalFunctionResult],
]

A function that auto-approves or rejects a shell tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.

ApplyPatchApprovalFunction module-attribute

ApplyPatchApprovalFunction = Callable[
    [RunContextWrapper[Any], ApplyPatchOperation, str],
    MaybeAwaitable[bool],
]

A function that determines whether an apply_patch operation requires approval. Takes (run_context, operation, call_id) and returns whether approval is needed.

ApplyPatchOnApprovalFunction module-attribute

ApplyPatchOnApprovalFunction = Callable[
    [RunContextWrapper[Any], "ToolApprovalItem"],
    MaybeAwaitable[ApplyPatchOnApprovalFunctionResult],
]

A function that auto-approves or rejects an apply_patch tool call when approval is needed. Takes (run_context, approval_item) and returns approval decision.
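One plausible shape for such a callback, shown as a sketch: defer the decision to a flag on your own context object (RunContextWrapper exposes it as `.context`). The `allow_file_edits` flag is hypothetical; a plain dict satisfies the ApplyPatchOnApprovalFunctionResult TypedDict.

```python
# ApplyPatchOnApprovalFunction sketch: auto-approve patches only when the
# caller's context object opted in to file edits.
def on_apply_patch_approval(run_context, approval_item) -> dict:
    if getattr(run_context.context, "allow_file_edits", False):
        return {"approve": True}
    return {"approve": False, "reason": "file edits are disabled for this run"}
```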

LocalShellExecutor module-attribute

LocalShellExecutor = Callable[
    [LocalShellCommandRequest], MaybeAwaitable[str]
]

A function that executes a command on a shell.

ShellToolContainerSkill module-attribute

ShellToolContainerSkill = Union[
    ShellToolSkillReference, ShellToolInlineSkill
]

Container skill configuration.

ShellToolContainerNetworkPolicy module-attribute

ShellToolContainerNetworkPolicy = Union[
    ShellToolContainerNetworkPolicyAllowlist,
    ShellToolContainerNetworkPolicyDisabled,
]

Network policy configuration for hosted shell containers.

ShellToolHostedEnvironment module-attribute

ShellToolHostedEnvironment = Union[
    ShellToolContainerAutoEnvironment,
    ShellToolContainerReferenceEnvironment,
]

Hosted shell environment variants.

ShellToolEnvironment module-attribute

ShellToolEnvironment = Union[
    ShellToolLocalEnvironment, ShellToolHostedEnvironment
]

All supported shell environments.

ShellExecutor module-attribute

ShellExecutor = Callable[
    [ShellCommandRequest],
    MaybeAwaitable[Union[str, ShellResult]],
]

Executes a shell command sequence and returns either text or structured output.
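A minimal local executor sketch: run each command in `request.data.action.commands` with `subprocess` and return plain text. Returning a ShellResult with structured ShellCommandOutput entries is the richer alternative; the attribute access paths below follow the ShellCommandRequest/ShellCallData/ShellActionRequest dataclasses documented later on this page.

```python
import subprocess

# ShellExecutor sketch: sequentially run each command and join the output.
def run_commands(request) -> str:
    action = request.data.action
    # timeout_ms may be None; fall back to a 10s default for this sketch.
    timeout = (action.timeout_ms or 10_000) / 1000
    chunks = []
    for cmd in action.commands:
        proc = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=timeout
        )
        chunks.append(proc.stdout + proc.stderr)
    return "\n".join(chunks)
```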

Tool module-attribute

A tool that can be used in an agent.

ToolOutputText

Bases: BaseModel

Represents a tool output that should be sent to the model as text.

Source code in src/agents/tool.py
class ToolOutputText(BaseModel):
    """Represents a tool output that should be sent to the model as text."""

    type: Literal["text"] = "text"
    text: str

ToolOutputTextDict

Bases: TypedDict

TypedDict variant for text tool outputs.

Source code in src/agents/tool.py
class ToolOutputTextDict(TypedDict, total=False):
    """TypedDict variant for text tool outputs."""

    type: Literal["text"]
    text: str

ToolOutputImage

Bases: BaseModel

Represents a tool output that should be sent to the model as an image.

You can provide either an image_url (URL or data URL) or a file_id for previously uploaded content. The optional detail can control vision detail.

Source code in src/agents/tool.py
class ToolOutputImage(BaseModel):
    """Represents a tool output that should be sent to the model as an image.

    You can provide either an `image_url` (URL or data URL) or a `file_id` for previously uploaded
    content. The optional `detail` can control vision detail.
    """

    type: Literal["image"] = "image"
    image_url: str | None = None
    file_id: str | None = None
    detail: Literal["low", "high", "auto"] | None = None

    @model_validator(mode="after")
    def check_at_least_one_required_field(self) -> ToolOutputImage:
        """Validate that at least one of image_url or file_id is provided."""
        if self.image_url is None and self.file_id is None:
            raise ValueError("At least one of image_url or file_id must be provided")
        return self

check_at_least_one_required_field

check_at_least_one_required_field() -> ToolOutputImage

Validate that at least one of image_url or file_id is provided.

Source code in src/agents/tool.py
@model_validator(mode="after")
def check_at_least_one_required_field(self) -> ToolOutputImage:
    """Validate that at least one of image_url or file_id is provided."""
    if self.image_url is None and self.file_id is None:
        raise ValueError("At least one of image_url or file_id must be provided")
    return self

ToolOutputImageDict

Bases: TypedDict

TypedDict variant for image tool outputs.

Source code in src/agents/tool.py
class ToolOutputImageDict(TypedDict, total=False):
    """TypedDict variant for image tool outputs."""

    type: Literal["image"]
    image_url: NotRequired[str]
    file_id: NotRequired[str]
    detail: NotRequired[Literal["low", "high", "auto"]]
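Because TypedDicts are plain dicts at runtime, a tool implementation can return a literal dict matching this shape. The data URL below is a placeholder, not a real image.

```python
# A tool returning an image output as a ToolOutputImageDict-shaped dict.
def take_screenshot() -> dict:
    return {
        "type": "image",
        "image_url": "data:image/png;base64,iVBORw0KGgo=",  # placeholder data URL
        "detail": "low",
    }
```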

ToolOutputFileContent

Bases: BaseModel

Represents a tool output that should be sent to the model as a file.

Provide one of file_data (base64), file_url, or file_id. You may also provide an optional filename when using file_data to hint the file name.

Source code in src/agents/tool.py
class ToolOutputFileContent(BaseModel):
    """Represents a tool output that should be sent to the model as a file.

    Provide one of `file_data` (base64), `file_url`, or `file_id`. You may also
    provide an optional `filename` when using `file_data` to hint the file name.
    """

    type: Literal["file"] = "file"
    file_data: str | None = None
    file_url: str | None = None
    file_id: str | None = None
    filename: str | None = None

    @model_validator(mode="after")
    def check_at_least_one_required_field(self) -> ToolOutputFileContent:
        """Validate that at least one of file_data, file_url, or file_id is provided."""
        if self.file_data is None and self.file_url is None and self.file_id is None:
            raise ValueError("At least one of file_data, file_url, or file_id must be provided")
        return self

check_at_least_one_required_field

check_at_least_one_required_field() -> (
    ToolOutputFileContent
)

Validate that at least one of file_data, file_url, or file_id is provided.

Source code in src/agents/tool.py
@model_validator(mode="after")
def check_at_least_one_required_field(self) -> ToolOutputFileContent:
    """Validate that at least one of file_data, file_url, or file_id is provided."""
    if self.file_data is None and self.file_url is None and self.file_id is None:
        raise ValueError("At least one of file_data, file_url, or file_id must be provided")
    return self

ToolOutputFileContentDict

Bases: TypedDict

TypedDict variant for file content tool outputs.

Source code in src/agents/tool.py
class ToolOutputFileContentDict(TypedDict, total=False):
    """TypedDict variant for file content tool outputs."""

    type: Literal["file"]
    file_data: NotRequired[str]
    file_url: NotRequired[str]
    file_id: NotRequired[str]
    filename: NotRequired[str]

ComputerCreate

Bases: Protocol[ComputerT_co]

Initializes a computer for the current run context.

Source code in src/agents/tool.py
class ComputerCreate(Protocol[ComputerT_co]):
    """Initializes a computer for the current run context."""

    def __call__(self, *, run_context: RunContextWrapper[Any]) -> MaybeAwaitable[ComputerT_co]: ...

ComputerDispose

Bases: Protocol[ComputerT_contra]

Cleans up a computer initialized for a run context.

Source code in src/agents/tool.py
class ComputerDispose(Protocol[ComputerT_contra]):
    """Cleans up a computer initialized for a run context."""

    def __call__(
        self,
        *,
        run_context: RunContextWrapper[Any],
        computer: ComputerT_contra,
    ) -> MaybeAwaitable[None]: ...

ComputerProvider dataclass

Bases: Generic[ComputerT]

Configures create/dispose hooks for per-run computer lifecycle management.

Source code in src/agents/tool.py
@dataclass
class ComputerProvider(Generic[ComputerT]):
    """Configures create/dispose hooks for per-run computer lifecycle management."""

    create: ComputerCreate[ComputerT]
    dispose: ComputerDispose[ComputerT] | None = None

FunctionToolResult dataclass

Source code in src/agents/tool.py
@dataclass
class FunctionToolResult:
    tool: FunctionTool
    """The tool that was run."""

    output: Any
    """The output of the tool."""

    run_item: RunItem | None
    """The run item that was produced as a result of the tool call.

    This can be None when the tool run is interrupted and no output item should be emitted yet.
    """

    interruptions: list[ToolApprovalItem] = field(default_factory=list)
    """Interruptions from nested agent runs (for agent-as-tool)."""

    agent_run_result: Any = None  # RunResult | None, but avoid circular import
    """Nested agent run result (for agent-as-tool)."""

tool instance-attribute

tool: FunctionTool

The tool that was run.

output instance-attribute

output: Any

The output of the tool.

run_item instance-attribute

run_item: RunItem | None

The run item that was produced as a result of the tool call.

This can be None when the tool run is interrupted and no output item should be emitted yet.

interruptions class-attribute instance-attribute

interruptions: list[ToolApprovalItem] = field(
    default_factory=list
)

Interruptions from nested agent runs (for agent-as-tool).

agent_run_result class-attribute instance-attribute

agent_run_result: Any = None

Nested agent run result (for agent-as-tool).

FunctionTool dataclass

A tool that wraps a function. In most cases, you should use the function_tool helpers to create a FunctionTool, as they let you easily wrap a Python function.

Source code in src/agents/tool.py
@dataclass
class FunctionTool:
    """A tool that wraps a function. In most cases, you should use  the `function_tool` helpers to
    create a FunctionTool, as they let you easily wrap a Python function.
    """

    name: str
    """The name of the tool, as shown to the LLM. Generally the name of the function."""

    description: str
    """A description of the tool, as shown to the LLM."""

    params_json_schema: dict[str, Any]
    """The JSON schema for the tool's parameters."""

    on_invoke_tool: Callable[[ToolContext[Any], str], Awaitable[Any]]
    """A function that invokes the tool with the given context and parameters. The params passed
    are:
    1. The tool run context.
    2. The arguments from the LLM, as a JSON string.

    You must return one of the structured tool output types (e.g. ToolOutputText, ToolOutputImage,
    ToolOutputFileContent) or a string representation of the tool output, or a list of them,
    or something we can call `str()` on.
    In case of errors, you can either raise an Exception (which will cause the run to fail) or
    return a string error message (which will be sent back to the LLM).
    """

    strict_json_schema: bool = True
    """Whether the JSON schema is in strict mode. We **strongly** recommend setting this to True,
    as it increases the likelihood of correct JSON input."""

    is_enabled: bool | Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]] = True
    """Whether the tool is enabled. Either a bool or a Callable that takes the run context and agent
    and returns whether the tool is enabled. You can use this to dynamically enable/disable a tool
    based on your context/state."""

    # Keep guardrail fields before needs_approval to preserve v0.7.0 positional
    # constructor compatibility for public FunctionTool callers.
    # Tool-specific guardrails.
    tool_input_guardrails: list[ToolInputGuardrail[Any]] | None = None
    """Optional list of input guardrails to run before invoking this tool."""

    tool_output_guardrails: list[ToolOutputGuardrail[Any]] | None = None
    """Optional list of output guardrails to run after invoking this tool."""

    needs_approval: (
        bool | Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]]
    ) = False
    """Whether the tool needs approval before execution. If True, the run will be interrupted
    and the tool call will need to be approved using RunState.approve() or rejected using
    RunState.reject() before continuing. Can be a bool (always/never needs approval) or a
    function that takes (run_context, tool_parameters, call_id) and returns whether this
    specific call needs approval."""

    # Keep timeout fields after needs_approval to preserve positional constructor compatibility.
    timeout_seconds: float | None = None
    """Optional timeout (seconds) for each tool invocation."""

    timeout_behavior: ToolTimeoutBehavior = "error_as_result"
    """How to handle timeout events.

    - "error_as_result": return a model-visible timeout error string.
    - "raise_exception": raise a ToolTimeoutError and fail the run.
    """

    timeout_error_function: ToolErrorFunction | None = None
    """Optional formatter for timeout errors when timeout_behavior is "error_as_result"."""

    _is_agent_tool: bool = field(default=False, init=False, repr=False)
    """Internal flag indicating if this tool is an agent-as-tool."""

    _is_codex_tool: bool = field(default=False, init=False, repr=False)
    """Internal flag indicating if this tool is a Codex tool wrapper."""

    _agent_instance: Any = field(default=None, init=False, repr=False)
    """Internal reference to the agent instance if this is an agent-as-tool."""

    def __post_init__(self):
        if self.strict_json_schema:
            self.params_json_schema = ensure_strict_json_schema(self.params_json_schema)
        _validate_function_tool_timeout_config(self)

name instance-attribute

name: str

The name of the tool, as shown to the LLM. Generally the name of the function.

description instance-attribute

description: str

A description of the tool, as shown to the LLM.

params_json_schema instance-attribute

params_json_schema: dict[str, Any]

The JSON schema for the tool's parameters.

on_invoke_tool instance-attribute

on_invoke_tool: Callable[
    [ToolContext[Any], str], Awaitable[Any]
]

A function that invokes the tool with the given context and parameters. The params passed are:

1. The tool run context.
2. The arguments from the LLM, as a JSON string.

You must return one of the structured tool output types (e.g. ToolOutputText, ToolOutputImage, ToolOutputFileContent), a string representation of the tool output, a list of them, or something we can call str() on. In case of errors, you can either raise an Exception (which will cause the run to fail) or return a string error message (which will be sent back to the LLM).
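As a hedged sketch of such a handler, independent of the `function_tool` helpers: parse the JSON argument string yourself and return either a result string or an error string. The weather lookup is hypothetical.

```python
import json

# Raw on_invoke_tool handler sketch. The second argument is the LLM's
# arguments as a JSON string; returning an error string (rather than raising)
# sends the message back to the model instead of failing the run.
async def on_invoke_weather(ctx, args_json: str) -> str:
    args = json.loads(args_json or "{}")
    city = args.get("city")
    if not city:
        return "error: missing required argument 'city'"
    return f"{city}: 21C, clear"  # hypothetical lookup result
```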

strict_json_schema class-attribute instance-attribute

strict_json_schema: bool = True

Whether the JSON schema is in strict mode. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input.

is_enabled class-attribute instance-attribute

is_enabled: (
    bool
    | Callable[
        [RunContextWrapper[Any], AgentBase],
        MaybeAwaitable[bool],
    ]
) = True

Whether the tool is enabled. Either a bool or a Callable that takes the run context and agent and returns whether the tool is enabled. You can use this to dynamically enable/disable a tool based on your context/state.

tool_input_guardrails class-attribute instance-attribute

tool_input_guardrails: (
    list[ToolInputGuardrail[Any]] | None
) = None

Optional list of input guardrails to run before invoking this tool.

tool_output_guardrails class-attribute instance-attribute

tool_output_guardrails: (
    list[ToolOutputGuardrail[Any]] | None
) = None

Optional list of output guardrails to run after invoking this tool.

needs_approval class-attribute instance-attribute

needs_approval: (
    bool
    | Callable[
        [RunContextWrapper[Any], dict[str, Any], str],
        Awaitable[bool],
    ]
) = False

Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval.
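For the callable form, a sketch under an assumed policy: escalate only when a particular argument crosses a threshold. The `amount` parameter and the limit are hypothetical; note the callable variant must be awaitable.

```python
# Dynamic needs_approval sketch for FunctionTool: receives
# (run_context, tool_parameters, call_id) and returns whether this
# specific call should be interrupted for approval.
async def transfer_needs_approval(run_context, tool_parameters, call_id) -> bool:
    return tool_parameters.get("amount", 0) > 1_000
```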

timeout_seconds class-attribute instance-attribute

timeout_seconds: float | None = None

Optional timeout (seconds) for each tool invocation.

timeout_behavior class-attribute instance-attribute

timeout_behavior: ToolTimeoutBehavior = 'error_as_result'

How to handle timeout events.

  • "error_as_result": return a model-visible timeout error string.
  • "raise_exception": raise a ToolTimeoutError and fail the run.

timeout_error_function class-attribute instance-attribute

timeout_error_function: ToolErrorFunction | None = None

Optional formatter for timeout errors when timeout_behavior is "error_as_result".

FileSearchTool dataclass

A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.

Source code in src/agents/tool.py
@dataclass
class FileSearchTool:
    """A hosted tool that lets the LLM search through a vector store. Currently only supported with
    OpenAI models, using the Responses API.
    """

    vector_store_ids: list[str]
    """The IDs of the vector stores to search."""

    max_num_results: int | None = None
    """The maximum number of results to return."""

    include_search_results: bool = False
    """Whether to include the search results in the output produced by the LLM."""

    ranking_options: RankingOptions | None = None
    """Ranking options for search."""

    filters: Filters | None = None
    """A filter to apply based on file attributes."""

    @property
    def name(self):
        return "file_search"

vector_store_ids instance-attribute

vector_store_ids: list[str]

The IDs of the vector stores to search.

max_num_results class-attribute instance-attribute

max_num_results: int | None = None

The maximum number of results to return.

include_search_results class-attribute instance-attribute

include_search_results: bool = False

Whether to include the search results in the output produced by the LLM.

ranking_options class-attribute instance-attribute

ranking_options: RankingOptions | None = None

Ranking options for search.

filters class-attribute instance-attribute

filters: Filters | None = None

A filter to apply based on file attributes.

WebSearchTool dataclass

A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.

Source code in src/agents/tool.py
@dataclass
class WebSearchTool:
    """A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models,
    using the Responses API.
    """

    user_location: UserLocation | None = None
    """Optional location for the search. Lets you customize results to be relevant to a location."""

    filters: WebSearchToolFilters | None = None
    """A filter to apply based on file attributes."""

    search_context_size: Literal["low", "medium", "high"] = "medium"
    """The amount of context to use for the search."""

    @property
    def name(self):
        return "web_search"

user_location class-attribute instance-attribute

user_location: UserLocation | None = None

Optional location for the search. Lets you customize results to be relevant to a location.

filters class-attribute instance-attribute

filters: WebSearchToolFilters | None = None

A filter to apply based on file attributes.

search_context_size class-attribute instance-attribute

search_context_size: Literal["low", "medium", "high"] = (
    "medium"
)

The amount of context to use for the search.

ComputerTool dataclass

Bases: Generic[ComputerT]

A hosted tool that lets the LLM control a computer.

Source code in src/agents/tool.py
@dataclass(eq=False)
class ComputerTool(Generic[ComputerT]):
    """A hosted tool that lets the LLM control a computer."""

    computer: ComputerConfig[ComputerT]
    """The computer implementation, or a factory that produces a computer per run."""

    on_safety_check: Callable[[ComputerToolSafetyCheckData], MaybeAwaitable[bool]] | None = None
    """Optional callback to acknowledge computer tool safety checks."""

    def __post_init__(self) -> None:
        _store_computer_initializer(self)

    @property
    def name(self):
        return "computer_use_preview"

computer instance-attribute

computer: ComputerConfig[ComputerT]

The computer implementation, or a factory that produces a computer per run.

on_safety_check class-attribute instance-attribute

on_safety_check: (
    Callable[
        [ComputerToolSafetyCheckData], MaybeAwaitable[bool]
    ]
    | None
) = None

Optional callback to acknowledge computer tool safety checks.

ComputerToolSafetyCheckData dataclass

Information about a computer tool safety check.

Source code in src/agents/tool.py
@dataclass
class ComputerToolSafetyCheckData:
    """Information about a computer tool safety check."""

    ctx_wrapper: RunContextWrapper[Any]
    """The run context."""

    agent: Agent[Any]
    """The agent performing the computer action."""

    tool_call: ResponseComputerToolCall
    """The computer tool call."""

    safety_check: PendingSafetyCheck
    """The pending safety check to acknowledge."""

ctx_wrapper instance-attribute

ctx_wrapper: RunContextWrapper[Any]

The run context.

agent instance-attribute

agent: Agent[Any]

The agent performing the computer action.

tool_call instance-attribute

tool_call: ResponseComputerToolCall

The computer tool call.

safety_check instance-attribute

safety_check: PendingSafetyCheck

The pending safety check to acknowledge.

MCPToolApprovalRequest dataclass

A request to approve a tool call.

Source code in src/agents/tool.py
@dataclass
class MCPToolApprovalRequest:
    """A request to approve a tool call."""

    ctx_wrapper: RunContextWrapper[Any]
    """The run context."""

    data: McpApprovalRequest
    """The data from the MCP tool approval request."""

ctx_wrapper instance-attribute

ctx_wrapper: RunContextWrapper[Any]

The run context.

data instance-attribute

data: McpApprovalRequest

The data from the MCP tool approval request.

MCPToolApprovalFunctionResult

Bases: TypedDict

The result of an MCP tool approval function.

Source code in src/agents/tool.py
class MCPToolApprovalFunctionResult(TypedDict):
    """The result of an MCP tool approval function."""

    approve: bool
    """Whether to approve the tool call."""

    reason: NotRequired[str]
    """An optional reason, if rejected."""

approve instance-attribute

approve: bool

Whether to approve the tool call.

reason instance-attribute

reason: NotRequired[str]

An optional reason, if rejected.

ShellOnApprovalFunctionResult

Bases: TypedDict

The result of a shell tool on_approval callback.

Source code in src/agents/tool.py
class ShellOnApprovalFunctionResult(TypedDict):
    """The result of a shell tool on_approval callback."""

    approve: bool
    """Whether to approve the tool call."""

    reason: NotRequired[str]
    """An optional reason, if rejected."""

approve instance-attribute

approve: bool

Whether to approve the tool call.

reason instance-attribute

reason: NotRequired[str]

An optional reason, if rejected.

ApplyPatchOnApprovalFunctionResult

Bases: TypedDict

The result of an apply_patch tool on_approval callback.

Source code in src/agents/tool.py
class ApplyPatchOnApprovalFunctionResult(TypedDict):
    """The result of an apply_patch tool on_approval callback."""

    approve: bool
    """Whether to approve the tool call."""

    reason: NotRequired[str]
    """An optional reason, if rejected."""

approve instance-attribute

approve: bool

Whether to approve the tool call.

reason instance-attribute

reason: NotRequired[str]

An optional reason, if rejected.

HostedMCPTool dataclass

A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and call tools, without requiring a round trip back to your code. If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible environment, or you just prefer to run tool calls locally, then you can instead use the servers in agents.mcp and pass Agent(mcp_servers=[...]) to the agent.

Source code in src/agents/tool.py
@dataclass
class HostedMCPTool:
    """A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and
    call tools, without requiring a round trip back to your code.
    If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible
    environment, or you just prefer to run tool calls locally, then you can instead use the servers
    in `agents.mcp` and pass `Agent(mcp_servers=[...])` to the agent."""

    tool_config: Mcp
    """The MCP tool config, which includes the server URL and other settings."""

    on_approval_request: MCPToolApprovalFunction | None = None
    """An optional function that will be called if approval is requested for an MCP tool. If not
    provided, you will need to manually add approvals/rejections to the input and call
    `Runner.run(...)` again."""

    @property
    def name(self):
        return "hosted_mcp"

tool_config instance-attribute

tool_config: Mcp

The MCP tool config, which includes the server URL and other settings.

on_approval_request class-attribute instance-attribute

on_approval_request: MCPToolApprovalFunction | None = None

An optional function that will be called if approval is requested for an MCP tool. If not provided, you will need to manually add approvals/rejections to the input and call Runner.run(...) again.

CodeInterpreterTool dataclass

A tool that allows the LLM to execute code in a sandboxed environment.

Source code in src/agents/tool.py
@dataclass
class CodeInterpreterTool:
    """A tool that allows the LLM to execute code in a sandboxed environment."""

    tool_config: CodeInterpreter
    """The tool config, which includes the container and other settings."""

    @property
    def name(self):
        return "code_interpreter"

tool_config instance-attribute

tool_config: CodeInterpreter

The tool config, which includes the container and other settings.

ImageGenerationTool dataclass

A tool that allows the LLM to generate images.

Source code in src/agents/tool.py
@dataclass
class ImageGenerationTool:
    """A tool that allows the LLM to generate images."""

    tool_config: ImageGeneration
    """The tool config, which includes image generation settings."""

    @property
    def name(self):
        return "image_generation"

tool_config instance-attribute

tool_config: ImageGeneration

The tool config, which includes image generation settings.

LocalShellCommandRequest dataclass

A request to execute a command on a shell.

Source code in src/agents/tool.py
@dataclass
class LocalShellCommandRequest:
    """A request to execute a command on a shell."""

    ctx_wrapper: RunContextWrapper[Any]
    """The run context."""

    data: LocalShellCall
    """The data from the local shell tool call."""

ctx_wrapper instance-attribute

ctx_wrapper: RunContextWrapper[Any]

The run context.

data instance-attribute

data: LocalShellCall

The data from the local shell tool call.

LocalShellTool dataclass

A tool that allows the LLM to execute commands on a shell.

For more details, see: https://platform.openai.com/docs/guides/tools-local-shell

Source code in src/agents/tool.py
@dataclass
class LocalShellTool:
    """A tool that allows the LLM to execute commands on a shell.

    For more details, see:
    https://platform.openai.com/docs/guides/tools-local-shell
    """

    executor: LocalShellExecutor
    """A function that executes a command on a shell."""

    @property
    def name(self):
        return "local_shell"

executor instance-attribute

executor: LocalShellExecutor

A function that executes a command on a shell.

ShellToolLocalSkill

Bases: TypedDict

Skill metadata for local shell environments.

Source code in src/agents/tool.py
class ShellToolLocalSkill(TypedDict):
    """Skill metadata for local shell environments."""

    description: str
    name: str
    path: str

ShellToolSkillReference

Bases: TypedDict

Reference to a hosted shell skill.

Source code in src/agents/tool.py
class ShellToolSkillReference(TypedDict):
    """Reference to a hosted shell skill."""

    type: Literal["skill_reference"]
    skill_id: str
    version: NotRequired[str]

ShellToolInlineSkillSource

Bases: TypedDict

Inline skill source payload.

Source code in src/agents/tool.py
class ShellToolInlineSkillSource(TypedDict):
    """Inline skill source payload."""

    data: str
    media_type: Literal["application/zip"]
    type: Literal["base64"]

ShellToolInlineSkill

Bases: TypedDict

Inline hosted shell skill bundle.

Source code in src/agents/tool.py
class ShellToolInlineSkill(TypedDict):
    """Inline hosted shell skill bundle."""

    description: str
    name: str
    source: ShellToolInlineSkillSource
    type: Literal["inline"]

ShellToolContainerNetworkPolicyDomainSecret

Bases: TypedDict

A secret bound to a single domain in allowlist mode.

Source code in src/agents/tool.py
class ShellToolContainerNetworkPolicyDomainSecret(TypedDict):
    """A secret bound to a single domain in allowlist mode."""

    domain: str
    name: str
    value: str

ShellToolContainerNetworkPolicyAllowlist

Bases: TypedDict

Allowlist network policy for hosted containers.

Source code in src/agents/tool.py
class ShellToolContainerNetworkPolicyAllowlist(TypedDict):
    """Allowlist network policy for hosted containers."""

    allowed_domains: list[str]
    type: Literal["allowlist"]
    domain_secrets: NotRequired[list[ShellToolContainerNetworkPolicyDomainSecret]]

ShellToolContainerNetworkPolicyDisabled

Bases: TypedDict

Disabled network policy for hosted containers.

Source code in src/agents/tool.py
class ShellToolContainerNetworkPolicyDisabled(TypedDict):
    """Disabled network policy for hosted containers."""

    type: Literal["disabled"]

ShellToolLocalEnvironment

Bases: TypedDict

Local shell execution environment.

Source code in src/agents/tool.py
class ShellToolLocalEnvironment(TypedDict):
    """Local shell execution environment."""

    type: Literal["local"]
    skills: NotRequired[list[ShellToolLocalSkill]]

ShellToolContainerAutoEnvironment

Bases: TypedDict

Auto-provisioned hosted container environment.

Source code in src/agents/tool.py
class ShellToolContainerAutoEnvironment(TypedDict):
    """Auto-provisioned hosted container environment."""

    type: Literal["container_auto"]
    file_ids: NotRequired[list[str]]
    memory_limit: NotRequired[Literal["1g", "4g", "16g", "64g"] | None]
    network_policy: NotRequired[ShellToolContainerNetworkPolicy]
    skills: NotRequired[list[ShellToolContainerSkill]]

ShellToolContainerReferenceEnvironment

Bases: TypedDict

Reference to an existing hosted container.

Source code in src/agents/tool.py
class ShellToolContainerReferenceEnvironment(TypedDict):
    """Reference to an existing hosted container."""

    type: Literal["container_reference"]
    container_id: str

ShellCallOutcome dataclass

Describes the terminal condition of a shell command.

Source code in src/agents/tool.py
@dataclass
class ShellCallOutcome:
    """Describes the terminal condition of a shell command."""

    type: Literal["exit", "timeout"]
    exit_code: int | None = None

ShellCommandOutput dataclass

Structured output for a single shell command execution.

Source code in src/agents/tool.py
@dataclass
class ShellCommandOutput:
    """Structured output for a single shell command execution."""

    stdout: str = ""
    stderr: str = ""
    outcome: ShellCallOutcome = field(default_factory=lambda: ShellCallOutcome(type="exit"))
    command: str | None = None
    provider_data: dict[str, Any] | None = None

    @property
    def exit_code(self) -> int | None:
        return self.outcome.exit_code

    @property
    def status(self) -> Literal["completed", "timeout"]:
        return "timeout" if self.outcome.type == "timeout" else "completed"

ShellResult dataclass

Result returned by a shell executor.

Source code in src/agents/tool.py
@dataclass
class ShellResult:
    """Result returned by a shell executor."""

    output: list[ShellCommandOutput]
    max_output_length: int | None = None
    provider_data: dict[str, Any] | None = None

ShellActionRequest dataclass

Action payload for a next-generation shell call.

Source code in src/agents/tool.py
@dataclass
class ShellActionRequest:
    """Action payload for a next-generation shell call."""

    commands: list[str]
    timeout_ms: int | None = None
    max_output_length: int | None = None

ShellCallData dataclass

Normalized shell call data provided to shell executors.

Source code in src/agents/tool.py
@dataclass
class ShellCallData:
    """Normalized shell call data provided to shell executors."""

    call_id: str
    action: ShellActionRequest
    status: Literal["in_progress", "completed"] | None = None
    raw: Any | None = None

ShellCommandRequest dataclass

A request to execute a modern shell call.

Source code in src/agents/tool.py
@dataclass
class ShellCommandRequest:
    """A request to execute a modern shell call."""

    ctx_wrapper: RunContextWrapper[Any]
    data: ShellCallData

ShellTool dataclass

Next-generation shell tool. LocalShellTool will be deprecated in favor of this.

Source code in src/agents/tool.py
@dataclass
class ShellTool:
    """Next-generation shell tool. LocalShellTool will be deprecated in favor of this."""

    executor: ShellExecutor | None = None
    name: str = "shell"
    needs_approval: bool | ShellApprovalFunction = False
    """Whether the shell tool needs approval before execution. If True, the run will be interrupted
    and the tool call will need to be approved using RunState.approve() or rejected using
    RunState.reject() before continuing. Can be a bool (always/never needs approval) or a
    function that takes (run_context, action, call_id) and returns whether this specific call
    needs approval.
    """
    on_approval: ShellOnApprovalFunction | None = None
    """Optional handler to auto-approve or reject when approval is required.
    If provided, it will be invoked immediately when an approval is needed.
    """
    environment: ShellToolEnvironment | None = None
    """Execution environment for shell commands.

    If omitted, local mode is used.
    """

    def __post_init__(self) -> None:
        """Validate shell tool configuration and normalize environment fields."""
        normalized_environment = _normalize_shell_tool_environment(self.environment)
        self.environment = normalized_environment

        environment_type = normalized_environment["type"]
        if environment_type == "local":
            if self.executor is None:
                raise UserError("ShellTool with local environment requires an executor.")
            return

        if self.executor is not None:
            raise UserError("ShellTool with hosted environment does not accept an executor.")
        if self.needs_approval is not False or self.on_approval is not None:
            raise UserError(
                "ShellTool with hosted environment does not support needs_approval or on_approval."
            )
        self.needs_approval = False
        self.on_approval = None

    @property
    def type(self) -> str:
        return "shell"

needs_approval class-attribute instance-attribute

needs_approval: bool | ShellApprovalFunction = False

Whether the shell tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, action, call_id) and returns whether this specific call needs approval.

on_approval class-attribute instance-attribute

on_approval: ShellOnApprovalFunction | None = None

Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.

environment class-attribute instance-attribute

environment: ShellToolEnvironment | None = None

Execution environment for shell commands.

If omitted, local mode is used.

__post_init__

__post_init__() -> None

Validate shell tool configuration and normalize environment fields.

Source code in src/agents/tool.py
def __post_init__(self) -> None:
    """Validate shell tool configuration and normalize environment fields."""
    normalized_environment = _normalize_shell_tool_environment(self.environment)
    self.environment = normalized_environment

    environment_type = normalized_environment["type"]
    if environment_type == "local":
        if self.executor is None:
            raise UserError("ShellTool with local environment requires an executor.")
        return

    if self.executor is not None:
        raise UserError("ShellTool with hosted environment does not accept an executor.")
    if self.needs_approval is not False or self.on_approval is not None:
        raise UserError(
            "ShellTool with hosted environment does not support needs_approval or on_approval."
        )
    self.needs_approval = False
    self.on_approval = None

ApplyPatchTool dataclass

Hosted apply_patch tool. Lets the model request file mutations via unified diffs.

Source code in src/agents/tool.py
@dataclass
class ApplyPatchTool:
    """Hosted apply_patch tool. Lets the model request file mutations via unified diffs."""

    editor: ApplyPatchEditor
    name: str = "apply_patch"
    needs_approval: bool | ApplyPatchApprovalFunction = False
    """Whether the apply_patch tool needs approval before execution. If True, the run will be
    interrupted and the tool call will need to be approved using RunState.approve() or rejected
    using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a
    function that takes (run_context, operation, call_id) and returns whether this specific call
    needs approval.
    """
    on_approval: ApplyPatchOnApprovalFunction | None = None
    """Optional handler to auto-approve or reject when approval is required.
    If provided, it will be invoked immediately when an approval is needed.
    """

    @property
    def type(self) -> str:
        return "apply_patch"

needs_approval class-attribute instance-attribute

needs_approval: bool | ApplyPatchApprovalFunction = False

Whether the apply_patch tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, operation, call_id) and returns whether this specific call needs approval.

on_approval class-attribute instance-attribute

on_approval: ApplyPatchOnApprovalFunction | None = None

Optional handler to auto-approve or reject when approval is required. If provided, it will be invoked immediately when an approval is needed.
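When needs_approval is a function, it can gate only the risky subset of calls. A hedged sketch of such a predicate, using a plain dict with an assumed "type" key to stand in for ApplyPatchOperation (the real operation shape may differ):

```python
import asyncio


async def needs_approval_for_deletes(run_context, operation: dict, call_id: str) -> bool:
    """Illustrative per-call approval predicate with the (run_context, operation,
    call_id) signature described above. The dict stand-in and the "delete_file"
    value are assumptions for illustration, not the SDK's operation type."""
    # Only destructive operations pause the run for approval.
    return operation.get("type") == "delete_file"


print(asyncio.run(needs_approval_for_deletes(None, {"type": "delete_file"}, "call_1")))  # True
```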

resolve_computer async

resolve_computer(
    *,
    tool: ComputerTool[Any],
    run_context: RunContextWrapper[Any],
) -> ComputerLike

Resolve a computer for a given run context, initializing it if needed.

Source code in src/agents/tool.py
async def resolve_computer(
    *, tool: ComputerTool[Any], run_context: RunContextWrapper[Any]
) -> ComputerLike:
    """Resolve a computer for a given run context, initializing it if needed."""
    per_context = _computer_cache.get(tool)
    if per_context is None:
        per_context = weakref.WeakKeyDictionary()
        _computer_cache[tool] = per_context

    cached = per_context.get(run_context)
    if cached is not None:
        _track_resolved_computer(tool=tool, run_context=run_context, resolved=cached)
        return cached.computer

    initializer_config = _get_computer_initializer(tool)
    lifecycle: ComputerProvider[Any] | None = (
        cast(ComputerProvider[Any], initializer_config)
        if _is_computer_provider(initializer_config)
        else None
    )
    initializer: ComputerCreate[Any] | None = None
    disposer: ComputerDispose[Any] | None = lifecycle.dispose if lifecycle else None

    if lifecycle is not None:
        initializer = lifecycle.create
    elif callable(initializer_config):
        initializer = initializer_config
    elif _is_computer_provider(tool.computer):
        lifecycle_provider = cast(ComputerProvider[Any], tool.computer)
        initializer = lifecycle_provider.create
        disposer = lifecycle_provider.dispose

    if initializer:
        computer_candidate = initializer(run_context=run_context)
        computer = (
            await computer_candidate
            if inspect.isawaitable(computer_candidate)
            else computer_candidate
        )
    else:
        computer = cast(ComputerLike, tool.computer)

    if not isinstance(computer, (Computer, AsyncComputer)):
        raise UserError("The computer tool did not provide a computer instance.")

    resolved = _ResolvedComputer(computer=computer, dispose=disposer)
    per_context[run_context] = resolved
    _track_resolved_computer(tool=tool, run_context=run_context, resolved=resolved)
    tool.computer = computer
    return computer
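The cache above is keyed first by tool, then by run context via weakref.WeakKeyDictionary, so entries disappear once a run context is garbage-collected. A stdlib-only sketch of that caching pattern, with stand-in classes rather than the SDK's types:

```python
import weakref


class Tool:  # stand-in for ComputerTool
    pass


class RunContext:  # stand-in for RunContextWrapper
    pass


# tool -> (run_context -> resolved value); the inner WeakKeyDictionary drops
# its entry automatically when the run context is no longer referenced.
_cache: dict[Tool, "weakref.WeakKeyDictionary[RunContext, str]"] = {}


def resolve(tool: Tool, ctx: RunContext) -> str:
    per_context = _cache.setdefault(tool, weakref.WeakKeyDictionary())
    cached = per_context.get(ctx)
    if cached is not None:
        return cached  # second and later calls hit the cache
    resolved = f"computer-{id(ctx)}"  # the SDK would invoke the initializer here
    per_context[ctx] = resolved
    return resolved


tool, ctx = Tool(), RunContext()
assert resolve(tool, ctx) is resolve(tool, ctx)  # same cached object both times
```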

dispose_resolved_computers async

dispose_resolved_computers(
    *, run_context: RunContextWrapper[Any]
) -> None

Dispose any computer instances created for the provided run context.

Source code in src/agents/tool.py
async def dispose_resolved_computers(*, run_context: RunContextWrapper[Any]) -> None:
    """Dispose any computer instances created for the provided run context."""
    resolved_by_tool = _computers_by_run_context.pop(run_context, None)
    if not resolved_by_tool:
        return

    disposers: list[tuple[ComputerDispose[ComputerLike], ComputerLike]] = []

    for tool, _resolved in resolved_by_tool.items():
        per_context = _computer_cache.get(tool)
        if per_context is not None:
            per_context.pop(run_context, None)

        initializer = _get_computer_initializer(tool)
        if initializer is not None:
            tool.computer = initializer

        if _resolved.dispose is not None:
            disposers.append((_resolved.dispose, _resolved.computer))

    for dispose, computer in disposers:
        try:
            result = dispose(run_context=run_context, computer=computer)
            if inspect.isawaitable(result):
                await result
        except Exception as exc:
            logger.warning("Failed to dispose computer for run context: %s", exc)

default_tool_error_function

default_tool_error_function(
    ctx: RunContextWrapper[Any], error: Exception
) -> str

The default tool error function, which just returns a generic error message.

Source code in src/agents/tool.py
def default_tool_error_function(ctx: RunContextWrapper[Any], error: Exception) -> str:
    """The default tool error function, which just returns a generic error message."""
    json_decode_error = _extract_tool_argument_json_error(error)
    if json_decode_error is not None:
        return (
            "An error occurred while parsing tool arguments. "
            "Please try again with valid JSON. "
            f"Error: {json_decode_error}"
        )
    return f"An error occurred while running the tool. Please try again. Error: {str(error)}"

default_tool_timeout_error_message

default_tool_timeout_error_message(
    *, tool_name: str, timeout_seconds: float
) -> str

Build the default message returned to the model when a tool times out.

Source code in src/agents/tool.py
def default_tool_timeout_error_message(*, tool_name: str, timeout_seconds: float) -> str:
    """Build the default message returned to the model when a tool times out."""
    return f"Tool '{tool_name}' timed out after {timeout_seconds:g} seconds."

invoke_function_tool async

invoke_function_tool(
    *,
    function_tool: FunctionTool,
    context: ToolContext[Any],
    arguments: str,
) -> Any

Invoke a function tool, enforcing timeout configuration when provided.

Source code in src/agents/tool.py
async def invoke_function_tool(
    *,
    function_tool: FunctionTool,
    context: ToolContext[Any],
    arguments: str,
) -> Any:
    """Invoke a function tool, enforcing timeout configuration when provided."""
    timeout_seconds = function_tool.timeout_seconds
    if timeout_seconds is None:
        return await function_tool.on_invoke_tool(context, arguments)

    tool_task: asyncio.Future[Any] = asyncio.ensure_future(
        function_tool.on_invoke_tool(context, arguments)
    )
    try:
        return await asyncio.wait_for(tool_task, timeout=timeout_seconds)
    except asyncio.TimeoutError as exc:
        if tool_task.done() and not tool_task.cancelled():
            tool_exception = tool_task.exception()
            if tool_exception is None:
                return tool_task.result()
            raise tool_exception from None

        timeout_error = ToolTimeoutError(
            tool_name=function_tool.name,
            timeout_seconds=timeout_seconds,
        )
        if function_tool.timeout_behavior == "raise_exception":
            raise timeout_error from exc

        timeout_error_function = function_tool.timeout_error_function
        if timeout_error_function is None:
            return default_tool_timeout_error_message(
                tool_name=function_tool.name,
                timeout_seconds=timeout_seconds,
            )

        timeout_result = timeout_error_function(context, timeout_error)
        if inspect.isawaitable(timeout_result):
            return await timeout_result
        return timeout_result
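The timeout path above is plain asyncio: the tool coroutine is wrapped in a task and awaited under asyncio.wait_for, and on timeout the helper either raises or substitutes a model-visible message. A stripped-down, stdlib-only sketch of the "error_as_result" behavior (helper names are illustrative):

```python
import asyncio


async def invoke_with_timeout(coro_fn, timeout_seconds: float,
                              timeout_message: str = "Tool timed out."):
    """Run a tool coroutine; on timeout return a message instead of raising
    (the "error_as_result" behavior described above)."""
    task = asyncio.ensure_future(coro_fn())
    try:
        return await asyncio.wait_for(task, timeout=timeout_seconds)
    except asyncio.TimeoutError:
        # wait_for cancels the task for us; surface a model-visible string.
        return timeout_message


async def fast_tool():
    return "done"


async def slow_tool():
    await asyncio.sleep(10)
    return "done"


print(asyncio.run(invoke_with_timeout(fast_tool, 1.0)))   # done
print(asyncio.run(invoke_with_timeout(slow_tool, 0.05)))  # Tool timed out.
```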

function_tool

function_tool(
    func: ToolFunction[...],
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
    is_enabled: bool
    | Callable[
        [RunContextWrapper[Any], AgentBase],
        MaybeAwaitable[bool],
    ] = True,
    needs_approval: bool
    | Callable[
        [RunContextWrapper[Any], dict[str, Any], str],
        Awaitable[bool],
    ] = False,
    tool_input_guardrails: list[ToolInputGuardrail[Any]]
    | None = None,
    tool_output_guardrails: list[ToolOutputGuardrail[Any]]
    | None = None,
    timeout: float | None = None,
    timeout_behavior: ToolTimeoutBehavior = "error_as_result",
    timeout_error_function: ToolErrorFunction | None = None,
) -> FunctionTool
function_tool(
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
    is_enabled: bool
    | Callable[
        [RunContextWrapper[Any], AgentBase],
        MaybeAwaitable[bool],
    ] = True,
    needs_approval: bool
    | Callable[
        [RunContextWrapper[Any], dict[str, Any], str],
        Awaitable[bool],
    ] = False,
    tool_input_guardrails: list[ToolInputGuardrail[Any]]
    | None = None,
    tool_output_guardrails: list[ToolOutputGuardrail[Any]]
    | None = None,
    timeout: float | None = None,
    timeout_behavior: ToolTimeoutBehavior = "error_as_result",
    timeout_error_function: ToolErrorFunction | None = None,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction
    | None
    | object = _UNSET_FAILURE_ERROR_FUNCTION,
    strict_mode: bool = True,
    is_enabled: bool
    | Callable[
        [RunContextWrapper[Any], AgentBase],
        MaybeAwaitable[bool],
    ] = True,
    needs_approval: bool
    | Callable[
        [RunContextWrapper[Any], dict[str, Any], str],
        Awaitable[bool],
    ] = False,
    tool_input_guardrails: list[ToolInputGuardrail[Any]]
    | None = None,
    tool_output_guardrails: list[ToolOutputGuardrail[Any]]
    | None = None,
    timeout: float | None = None,
    timeout_behavior: ToolTimeoutBehavior = "error_as_result",
    timeout_error_function: ToolErrorFunction | None = None,
) -> (
    FunctionTool
    | Callable[[ToolFunction[...]], FunctionTool]
)

Decorator to create a FunctionTool from a function. By default, we will:

1. Parse the function signature to create a JSON schema for the tool's parameters.
2. Use the function's docstring to populate the tool's description.
3. Use the function's docstring to populate argument descriptions.

The docstring style is detected automatically, but you can override it.

If the function takes a RunContextWrapper as the first argument, it must match the context type of the agent that uses the tool.

Parameters:

func (ToolFunction[...] | None, default None): The function to wrap.

name_override (str | None, default None): If provided, use this name for the tool instead of the function's name.

description_override (str | None, default None): If provided, use this description for the tool instead of the function's docstring.

docstring_style (DocstringStyle | None, default None): If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style.

use_docstring_info (bool, default True): If True, use the function's docstring to populate the tool's description and argument descriptions.

failure_error_function (ToolErrorFunction | None | object, default _UNSET_FAILURE_ERROR_FUNCTION): If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, no error message is sent and an exception is raised instead.

strict_mode (bool, default True): Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, non-strict JSON schemas are allowed: for example, a parameter with a default value becomes optional, and additional properties are allowed. See https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas for details.

is_enabled (bool | Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]], default True): Whether the tool is enabled. Can be a bool or a callable that takes the run context and agent and returns whether the tool is enabled. Disabled tools are hidden from the LLM at runtime.

needs_approval (bool | Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]], default False): Whether the tool needs approval before execution. If True, the run will be interrupted and the tool call will need to be approved using RunState.approve() or rejected using RunState.reject() before continuing. Can be a bool (always/never needs approval) or a function that takes (run_context, tool_parameters, call_id) and returns whether this specific call needs approval.

tool_input_guardrails (list[ToolInputGuardrail[Any]] | None, default None): Optional list of guardrails to run before invoking the tool.

tool_output_guardrails (list[ToolOutputGuardrail[Any]] | None, default None): Optional list of guardrails to run after the tool returns.

timeout (float | None, default None): Optional timeout in seconds for each tool call.

timeout_behavior (ToolTimeoutBehavior, default 'error_as_result'): Timeout handling mode. "error_as_result" returns a model-visible message, while "raise_exception" raises ToolTimeoutError and fails the run.

timeout_error_function (ToolErrorFunction | None, default None): Optional formatter used for timeout messages when timeout_behavior="error_as_result".
Source code in src/agents/tool.py
def function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None | object = _UNSET_FAILURE_ERROR_FUNCTION,
    strict_mode: bool = True,
    is_enabled: bool | Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]] = True,
    needs_approval: bool
    | Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]] = False,
    tool_input_guardrails: list[ToolInputGuardrail[Any]] | None = None,
    tool_output_guardrails: list[ToolOutputGuardrail[Any]] | None = None,
    timeout: float | None = None,
    timeout_behavior: ToolTimeoutBehavior = "error_as_result",
    timeout_error_function: ToolErrorFunction | None = None,
) -> FunctionTool | Callable[[ToolFunction[...]], FunctionTool]:
    """
    Decorator to create a FunctionTool from a function. By default, we will:
    1. Parse the function signature to create a JSON schema for the tool's parameters.
    2. Use the function's docstring to populate the tool's description.
    3. Use the function's docstring to populate argument descriptions.
    The docstring style is detected automatically, but you can override it.

    If the function takes a `RunContextWrapper` as the first argument, it *must* match the
    context type of the agent that uses the tool.

    Args:
        func: The function to wrap.
        name_override: If provided, use this name for the tool instead of the function's name.
        description_override: If provided, use this description for the tool instead of the
            function's docstring.
        docstring_style: If provided, use this style for the tool's docstring. If not provided,
            we will attempt to auto-detect the style.
        use_docstring_info: If True, use the function's docstring to populate the tool's
            description and argument descriptions.
        failure_error_function: If provided, use this function to generate an error message when
            the tool call fails. The error message is sent to the LLM. If you pass None, then no
            error message will be sent and instead an Exception will be raised.
        strict_mode: Whether to enable strict mode for the tool's JSON schema. We *strongly*
            recommend setting this to True, as it increases the likelihood of correct JSON input.
            If False, it allows non-strict JSON schemas. For example, if a parameter has a default
            value, it will be optional, additional properties are allowed, etc. See here for more:
            https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas
        is_enabled: Whether the tool is enabled. Can be a bool or a callable that takes the run
            context and agent and returns whether the tool is enabled. Disabled tools are hidden
            from the LLM at runtime.
        needs_approval: Whether the tool needs approval before execution. If True, the run will
            be interrupted and the tool call will need to be approved using RunState.approve() or
            rejected using RunState.reject() before continuing. Can be a bool (always/never needs
            approval) or a function that takes (run_context, tool_parameters, call_id) and returns
            whether this specific call needs approval.
        tool_input_guardrails: Optional list of guardrails to run before invoking the tool.
        tool_output_guardrails: Optional list of guardrails to run after the tool returns.
        timeout: Optional timeout in seconds for each tool call.
        timeout_behavior: Timeout handling mode. "error_as_result" returns a model-visible message,
            while "raise_exception" raises ToolTimeoutError and fails the run.
        timeout_error_function: Optional formatter used for timeout messages when
            timeout_behavior="error_as_result".
    """

    def _create_function_tool(the_func: ToolFunction[...]) -> FunctionTool:
        is_sync_function_tool = not inspect.iscoroutinefunction(the_func)
        schema = function_schema(
            func=the_func,
            name_override=name_override,
            description_override=description_override,
            docstring_style=docstring_style,
            use_docstring_info=use_docstring_info,
            strict_json_schema=strict_mode,
        )

        async def _on_invoke_tool_impl(ctx: ToolContext[Any], input: str) -> Any:
            try:
                json_data: dict[str, Any] = json.loads(input) if input else {}
            except Exception as e:
                if _debug.DONT_LOG_TOOL_DATA:
                    logger.debug(f"Invalid JSON input for tool {schema.name}")
                else:
                    logger.debug(f"Invalid JSON input for tool {schema.name}: {input}")
                raise ModelBehaviorError(
                    f"Invalid JSON input for tool {schema.name}: {input}"
                ) from e

            if _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Invoking tool {schema.name}")
            else:
                logger.debug(f"Invoking tool {schema.name} with input {input}")

            try:
                parsed = (
                    schema.params_pydantic_model(**json_data)
                    if json_data
                    else schema.params_pydantic_model()
                )
            except ValidationError as e:
                raise ModelBehaviorError(f"Invalid JSON input for tool {schema.name}: {e}") from e

            args, kwargs_dict = schema.to_call_args(parsed)

            if not _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Tool call args: {args}, kwargs: {kwargs_dict}")

            if not is_sync_function_tool:
                if schema.takes_context:
                    result = await the_func(ctx, *args, **kwargs_dict)
                else:
                    result = await the_func(*args, **kwargs_dict)
            else:
                if schema.takes_context:
                    result = await asyncio.to_thread(the_func, ctx, *args, **kwargs_dict)
                else:
                    result = await asyncio.to_thread(the_func, *args, **kwargs_dict)

            if _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Tool {schema.name} completed.")
            else:
                logger.debug(f"Tool {schema.name} returned {result}")

            return result

        async def _on_invoke_tool(ctx: ToolContext[Any], input: str) -> Any:
            try:
                return await _on_invoke_tool_impl(ctx, input)
            except Exception as e:
                resolved_failure_error_function: ToolErrorFunction | None
                if failure_error_function is _UNSET_FAILURE_ERROR_FUNCTION:
                    resolved_failure_error_function = default_tool_error_function
                else:
                    resolved_failure_error_function = cast(
                        Optional[ToolErrorFunction], failure_error_function
                    )

                if resolved_failure_error_function is None:
                    raise

                result = resolved_failure_error_function(ctx, e)
                if inspect.isawaitable(result):
                    return await result

                json_decode_error = _extract_tool_argument_json_error(e)
                if json_decode_error is not None:
                    span_error_message = "Error running tool"
                    span_error_detail = str(json_decode_error)
                else:
                    span_error_message = "Error running tool (non-fatal)"
                    span_error_detail = str(e)

                _error_tracing.attach_error_to_current_span(
                    SpanError(
                        message=span_error_message,
                        data={
                            "tool_name": schema.name,
                            "error": span_error_detail,
                        },
                    )
                )
                if _debug.DONT_LOG_TOOL_DATA:
                    logger.debug(f"Tool {schema.name} failed")
                else:
                    logger.error(
                        f"Tool {schema.name} failed: {input} {e}",
                        exc_info=e,
                    )
                return result

        if is_sync_function_tool:
            setattr(_on_invoke_tool, _SYNC_FUNCTION_TOOL_MARKER, True)

        return FunctionTool(
            name=schema.name,
            description=schema.description or "",
            params_json_schema=schema.params_json_schema,
            on_invoke_tool=_on_invoke_tool,
            strict_json_schema=strict_mode,
            is_enabled=is_enabled,
            needs_approval=needs_approval,
            tool_input_guardrails=tool_input_guardrails,
            tool_output_guardrails=tool_output_guardrails,
            timeout_seconds=timeout,
            timeout_behavior=timeout_behavior,
            timeout_error_function=timeout_error_function,
        )

    # If func is actually a callable, we were used as @function_tool with no parentheses
    if callable(func):
        return _create_function_tool(func)

    # Otherwise, we were used as @function_tool(...), so return a decorator
    def decorator(real_func: ToolFunction[...]) -> FunctionTool:
        return _create_function_tool(real_func)

    return decorator
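The closing branch is the standard trick for supporting both @function_tool and @function_tool(...): if the first positional argument is already a callable, the decorator was applied bare and the function is wrapped directly; otherwise a real decorator is returned. A stdlib-only sketch of that dual-use pattern (names here are illustrative, not the SDK's):

```python
def tool(func=None, *, name_override=None):
    """Illustrative dual-use decorator: works with and without parentheses."""

    def create(the_func):
        # Tag the function; a real implementation would build a FunctionTool.
        the_func.tool_name = name_override or the_func.__name__
        return the_func

    if callable(func):        # used as @tool with no parentheses
        return create(func)
    return create             # used as @tool(...): return a decorator


@tool
def add(a, b):
    return a + b


@tool(name_override="multiply")
def mul(a, b):
    return a * b


print(add.tool_name, mul.tool_name)  # add multiply
```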