Tools
MCPToolApprovalFunction
module-attribute
MCPToolApprovalFunction = Callable[
    [MCPToolApprovalRequest],
    MaybeAwaitable[MCPToolApprovalFunctionResult],
]
A function that approves or rejects a tool call.
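An approval function can be sync or async and returns an MCPToolApprovalFunctionResult. A minimal sketch, assuming these names are re-exported from the top-level agents package and that the request exposes the tool name as request.data.name (verify the exact fields on MCPToolApprovalRequest):

```python
from agents import MCPToolApprovalFunctionResult, MCPToolApprovalRequest


def approve_safe_tools(
    request: MCPToolApprovalRequest,
) -> MCPToolApprovalFunctionResult:
    # Approve read-only tools, reject everything else. The `.data.name`
    # field path is an assumption; inspect MCPToolApprovalRequest.
    if request.data.name.startswith("read_"):
        return {"approve": True}
    return {"approve": False, "reason": "Only read-only tools are allowed."}
```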
LocalShellExecutor
module-attribute
LocalShellExecutor = Callable[
    [LocalShellCommandRequest], MaybeAwaitable[str]
]
A function that executes a command on a shell.
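A simple executor might shell out with subprocess. A sketch, assuming the command arrives as a list of argv strings at request.data.action.command (verify the field names on LocalShellCommandRequest):

```python
import subprocess

from agents import LocalShellCommandRequest


def run_command(request: LocalShellCommandRequest) -> str:
    # `request.data.action.command` is an assumed field path; check
    # LocalShellCommandRequest in your SDK version before relying on it.
    result = subprocess.run(
        request.data.action.command,
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.stdout + result.stderr
```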
Tool
module-attribute
Tool = Union[
    FunctionTool,
    FileSearchTool,
    WebSearchTool,
    ComputerTool,
    HostedMCPTool,
    LocalShellTool,
    ImageGenerationTool,
    CodeInterpreterTool,
]
A tool that can be used in an agent.
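Any mix of these tool types can be passed to an agent's tools list, for example:

```python
from datetime import datetime, timezone

from agents import Agent, WebSearchTool, function_tool


@function_tool
def current_time() -> str:
    """Return the current UTC time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()


agent = Agent(
    name="Assistant",
    tools=[current_time, WebSearchTool()],  # a FunctionTool plus a hosted tool
)
```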
FunctionToolResult
dataclass
Source code in src/agents/tool.py
run_item
instance-attribute
run_item: RunItem
The run item that was produced as a result of the tool call.
FunctionTool
dataclass
A tool that wraps a function. In most cases, you should use the function_tool helpers to create a FunctionTool, as they let you easily wrap a Python function.
Source code in src/agents/tool.py
name
instance-attribute
The name of the tool, as shown to the LLM. Generally the name of the function.
params_json_schema
instance-attribute
The JSON schema for the tool's parameters.
on_invoke_tool
instance-attribute
on_invoke_tool: Callable[
    [RunContextWrapper[Any], str], Awaitable[Any]
]
A function that invokes the tool with the given context and parameters. The params passed are:

1. The tool run context.
2. The arguments from the LLM, as a JSON string.

You must return a string representation of the tool output, or something we can call str() on. In case of errors, you can either raise an Exception (which will cause the run to fail) or return a string error message (which will be sent back to the LLM).
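Putting these fields together, a FunctionTool can be built by hand when you need full control over the schema. A sketch; in most cases prefer the function_tool decorator documented below:

```python
import json
from typing import Any

from agents import FunctionTool, RunContextWrapper


async def add_numbers(ctx: RunContextWrapper[Any], args: str) -> str:
    # The LLM's arguments arrive as a JSON string matching params_json_schema.
    parsed = json.loads(args)
    return str(parsed["a"] + parsed["b"])


adder = FunctionTool(
    name="add_numbers",
    description="Add two integers and return the sum.",
    params_json_schema={
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
        "additionalProperties": False,
    },
    on_invoke_tool=add_numbers,
)
```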
FileSearchTool
dataclass
A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
vector_store_ids
instance-attribute
The IDs of the vector stores to search.
max_num_results
class-attribute
instance-attribute
The maximum number of results to return.
include_search_results
class-attribute
instance-attribute
Whether to include the search results in the output produced by the LLM.
ranking_options
class-attribute
instance-attribute
Ranking options for search.
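For example (vs_123 is a hypothetical vector store ID):

```python
from agents import FileSearchTool

file_search = FileSearchTool(
    vector_store_ids=["vs_123"],  # hypothetical vector store ID
    max_num_results=5,
    include_search_results=True,
)
```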
WebSearchTool
dataclass
A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.
Source code in src/agents/tool.py
user_location
class-attribute
instance-attribute
Optional location for the search. Lets you customize results to be relevant to a location.
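The location can be passed as a dict mirroring the Responses API UserLocation shape. A sketch; the exact keys come from the underlying API:

```python
from agents import WebSearchTool

web_search = WebSearchTool(
    user_location={"type": "approximate", "city": "London"},
)
```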
ComputerTool
dataclass
A hosted tool that lets the LLM control a computer.
Source code in src/agents/tool.py
MCPToolApprovalRequest
dataclass
A request to approve a tool call.
Source code in src/agents/tool.py
MCPToolApprovalFunctionResult
Bases: TypedDict
The result of an MCP tool approval function.
Source code in src/agents/tool.py
HostedMCPTool
dataclass
A tool that allows the LLM to use a remote MCP server. The LLM will automatically list and call tools, without requiring a round trip back to your code.
If you want to run MCP servers locally via stdio, in a VPC or other non-publicly-accessible environment, or you just prefer to run tool calls locally, you can instead use the servers in agents.mcp and pass Agent(mcp_servers=[...]) to the agent.
Source code in src/agents/tool.py
tool_config
instance-attribute
The MCP tool config, which includes the server URL and other settings.
on_approval_request
class-attribute
instance-attribute
on_approval_request: MCPToolApprovalFunction | None = None
An optional function that will be called if approval is requested for an MCP tool. If not provided, you will need to manually add approvals/rejections to the input and call Runner.run(...) again.
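For example, reusing the approval function sketched under MCPToolApprovalFunction above (the server label and URL are hypothetical, and the tool_config keys mirror the Responses API hosted MCP tool param):

```python
from agents import HostedMCPTool

mcp_tool = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "docs",                   # hypothetical label
        "server_url": "https://example.com/mcp",  # hypothetical URL
        "require_approval": "always",
    },
    on_approval_request=approve_safe_tools,  # approval function sketched above
)
```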
CodeInterpreterTool
dataclass
A tool that allows the LLM to execute code in a sandboxed environment.
Source code in src/agents/tool.py
ImageGenerationTool
dataclass
A tool that allows the LLM to generate images.
Source code in src/agents/tool.py
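Both of these hosted tools take a tool_config whose shape mirrors the corresponding Responses API tool param. A sketch; verify the exact config keys against the API reference:

```python
from agents import CodeInterpreterTool, ImageGenerationTool

code_tool = CodeInterpreterTool(
    tool_config={"type": "code_interpreter", "container": {"type": "auto"}},
)
image_tool = ImageGenerationTool(
    tool_config={"type": "image_generation"},
)
```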
LocalShellCommandRequest
dataclass
A request to execute a command on a shell.
Source code in src/agents/tool.py
LocalShellTool
dataclass
A tool that allows the LLM to execute commands on a shell.
Source code in src/agents/tool.py
executor
instance-attribute
executor: LocalShellExecutor
A function that executes a command on a shell.
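For example, wiring in the executor sketched under LocalShellExecutor above:

```python
from agents import Agent, LocalShellTool

shell_agent = Agent(
    name="Shell agent",
    tools=[LocalShellTool(executor=run_command)],  # run_command defined above
)
```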
default_tool_error_function
default_tool_error_function(
    ctx: RunContextWrapper[Any], error: Exception
) -> str
The default tool error function, which just returns a generic error message.
Source code in src/agents/tool.py
function_tool
function_tool(
    func: ToolFunction[...],
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
) -> FunctionTool
function_tool(
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction
    | None = default_tool_error_function,
    strict_mode: bool = True,
) -> (
    FunctionTool
    | Callable[[ToolFunction[...]], FunctionTool]
)
Decorator to create a FunctionTool from a function. By default, we will:

1. Parse the function signature to create a JSON schema for the tool's parameters.
2. Use the function's docstring to populate the tool's description.
3. Use the function's docstring to populate argument descriptions. The docstring style is detected automatically, but you can override it.

If the function takes a RunContextWrapper as the first argument, it must match the context type of the agent that uses the tool.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| func | ToolFunction[...] \| None | The function to wrap. | None |
| name_override | str \| None | If provided, use this name for the tool instead of the function's name. | None |
| description_override | str \| None | If provided, use this description for the tool instead of the function's docstring. | None |
| docstring_style | DocstringStyle \| None | If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style. | None |
| use_docstring_info | bool | If True, use the function's docstring to populate the tool's description and argument descriptions. | True |
| failure_error_function | ToolErrorFunction \| None | If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, then no error message will be sent and instead an Exception will be raised. | default_tool_error_function |
| strict_mode | bool | Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, it allows non-strict JSON schemas. For example, if a parameter has a default value, it will be optional, additional properties are allowed, etc. See https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas for more. | True |
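For example, the decorator can be used bare or with arguments:

```python
from agents import function_tool


@function_tool
def get_weather(city: str) -> str:
    """Fetch the weather for a city.

    Args:
        city: The city to fetch the weather for.
    """
    return f"The weather in {city} is sunny."  # stub implementation


@function_tool(name_override="fetch_secret_number")
def secret_number() -> int:
    """Return a number known only to the tool."""
    return 42
```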
Source code in src/agents/tool.py