Agents
ToolsToFinalOutputFunction
module-attribute
ToolsToFinalOutputFunction: TypeAlias = Callable[
[RunContextWrapper[TContext], list[FunctionToolResult]],
MaybeAwaitable[ToolsToFinalOutputResult],
]
A function that takes a run context and a list of tool results, and returns a
ToolsToFinalOutputResult.
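For illustration, a decision function with this shape might look like the following. The SDK's `FunctionToolResult` and `ToolsToFinalOutputResult` types are mocked here as plain dataclasses, and the field names (`tool_name`, `output`, `final_output`) are assumptions for the sketch, not the SDK's exact definitions:

```python
from __future__ import annotations

from dataclasses import dataclass


# Stand-ins for the SDK's FunctionToolResult / ToolsToFinalOutputResult
# (illustrative only; field names are assumptions).
@dataclass
class FakeToolResult:
    tool_name: str
    output: str


@dataclass
class FakeFinalOutputResult:
    is_final_output: bool
    final_output: str | None = None


def stop_after_lookup(context, tool_results: list[FakeToolResult]) -> FakeFinalOutputResult:
    """If the (hypothetical) 'lookup' tool ran, treat its output as the final answer."""
    for result in tool_results:
        if result.tool_name == "lookup":
            return FakeFinalOutputResult(is_final_output=True, final_output=result.output)
    # Otherwise, signal that the LLM should run again with the tool outputs.
    return FakeFinalOutputResult(is_final_output=False)
```

The real function receives a `RunContextWrapper` as its first argument and may be sync or async (`MaybeAwaitable`).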
ToolsToFinalOutputResult
dataclass
Source code in src/agents/agent.py
is_final_output
instance-attribute
Whether this is the final output. If False, the LLM will run again and receive the tool call output.
AgentToolStreamEvent
Bases: TypedDict
Streaming event emitted when an agent is invoked as a tool.
Source code in src/agents/agent.py
StopAtTools
Bases: TypedDict
Source code in src/agents/agent.py
MCPConfig
Bases: TypedDict
Configuration for MCP servers.
Source code in src/agents/agent.py
convert_schemas_to_strict
instance-attribute
If True, we will attempt to convert the MCP schemas to strict-mode schemas. This is a best-effort conversion, so some schemas may not be convertible. Defaults to False.
AgentBase
dataclass
Bases: Generic[TContext]
Base class for Agent and RealtimeAgent.
Source code in src/agents/agent.py
handoff_description
class-attribute
instance-attribute
A description of the agent. This is used when the agent is used as a handoff, so that an LLM knows what it does and when to invoke it.
tools
class-attribute
instance-attribute
tools: list[Tool] = field(default_factory=list)
A list of tools that the agent can use.
mcp_servers
class-attribute
instance-attribute
mcp_servers: list[MCPServer] = field(default_factory=list)
A list of Model Context Protocol servers that the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.
NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call
server.connect() before passing it to the agent, and server.cleanup() when the server is no
longer needed. Consider using MCPServerManager from agents.mcp to keep connect/cleanup
in the same task.
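The connect/cleanup contract described above can be sketched with a stand-in server class. This shows only the lifecycle pattern; the real server classes live in agents.mcp and have richer APIs:

```python
import asyncio


class StubMCPServer:
    """Stand-in with the connect()/cleanup() lifecycle described above."""

    def __init__(self) -> None:
        self.connected = False

    async def connect(self) -> None:
        self.connected = True

    async def cleanup(self) -> None:
        self.connected = False


async def main() -> None:
    server = StubMCPServer()
    await server.connect()       # connect before passing the server to the agent
    try:
        assert server.connected  # ... run the agent with mcp_servers=[server] ...
    finally:
        await server.cleanup()   # always clean up, even if the run fails


asyncio.run(main())
```

Keeping connect and cleanup in the same try/finally (or using MCPServerManager, as noted above) avoids leaking server processes when a run raises.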
mcp_config
class-attribute
instance-attribute
Configuration for MCP servers.
get_mcp_tools
async
get_mcp_tools(
run_context: RunContextWrapper[TContext],
) -> list[Tool]
Fetches the available tools from the MCP servers.
Source code in src/agents/agent.py
get_all_tools
async
get_all_tools(
run_context: RunContextWrapper[TContext],
) -> list[Tool]
All agent tools, including MCP tools and function tools.
Source code in src/agents/agent.py
Agent
dataclass
Bases: AgentBase, Generic[TContext]
An agent is an AI model configured with instructions, tools, guardrails, handoffs and more.
We strongly recommend passing instructions, which is the "system prompt" for the agent. In
addition, you can pass handoff_description, which is a human-readable description of the
agent, used when the agent is used inside tools/handoffs.
Agents are generic on the context type. The context is a (mutable) object you create. It is passed to tool functions, handoffs, guardrails, etc.
See AgentBase for base parameters that are shared with RealtimeAgents.
Source code in src/agents/agent.py
instructions
class-attribute
instance-attribute
instructions: (
str
| Callable[
[RunContextWrapper[TContext], Agent[TContext]],
MaybeAwaitable[str],
]
| None
) = None
The instructions for the agent. Will be used as the "system prompt" when this agent is invoked. Describes what the agent should do, and how it responds.
Can either be a string, or a function that dynamically generates instructions for the agent. If you provide a function, it will be called with the context and the agent instance. It must return a string.
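A dynamic instructions function with this call shape might look as follows. The context wrapper and agent are mocked as dataclasses here; only the signature mirrors the docs, and the field names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class FakeContext:
    user_name: str


@dataclass
class FakeWrapper:
    # Stand-in for RunContextWrapper: the user's context object is on .context
    context: FakeContext


@dataclass
class FakeAgent:
    name: str


def dynamic_instructions(wrapper: FakeWrapper, agent: FakeAgent) -> str:
    # Called with the run context wrapper and the agent instance;
    # must return the instruction string.
    return f"You are {agent.name}. Address the user as {wrapper.context.user_name}."
```

The real function may also be async, since the annotation allows `MaybeAwaitable[str]`.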
prompt
class-attribute
instance-attribute
prompt: Prompt | DynamicPromptFunction | None = None
A prompt object (or a function that returns a Prompt). Prompts allow you to dynamically configure the instructions, tools and other config for an agent outside of your code. Only usable with OpenAI models, using the Responses API.
handoffs
class-attribute
instance-attribute
Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs, and the agent can choose to delegate to them if relevant. Allows for separation of concerns and modularity.
model
class-attribute
instance-attribute
model: str | Model | None = None
The model implementation to use when invoking the LLM.
By default, if not set, the agent will use the default model configured in
agents.models.get_default_model() (currently "gpt-4.1").
model_settings
class-attribute
instance-attribute
model_settings: ModelSettings = field(
default_factory=get_default_model_settings
)
Configures model-specific tuning parameters (e.g. temperature, top_p).
input_guardrails
class-attribute
instance-attribute
input_guardrails: list[InputGuardrail[TContext]] = field(
default_factory=list
)
A list of checks that run in parallel to the agent's execution, before generating a response. Runs only if the agent is the first agent in the chain.
output_guardrails
class-attribute
instance-attribute
output_guardrails: list[OutputGuardrail[TContext]] = field(
default_factory=list
)
A list of checks that run on the final output of the agent, after generating a response. Runs only if the agent produces a final output.
output_type
class-attribute
instance-attribute
output_type: type[Any] | AgentOutputSchemaBase | None = None
The type of the output object. If not provided, the output will be str. In most cases,
you should pass a regular Python type (e.g. a dataclass, Pydantic model, TypedDict, etc).
You can customize this in two ways:
1. If you want non-strict schemas, pass AgentOutputSchema(MyClass, strict_json_schema=False).
2. If you want to use a custom JSON schema (i.e. without using the SDK's automatic schema creation), subclass and pass an AgentOutputSchemaBase subclass.
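To illustrate what a structured output_type gives you, here is a stdlib-only sketch of parsing a model's JSON reply into a typed object. The SDK performs this validation for you when you set output_type; WeatherReport is a made-up example type:

```python
import json
from dataclasses import dataclass


@dataclass
class WeatherReport:
    city: str
    temperature_c: float


# With output_type=WeatherReport, the SDK would validate the model's JSON
# against this type's schema; here we parse by hand to show the end result.
raw = '{"city": "Oslo", "temperature_c": -3.5}'
report = WeatherReport(**json.loads(raw))
```

The final output of the run is then a WeatherReport instance rather than a plain string.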
hooks
class-attribute
instance-attribute
hooks: AgentHooks[TContext] | None = None
A class that receives callbacks on various lifecycle events for this agent.
tool_use_behavior
class-attribute
instance-attribute
tool_use_behavior: (
Literal["run_llm_again", "stop_on_first_tool"]
| StopAtTools
| ToolsToFinalOutputFunction
) = "run_llm_again"
This lets you configure how tool use is handled.
- "run_llm_again": The default behavior. Tools are run, and then the LLM receives the results
and gets to respond.
- "stop_on_first_tool": The output from the first tool call is treated as the final result.
In other words, it isn’t sent back to the LLM for further processing but is used directly
as the final output.
- A StopAtTools object: The agent will stop running if any of the tools listed in
stop_at_tool_names is called.
The final output will be the output of the first matching tool call.
The LLM does not process the result of the tool call.
- A function: If you pass a function, it will be called with the run context and the list of
tool results. It must return a ToolsToFinalOutputResult, which determines whether the tool
calls result in a final output.
NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search, web search, etc. are always processed by the LLM.
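The first three options above are plain values; in particular, a StopAtTools configuration is just a dict. The tool names below are invented, and the small helper only mimics the stop-at semantics for illustration:

```python
# Default: tool results go back to the LLM for another turn.
behavior_default = "run_llm_again"

# The first tool call's output becomes the final output.
behavior_first_tool = "stop_on_first_tool"

# Stop as soon as one of these (hypothetical) tools is called.
behavior_stop_at = {"stop_at_tool_names": ["save_report", "send_email"]}


def stops(tool_name: str, config: dict) -> bool:
    """Mimics StopAtTools semantics: stop if the called tool is listed."""
    return tool_name in config["stop_at_tool_names"]
```

Any of these values (or a ToolsToFinalOutputFunction) can be assigned to tool_use_behavior on the agent.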
reset_tool_choice
class-attribute
instance-attribute
Whether to reset the tool choice to the default value after a tool has been called. Defaults to True. This ensures that the agent doesn't enter an infinite loop of tool usage.
handoff_description
class-attribute
instance-attribute
A description of the agent. This is used when the agent is used as a handoff, so that an LLM knows what it does and when to invoke it.
tools
class-attribute
instance-attribute
tools: list[Tool] = field(default_factory=list)
A list of tools that the agent can use.
mcp_servers
class-attribute
instance-attribute
mcp_servers: list[MCPServer] = field(default_factory=list)
A list of Model Context Protocol servers that the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.
NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call
server.connect() before passing it to the agent, and server.cleanup() when the server is no
longer needed. Consider using MCPServerManager from agents.mcp to keep connect/cleanup
in the same task.
mcp_config
class-attribute
instance-attribute
Configuration for MCP servers.
clone
clone(**kwargs: Any) -> Agent[TContext]
Make a copy of the agent, with the given arguments changed.
Notes:
- Uses dataclasses.replace, which performs a shallow copy.
- Mutable attributes like tools and handoffs are shallow-copied:
new list objects are created only if overridden, but their contents
(tool functions and handoff objects) are shared with the original.
- To modify these independently, pass new lists when calling clone().
Example:
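Since clone() is built on dataclasses.replace, the shallow-copy behaviour described in the notes can be demonstrated with a plain dataclass (a stand-in, not the Agent class itself):

```python
from dataclasses import dataclass, field, replace


@dataclass
class FakeAgent:
    name: str
    tools: list = field(default_factory=list)


original = FakeAgent(name="helper", tools=["search"])

# Not overridden: the clone shares the same tools list object.
shared = replace(original, name="helper-2")
shared.tools.append("calc")  # also visible on `original`!

# Overridden: pass a new list to get an independent copy.
independent = replace(original, name="helper-3", tools=list(original.tools))
```

The same applies to handoffs: pass a fresh list to clone() whenever the copy must be mutated independently.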
Source code in src/agents/agent.py
as_tool
as_tool(
tool_name: str | None,
tool_description: str | None,
custom_output_extractor: Callable[
[RunResult | RunResultStreaming], Awaitable[str]
]
| None = None,
is_enabled: bool
| Callable[
[RunContextWrapper[Any], AgentBase[Any]],
MaybeAwaitable[bool],
] = True,
on_stream: Callable[
[AgentToolStreamEvent], MaybeAwaitable[None]
]
| None = None,
run_config: RunConfig | None = None,
max_turns: int | None = None,
hooks: RunHooks[TContext] | None = None,
previous_response_id: str | None = None,
conversation_id: str | None = None,
session: Session | None = None,
failure_error_function: ToolErrorFunction
| None = default_tool_error_function,
needs_approval: bool
| Callable[
[RunContextWrapper[Any], dict[str, Any], str],
Awaitable[bool],
] = False,
parameters: type[Any] | None = None,
input_builder: StructuredToolInputBuilder | None = None,
include_input_schema: bool = False,
) -> FunctionTool
Transform this agent into a tool, callable by other agents.
This is different from handoffs in two ways:
1. In handoffs, the new agent receives the conversation history. In this tool, the new agent receives generated input.
2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is called as a tool, and the conversation is continued by the original agent.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tool_name | str \| None | The name of the tool. If not provided, the agent's name will be used. | required |
| tool_description | str \| None | The description of the tool, which should indicate what it does and when to use it. | required |
| custom_output_extractor | Callable[[RunResult \| RunResultStreaming], Awaitable[str]] \| None | A function that extracts the output from the agent. If not provided, the last message from the agent will be used. | None |
| is_enabled | bool \| Callable[[RunContextWrapper[Any], AgentBase[Any]], MaybeAwaitable[bool]] | Whether the tool is enabled. Can be a bool or a callable that takes the run context and agent and returns whether the tool is enabled. Disabled tools are hidden from the LLM at runtime. | True |
| on_stream | Callable[[AgentToolStreamEvent], MaybeAwaitable[None]] \| None | Optional callback (sync or async) to receive streaming events from the nested agent run. The callback receives an AgentToolStreamEvent. | None |
| failure_error_function | ToolErrorFunction \| None | If provided, generates an error message when the tool (agent) run fails. The message is sent to the LLM. If None, the exception is raised instead. | default_tool_error_function |
| needs_approval | bool \| Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]] | A bool, or a callable that decides whether this agent tool should pause for approval. | False |
| parameters | type[Any] \| None | Structured input type for the tool arguments (a dataclass or Pydantic model). | None |
| input_builder | StructuredToolInputBuilder \| None | Optional function to build the nested agent input from the structured data. | None |
| include_input_schema | bool | Whether to include the full JSON schema in the structured input. | False |
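Of the parameters above, custom_output_extractor is an async callable; a minimal sketch with a stand-in result object follows (the real argument is a RunResult or RunResultStreaming, and the extraction logic here is invented):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class FakeRunResult:
    final_output: str  # stand-in for the run result's final output


async def extract_clean(result: FakeRunResult) -> str:
    """Hypothetical extractor: post-process the nested agent's output."""
    return result.final_output.strip()


text = asyncio.run(extract_clean(FakeRunResult("  all done  ")))
```

Passing such a function to as_tool replaces the default behaviour of using the agent's last message as the tool output.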
Source code in src/agents/agent.py
get_prompt
async
get_prompt(
run_context: RunContextWrapper[TContext],
) -> ResponsePromptParam | None
get_mcp_tools
async
get_mcp_tools(
run_context: RunContextWrapper[TContext],
) -> list[Tool]
Fetches the available tools from the MCP servers.
Source code in src/agents/agent.py
get_all_tools
async
get_all_tools(
run_context: RunContextWrapper[TContext],
) -> list[Tool]
All agent tools, including MCP tools and function tools.