Agents
ToolsToFinalOutputFunction
module-attribute
ToolsToFinalOutputFunction: TypeAlias = Callable[
[RunContextWrapper[TContext], list[FunctionToolResult]],
MaybeAwaitable[ToolsToFinalOutputResult],
]
A function that takes a run context and a list of tool results, and returns a ToolsToFinalOutputResult.
ToolsToFinalOutputResult
dataclass
is_final_output
instance-attribute
Whether this is the final output. If False, the LLM will run again and receive the tool call output.
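For illustration, a minimal sketch of a custom ToolsToFinalOutputFunction. It assumes ToolsToFinalOutputResult also carries a final_output field holding the value to use, and that FunctionToolResult exposes the tool call's output as .output:

from agents import FunctionToolResult, RunContextWrapper, ToolsToFinalOutputResult

async def first_result_is_final(
    context: RunContextWrapper, tool_results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
    # Use the first tool's output as the final output, skipping another LLM turn.
    if tool_results:
        return ToolsToFinalOutputResult(
            is_final_output=True, final_output=tool_results[0].output
        )
    # No tool calls were made: let the LLM run again.
    return ToolsToFinalOutputResult(is_final_output=False, final_output=None)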
StopAtTools
Bases: TypedDict
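StopAtTools is used as a value for tool_use_behavior. A sketch, assuming its single key is stop_at_tool_names (the list of tool names to stop at):

from agents import Agent, function_tool

@function_tool
def fetch_weather(city: str) -> str:
    """Return a stub weather report."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather agent",
    tools=[fetch_weather],
    # Stop and return fetch_weather's output directly whenever it is called.
    tool_use_behavior={"stop_at_tool_names": ["fetch_weather"]},
)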
MCPConfig
Bases: TypedDict
Configuration for MCP servers.
Agent
dataclass
Bases: Generic[TContext]
An agent is an AI model configured with instructions, tools, guardrails, handoffs and more.
We strongly recommend passing instructions, which is the "system prompt" for the agent. In addition, you can pass handoff_description, which is a human-readable description of the agent, used when the agent is invoked via tools/handoffs.
Agents are generic on the context type. The context is a (mutable) object you create. It is passed to tool functions, handoffs, guardrails, etc.
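A minimal construction, assuming the Runner entry point from the same package:

from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

result = Runner.run_sync(agent, "Write a haiku about recursion.")
print(result.final_output)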
instructions
class-attribute
instance-attribute
instructions: (
str
| Callable[
[RunContextWrapper[TContext], Agent[TContext]],
MaybeAwaitable[str],
]
| None
) = None
The instructions for the agent. Will be used as the "system prompt" when this agent is invoked. Describes what the agent should do, and how it responds.
Can either be a string, or a function that dynamically generates instructions for the agent. If you provide a function, it will be called with the context and the agent instance; the function may be plain or async, and it must return a string.
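For example, a sketch of a dynamic instructions function (the UserInfo context type is illustrative):

from dataclasses import dataclass

from agents import Agent, RunContextWrapper

@dataclass
class UserInfo:
    name: str

def dynamic_instructions(
    context: RunContextWrapper[UserInfo], agent: Agent[UserInfo]
) -> str:
    # Called on every run with the current context and this agent instance.
    return f"The user's name is {context.context.name}. Answer in a friendly tone."

agent = Agent[UserInfo](
    name="Assistant",
    instructions=dynamic_instructions,
)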
handoff_description
class-attribute
instance-attribute
A description of the agent. This is used when the agent serves as a handoff target, so that an LLM knows what it does and when to invoke it.
handoffs
class-attribute
instance-attribute
Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs, and the agent can choose to delegate to them if relevant. Allows for separation of concerns and modularity.
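For example, a triage agent that can delegate to a specialist (a sketch; the agent names and instructions are illustrative):

from agents import Agent

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only respond in Spanish.",
    handoff_description="Handles conversations in Spanish.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Hand off to the Spanish agent if the user writes in Spanish.",
    handoffs=[spanish_agent],
)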
model
class-attribute
instance-attribute
model: str | Model | None = None
The model implementation to use when invoking the LLM.
By default, if not set, the agent will use the default model configured in openai_provider.DEFAULT_MODEL (currently "gpt-4o").
model_settings
class-attribute
instance-attribute
model_settings: ModelSettings = field(
default_factory=ModelSettings
)
Configures model-specific tuning parameters (e.g. temperature, top_p).
tools
class-attribute
instance-attribute
tools: list[Tool] = field(default_factory=list)
A list of tools that the agent can use.
mcp_servers
class-attribute
instance-attribute
mcp_servers: list[MCPServer] = field(default_factory=list)
A list of Model Context Protocol servers that the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.
NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call server.connect() before passing the server to the agent, and server.cleanup() when the server is no longer needed.
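A sketch of that lifecycle, assuming the MCPServerStdio helper from agents.mcp and a locally runnable MCP server command:

import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main() -> None:
    server = MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    )
    await server.connect()  # connect before passing the server to the agent
    try:
        agent = Agent(
            name="File assistant",
            instructions="Use the filesystem tools to answer questions.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "List the files in the current directory.")
        print(result.final_output)
    finally:
        await server.cleanup()  # clean up when the server is no longer needed

asyncio.run(main())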
mcp_config
class-attribute
instance-attribute
Configuration for MCP servers.
input_guardrails
class-attribute
instance-attribute
input_guardrails: list[InputGuardrail[TContext]] = field(
default_factory=list
)
A list of checks that run in parallel to the agent's execution, before generating a response. Runs only if the agent is the first agent in the chain.
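A sketch of one such check, assuming the input_guardrail decorator and GuardrailFunctionOutput exported by the package:

from agents import (
    Agent,
    GuardrailFunctionOutput,
    RunContextWrapper,
    input_guardrail,
)

@input_guardrail
async def reject_empty_input(
    context: RunContextWrapper, agent: Agent, user_input
) -> GuardrailFunctionOutput:
    # Trip the guardrail when the input is an empty string.
    is_empty = isinstance(user_input, str) and not user_input.strip()
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=is_empty)

agent = Agent(
    name="Assistant",
    instructions="Be helpful.",
    input_guardrails=[reject_empty_input],
)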
output_guardrails
class-attribute
instance-attribute
output_guardrails: list[OutputGuardrail[TContext]] = field(
default_factory=list
)
A list of checks that run on the final output of the agent, after generating a response. Runs only if the agent produces a final output.
output_type
class-attribute
instance-attribute
The type of the output object. If not provided, the output will be str.
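For structured output, pass a type such as a Pydantic model (a sketch; the CalendarEvent model is illustrative):

from pydantic import BaseModel

from agents import Agent

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

agent = Agent(
    name="Calendar extractor",
    instructions="Extract calendar events from the text.",
    output_type=CalendarEvent,
)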
hooks
class-attribute
instance-attribute
hooks: AgentHooks[TContext] | None = None
A class that receives callbacks on various lifecycle events for this agent.
tool_use_behavior
class-attribute
instance-attribute
tool_use_behavior: (
Literal["run_llm_again", "stop_on_first_tool"]
| StopAtTools
| ToolsToFinalOutputFunction
) = "run_llm_again"
This lets you configure how tool use is handled.
- "run_llm_again": The default behavior. Tools are run, and then the LLM receives the results and gets to respond.
- "stop_on_first_tool": The output of the first tool call is used as the final output, meaning the LLM does not process the result of the tool call (see the sketch after this list).
- A StopAtTools dict listing tool names: The agent will stop running if any of the listed tools is called. The final output will be the output of the first matching tool call; the LLM does not process the result of that tool call.
- A function: If you pass a function, it will be called with the run context and the list of tool results. It must return a ToolsToFinalOutputResult, which determines whether the tool calls result in a final output.
NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search and web search, are always processed by the LLM.
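For example, "stop_on_first_tool" short-circuits the loop (a sketch; the tool is illustrative):

from agents import Agent, function_tool

@function_tool
def get_time() -> str:
    """Return the current UTC time."""
    from datetime import datetime, timezone

    return datetime.now(timezone.utc).isoformat()

agent = Agent(
    name="Clock",
    tools=[get_time],
    # The first tool call's output becomes the final output; the LLM never sees it.
    tool_use_behavior="stop_on_first_tool",
)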
reset_tool_choice
class-attribute
instance-attribute
reset_tool_choice: bool = True
Whether to reset the tool choice to the default value after a tool has been called. Defaults to True. This ensures that the agent doesn't enter an infinite loop of tool usage.
clone
clone(**kwargs: Any) -> Agent[TContext]
Make a copy of the agent, with the given arguments changed. For example, you could do:
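# clone() returns a copy of this agent with only the given fields changed.
new_agent = agent.clone(instructions="New instructions")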
as_tool
as_tool(
tool_name: str | None,
tool_description: str | None,
custom_output_extractor: Callable[
[RunResult], Awaitable[str]
]
| None = None,
) -> Tool
Transform this agent into a tool, callable by other agents.
This is different from handoffs in two ways:
1. In handoffs, the new agent receives the conversation history. In this tool, the new agent receives generated input.
2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is called as a tool, and the conversation is continued by the original agent.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tool_name | str \| None | The name of the tool. If not provided, the agent's name will be used. | required |
| tool_description | str \| None | The description of the tool, which should indicate what it does and when to use it. | required |
| custom_output_extractor | Callable[[RunResult], Awaitable[str]] \| None | A function that extracts the output from the agent. If not provided, the last message from the agent will be used. | None |
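A sketch of wrapping one agent as a tool for another (names and instructions are illustrative):

from agents import Agent

summarizer = Agent(
    name="Summarizer",
    instructions="Summarize the provided text in one paragraph.",
)

orchestrator = Agent(
    name="Research assistant",
    instructions="Use the summarize tool to condense long passages.",
    tools=[
        summarizer.as_tool(
            tool_name="summarize",
            tool_description="Summarize a passage of text in one paragraph.",
        )
    ],
)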
get_system_prompt
async
get_system_prompt(
run_context: RunContextWrapper[TContext],
) -> str | None
Get the system prompt for the agent.
get_mcp_tools
async
get_mcp_tools() -> list[Tool]
Fetches the available tools from the MCP servers.
get_all_tools
async
get_all_tools() -> list[Tool]