Results
When you run your agent, you will receive either a:

- `RunResult` if you call `run` without `stream: true`
- `StreamedRunResult` if you call `run` with `stream: true`

For details on streaming, also check the streaming guide.
Final output

The `finalOutput` property contains the final output of the last agent that ran. This result is either:

- `string` — the default for any agent that has no `outputType` defined
- `unknown` — if the agent has a JSON schema defined as its output type. In this case the JSON was parsed, but you still have to verify its type manually.
- `z.infer<outputType>` — if the agent has a Zod schema defined as its output type. The output will automatically be parsed against this schema.
- `undefined` — if the agent did not produce an output (for example, it stopped before it could produce one)
If you are using handoffs with different output types, you should use the `Agent.create()` method instead of the `new Agent()` constructor to create your agents. This will enable the SDK to infer the output types across all possible handoffs and provide a union type for the `finalOutput` property.
For example:

```typescript
import { Agent, run } from '@openai/agents';
import { z } from 'zod';

const refundAgent = new Agent({
  name: 'Refund Agent',
  instructions: 'You are a refund agent. You are responsible for refunding customers.',
  outputType: z.object({
    refundApproved: z.boolean(),
  }),
});

const orderAgent = new Agent({
  name: 'Order Agent',
  instructions: 'You are an order agent. You are responsible for processing orders.',
  outputType: z.object({
    orderId: z.string(),
  }),
});

const triageAgent = Agent.create({
  name: 'Triage Agent',
  instructions: 'You are a triage agent. You are responsible for triaging customer issues.',
  handoffs: [refundAgent, orderAgent],
});

const result = await run(triageAgent, 'I need a refund for my order');

const output = result.finalOutput;
// ^? { refundApproved: boolean } | { orderId: string } | string | undefined
```
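Because `finalOutput` is a union here, TypeScript will make you narrow it before use. A minimal sketch of that narrowing, using a local type alias that stands in for the inferred union above (the alias and the helper function are illustrative, not part of the SDK):

```typescript
// Stand-in for the union the SDK would infer for finalOutput in the example above.
type TriageOutput =
  | { refundApproved: boolean }
  | { orderId: string }
  | string
  | undefined;

function describeOutput(output: TriageOutput): string {
  if (output === undefined) return 'no output produced';
  if (typeof output === 'string') return `plain text: ${output}`;
  // Property checks narrow the remaining object members of the union.
  if ('refundApproved' in output) return `refund approved: ${output.refundApproved}`;
  return `order processed: ${output.orderId}`;
}
```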
Inputs for the next turn

There are two ways you can access the inputs for your next turn:

- `result.history` — contains a copy of both your input and the output of the agents.
- `result.output` — contains the output of the full agent run.
`history` is a convenient way to maintain a full history in a chat-like use case:
```typescript
import { AgentInputItem, Agent, user, run } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant knowledgeable about recent AGI research.',
});

let history: AgentInputItem[] = [
  // initial message
  user('Are we there yet?'),
];

for (let i = 0; i < 10; i++) {
  // run 10 times
  const result = await run(agent, history);

  // update the history to the new output
  history = result.history;

  history.push(user('How about now?'));
}
```
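In a long-running chat, the history grows on every turn, so you may want to cap it before the next run. A sketch of one way to do that, using a simplified item shape and a hypothetical `trimHistory` helper (neither is part of the SDK):

```typescript
// Simplified stand-in for AgentInputItem; the real type is richer.
type ChatItem = { role: 'user' | 'assistant'; content: string };

// Hypothetical helper: keep the first item plus the most recent entries.
function trimHistory(history: ChatItem[], maxItems: number): ChatItem[] {
  if (history.length <= maxItems) return history;
  return [history[0], ...history.slice(history.length - (maxItems - 1))];
}
```

Keeping the first item intact preserves any opening instructions or framing message while older turns are dropped.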
Last agent

The `lastAgent` property contains the last agent that ran. Depending on your application, this is often useful for the next time the user inputs something. For example, if you have a frontline triage agent that hands off to a language-specific agent, you can store the last agent and reuse it the next time the user messages the agent.

In streaming mode it can also be useful to access the `currentAgent` property, which maps to the agent that is currently running.
New items

The `newItems` property contains the new items generated during the run. The items are `RunItem`s. A run item wraps the raw item generated by the LLM. In addition to the LLM's output, these items let you determine which agent an event was associated with.

- `RunMessageOutputItem` indicates a message from the LLM. The raw item is the message generated.
- `RunHandoffCallItem` indicates that the LLM called the handoff tool. The raw item is the tool call item from the LLM.
- `RunHandoffOutputItem` indicates that a handoff occurred. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item.
- `RunToolCallItem` indicates that the LLM invoked a tool.
- `RunToolCallOutputItem` indicates that a tool was called. The raw item is the tool response. You can also access the tool output from the item.
- `RunReasoningItem` indicates a reasoning item from the LLM. The raw item is the reasoning generated.
- `RunToolApprovalItem` indicates that the LLM requested approval for a tool call. The raw item is the tool call item from the LLM.
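To see how you might fan out over item kinds like these, here is a sketch that uses a plain discriminated union as a stand-in for the SDK's run-item classes (the `type` strings and item shapes below are illustrative, not the SDK's actual API):

```typescript
// Illustrative stand-ins for a few run-item kinds.
type RunItemLike =
  | { type: 'message_output'; agentName: string; text: string }
  | { type: 'tool_call'; agentName: string; toolName: string }
  | { type: 'tool_call_output'; agentName: string; toolName: string; output: string };

// Build a human-readable trace line per item, tagged with the owning agent.
function summarize(items: RunItemLike[]): string[] {
  return items.map((item) => {
    switch (item.type) {
      case 'message_output':
        return `[${item.agentName}] said: ${item.text}`;
      case 'tool_call':
        return `[${item.agentName}] called ${item.toolName}`;
      case 'tool_call_output':
        return `[${item.agentName}] ${item.toolName} returned ${item.output}`;
    }
  });
}
```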
State

The `state` property contains the state of the run. Most of what is attached to the result is derived from the state, but the state is serializable/deserializable and can also be used as input for a subsequent call to `run`, in case you need to recover from an error or deal with an interruption.
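Because the state is serializable, you can persist it between processes and feed it back into a later run. A minimal sketch of that persistence pattern, using plain JSON and a made-up state shape (the SDK's state object provides its own serialization helpers; nothing below is its real API):

```typescript
// Made-up shape standing in for a serialized run state.
interface SavedRunState {
  currentTurn: number;
  pendingToolCall?: string;
}

// Persist: e.g. write the string to a database or file between requests.
function saveState(state: SavedRunState): string {
  return JSON.stringify(state);
}

// Restore: parse it back later to resume where the run left off.
function loadState(serialized: string): SavedRunState {
  return JSON.parse(serialized) as SavedRunState;
}
```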
Interruptions

If you are using `needsApproval` in your agent, your `run` might trigger some `interruptions` that you need to handle before continuing. In that case, `interruptions` will be an array of `ToolApprovalItem`s that caused the interruption. Check out the human-in-the-loop guide for more information on how to work with interruptions.
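The shape of an approval pass can be sketched with a stand-in approval item; the real SDK types and its approve/reject calls differ, so treat the names below as illustrative only:

```typescript
// Illustrative stand-in for a tool-approval interruption.
interface ApprovalRequest {
  toolName: string;
  arguments: string;
}

// Decide each pending approval against an allowlist of safe tools.
function resolveApprovals(
  interruptions: ApprovalRequest[],
  allowedTools: Set<string>,
): { approved: ApprovalRequest[]; rejected: ApprovalRequest[] } {
  const approved = interruptions.filter((i) => allowedTools.has(i.toolName));
  const rejected = interruptions.filter((i) => !allowedTools.has(i.toolName));
  return { approved, rejected };
}
```

Anything rejected here would typically be surfaced to a human reviewer before the run is resumed.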
Other information

Raw responses

The `rawResponses` property contains the raw LLM responses generated by the model during the agent run.
Last response ID

The `lastResponseId` property contains the ID of the last response generated by the model during the agent run.
Guardrail results

The `inputGuardrailResults` and `outputGuardrailResults` properties contain the results of the guardrails, if any. Guardrail results can sometimes contain useful information you want to log or store, so we make these available to you.
Original input

The `input` property contains the original input you provided to the run method. In most cases you won't need this, but it's available in case you do.