Results
When you run your agent, you will receive either a:

- `RunResult` if you call `run` without `stream: true`
- `StreamedRunResult` if you call `run` with `stream: true`

For details on streaming, also check the streaming guide.
Both result types expose the same core result surfaces such as `finalOutput`, `newItems`, `interruptions`, and `state`. `StreamedRunResult` adds streaming controls like `completed`, `toStream()`, `toTextStream()`, and `currentAgent`.
Choose the right result surface
Most applications only need a few properties:
| If you need… | Use |
|---|---|
| The final answer to show the user | `finalOutput` |
| A replay-ready next-turn input with the full local transcript | `history` |
| Only the newly generated model-shaped items from this run | `output` |
| Rich run items with agent/tool/handoff metadata | `newItems` |
| The agent that should usually handle the next user turn | `lastAgent` or `activeAgent` |
| OpenAI Responses API chaining with `previousResponseId` | `lastResponseId` |
| Pending approvals and a resumable snapshot | `interruptions` and `state` |
| App context, approvals, usage, and nested agent-tool input | `runContext` |
| Metadata about the current nested `Agent.asTool()` invocation, for example inside `customOutputExtractor` | `agentToolInvocation` |
| Raw model calls or guardrail diagnostics | `rawResponses` and the guardrail result arrays |
Final output
The `finalOutput` property contains the final output of the last agent that ran. This result is one of:

- `string`: the default for any agent that has no `outputType` defined.
- `unknown`: if the agent has a JSON schema defined as output type. In this case the JSON was parsed, but you still have to verify its type manually.
- `z.infer<outputType>`: if the agent has a Zod schema defined as output type. The output will automatically be parsed against this schema.
- `undefined`: if the agent did not produce an output (for example, it stopped before it could produce one).
`finalOutput` is also `undefined` while a streamed run is still in progress or when the run paused on an approval interruption before reaching a final output.
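Because `finalOutput` can be a union that includes `undefined`, it usually pays to narrow it before use. A minimal sketch with a hypothetical union — the shapes below are illustrative stand-ins, not SDK types:

```typescript
// Hypothetical finalOutput union for illustration; a real run() result types
// this from your agents' outputType definitions.
type FinalOutput = { refundApproved: boolean } | string | undefined;

function renderReply(output: FinalOutput): string {
  // Guard the undefined case first: the run may be interrupted or still streaming.
  if (output === undefined) return 'No final output yet.';
  // Plain-text output from an agent without an outputType.
  if (typeof output === 'string') return output;
  // Structured output from a Zod outputType: narrow by property.
  return output.refundApproved ? 'Refund approved.' : 'Refund denied.';
}
```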
If you are using handoffs with different output types, you should use the `Agent.create()` method instead of the `new Agent()` constructor to create your agents.
This enables the SDK to infer the output types across all possible handoffs and provide a union type for the `finalOutput` property.
For example:
```typescript
import { Agent, run } from '@openai/agents';
import { z } from 'zod';

const refundAgent = new Agent({
  name: 'Refund Agent',
  instructions:
    'You are a refund agent. You are responsible for refunding customers.',
  outputType: z.object({
    refundApproved: z.boolean(),
  }),
});

const orderAgent = new Agent({
  name: 'Order Agent',
  instructions:
    'You are an order agent. You are responsible for processing orders.',
  outputType: z.object({
    orderId: z.string(),
  }),
});

const triageAgent = Agent.create({
  name: 'Triage Agent',
  instructions:
    'You are a triage agent. You are responsible for triaging customer issues.',
  handoffs: [refundAgent, orderAgent],
});

const result = await run(triageAgent, 'I need a refund for my order');

const output = result.finalOutput;
// ^? { refundApproved: boolean } | { orderId: string } | string | undefined
```

Input and output surfaces
These properties answer different questions:
| Property | What it contains | Best for |
|---|---|---|
| `input` | The base input for this run. If a handoff input filter rewrote the history, this reflects the filtered input the run continued with. | Auditing what this run actually used as input |
| `output` | Only the model-shaped items generated in this run, without agent metadata. | Storing or replaying just the new model delta |
| `newItems` | Rich `RunItem` wrappers with agent/tool/handoff metadata. | Logs, UIs, audits, and debugging |
| `history` | A replay-ready next-turn input built from `input` + `newItems`. | Manual chat loops and client-managed conversation state |
In practice:
- Use `history` when you are manually carrying the whole conversation in your application.
- Use `output` when you already store prior history elsewhere and only want the new generated items from this run.
- Use `newItems` when you need agent associations, tool outputs, handoff boundaries, or approval items.
- If you are using `conversationId` or `previousResponseId`, you usually do not pass `history` back into `run()`. Instead, pass only the new user input and reuse the server-managed ID. See Running agents for the full comparison.
`history` is a convenient way to maintain a full history in a chat-like use case:
```typescript
import { Agent, user, run } from '@openai/agents';
import type { AgentInputItem } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions:
    'You are a helpful assistant knowledgeable about recent AGI research.',
});

let history: AgentInputItem[] = [
  // initial message
  user('Are we there yet?'),
];

for (let i = 0; i < 10; i++) {
  // run 10 times
  const result = await run(agent, history);

  // update the history to the new output
  history = result.history;

  history.push(user('How about now?'));
}
```

New items
`newItems` gives you the richest view of what happened during the run. Common item types are:
- `RunMessageOutputItem` for assistant messages.
- `RunReasoningItem` for reasoning items.
- `RunToolSearchCallItem` and `RunToolSearchOutputItem` for Responses tool-search requests and the loaded tool definitions they return.
- `RunToolCallItem` and `RunToolCallOutputItem` for tool calls and their results.
- `RunToolApprovalItem` for tool calls that paused for approval.
- `RunHandoffCallItem` and `RunHandoffOutputItem` for handoff requests and completed transfers.
Choose `newItems` over `output` whenever you need to know which agent produced an item or whether it marks a tool, tool-search, handoff, or approval boundary. When you use `toolSearchTool()`, these tool-search items are the easiest way to inspect which deferred tools or namespaces were loaded before the normal tool call happened.
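For example, you might fold the run's items into a human-readable audit log. The item shapes below are simplified stand-ins for the SDK's `RunItem` wrappers, and the `type` strings are assumptions for illustration — check the actual types exported by your SDK version:

```typescript
// Simplified stand-ins for RunItem wrappers; real SDK items carry full
// agent references and raw model items.
type ItemLike =
  | { type: 'message_output_item'; agentName: string; text: string }
  | { type: 'tool_call_item'; agentName: string; toolName: string }
  | { type: 'handoff_output_item'; fromAgent: string; toAgent: string };

function auditLine(item: ItemLike): string {
  switch (item.type) {
    case 'message_output_item':
      return `[${item.agentName}] said: ${item.text}`;
    case 'tool_call_item':
      return `[${item.agentName}] called tool ${item.toolName}`;
    case 'handoff_output_item':
      return `handoff: ${item.fromAgent} -> ${item.toAgent}`;
  }
}
```

The discriminated-union `switch` is exhaustive, so adding a new item type to `ItemLike` forces the compiler to flag any unhandled case.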
Continue or resume the conversation
Active agent
The `lastAgent` property contains the last agent that ran. This is often the best agent to reuse for the next user turn after handoffs. `activeAgent` is an alias for the same value.
In streaming mode, `currentAgent` tells you which agent is currently active while the run is still in progress.
Interruptions and resumable state
If a tool needs approval, the run pauses and `interruptions` contains the pending `RunToolApprovalItem`s. This can include approvals raised by direct tools, by tools reached after a handoff, or by nested `agent.asTool()` runs.
Resolve approvals through `result.state.approve(...)` / `result.state.reject(...)`, then pass the same `state` back into `run()` to resume. You do not need to resolve every interruption at once. If you rerun after handling only some items, resolved calls can continue while unresolved ones stay pending and pause the run again.
The `state` property is the serializable snapshot behind the result. Use it for human-in-the-loop, retry flows, or any case where you need to resume a paused run later.
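The partial-resolution rule can be modeled as a pure function: any still-undecided approval keeps the run paused, while decided ones may proceed. This is an illustrative model of the behavior described above, not the SDK's internal logic:

```typescript
// Illustrative model: an approval is "resolved" once approve/reject has been
// called; any undecided approval keeps the resumed run paused again.
type ApprovalLike = { toolName: string; approved?: boolean };

function splitApprovals(items: ApprovalLike[]) {
  const pending = items.filter((i) => i.approved === undefined);
  const resolved = items.filter((i) => i.approved !== undefined);
  return { pending, resolved, stillPaused: pending.length > 0 };
}
```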
Server-managed continuation
`lastResponseId` is the value to pass as `previousResponseId` on the next turn when you are using OpenAI Responses API chaining.
If you are already continuing the conversation with `history`, `session`, or `conversationId`, you usually do not need `lastResponseId`. If you need every raw model response from a multi-step run, inspect `rawResponses` instead.
Nested agent-tool metadata
`agentToolInvocation` is for nested `Agent.asTool()` results, especially when you are inside `customOutputExtractor` and want metadata about the current tool invocation. It is not a general "the whole run has finished" summary field.
In that nested context, `agentToolInvocation` exposes:

- `toolName`
- `toolCallId`
- `toolArguments`
Pair it with `result.runContext.toolInput` when you also need the structured input passed into that nested agent-tool run.
On normal top-level `run()` results this is usually `undefined`. The metadata is runtime-only and is not serialized into `RunState`. See Agents as tools for the surrounding pattern.
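As a sketch, an output extractor might tag its result with this metadata. The shape below is an assumption modeled only on the three fields listed above, not the SDK's actual type:

```typescript
// Assumed shape, modeled on the documented fields toolName/toolCallId/toolArguments.
type InvocationLike = { toolName: string; toolCallId: string; toolArguments: string };

function tagOutput(output: string, invocation?: InvocationLike): string {
  // Top-level runs have no nested invocation, so fall back gracefully.
  if (!invocation) return output;
  return `[${invocation.toolName}#${invocation.toolCallId}] ${output}`;
}
```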
Streamed results
`StreamedRunResult` inherits the same result surfaces above, but adds streaming-specific controls:
- `toTextStream()` for assistant text only.
- `toStream()` or `for await ... of stream` for the full event stream.
- `completed` to wait until the run and all post-processing callbacks finish.
- `error` and `cancelled` to inspect the terminal streaming state.
- `currentAgent` to track the active agent mid-run.
If you need the settled final state of a streamed run, wait for `completed` before reading `finalOutput`, `history`, `interruptions`, or other summary properties. For event-by-event handling, see the streaming guide.
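The "wait for `completed`" rule can be sketched with a plain promise. The stream here is a stub so the control flow is clear; a simplified model, not the real `StreamedRunResult` type:

```typescript
// Simplified model: summary fields are only safe to read after `completed` settles.
type StreamLike = { completed: Promise<void>; finalOutput?: string };

async function settledFinalOutput(stream: StreamLike): Promise<string | undefined> {
  await stream.completed; // the run and post-processing callbacks have finished
  return stream.finalOutput;
}
```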
Diagnostics and advanced fields
Run context
The `runContext` property is the supported public view of the run context on the result. `result.runContext.context` is your app context, and the same object also carries SDK-managed runtime metadata such as approvals, usage, and nested `toolInput`. See Context for the full shape.
Raw responses
`rawResponses` contains the raw model responses collected during the run. Multi-step runs can produce more than one response, for example across handoffs or repeated tool/model cycles.
Guardrail results
The `inputGuardrailResults` and `outputGuardrailResults` properties contain agent-level guardrail results. Tool guardrail results are exposed separately via `toolInputGuardrailResults` and `toolOutputGuardrailResults`.
Use these arrays when you want to log guardrail decisions, inspect extra metadata returned by guardrail functions, or debug why a run was blocked.
Usage

Token usage is aggregated in `result.state.usage`, which tracks request counts and token totals for the run. The same usage object is also available through `result.runContext.usage`. For streaming runs this data updates as responses arrive.
```typescript
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Usage Tracker',
  instructions: 'Summarize the latest project update in one sentence.',
});

const result = await run(
  agent,
  'Summarize this: key customer feedback themes and the next product iteration.',
);

const usage = result.state.usage;
console.log({
  requests: usage.requests,
  inputTokens: usage.inputTokens,
  outputTokens: usage.outputTokens,
  totalTokens: usage.totalTokens,
});

if (usage.requestUsageEntries) {
  for (const entry of usage.requestUsageEntries) {
    console.log('request', {
      endpoint: entry.endpoint,
      inputTokens: entry.inputTokens,
      outputTokens: entry.outputTokens,
      totalTokens: entry.totalTokens,
    });
  }
}
```