Context Management
Context is an overloaded term. There are two main classes of context you might care about:
- Local context that your code can access during a run: dependencies or data needed by tools, callbacks like `onHandoff`, and lifecycle hooks.
- Agent/LLM context that the language model can see when generating a response.
Local context
Local context is represented by the `RunContext<T>` type. You create any object to hold your state or dependencies and pass it to `Runner.run()`. All tool calls and hooks receive a `RunContext` wrapper so they can read from or modify that object.
```typescript
import { Agent, run, RunContext, tool } from '@openai/agents';
import { z } from 'zod';

interface UserInfo {
  name: string;
  uid: number;
}

const fetchUserAge = tool({
  name: 'fetch_user_age',
  description: 'Return the age of the current user',
  parameters: z.object({}),
  execute: async (
    _args,
    runContext?: RunContext<UserInfo>,
  ): Promise<string> => {
    return `User ${runContext?.context.name} is 47 years old`;
  },
});

async function main() {
  const userInfo: UserInfo = { name: 'John', uid: 123 };

  const agent = new Agent<UserInfo>({
    name: 'Assistant',
    tools: [fetchUserAge],
  });

  const result = await run(agent, 'What is the age of the user?', {
    context: userInfo,
  });

  console.log(result.finalOutput);
  // The user John is 47 years old.
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

Every agent, tool and hook participating in a single run must use the same type of context.
Use local context for things like:
- Data about the run (user name, IDs, etc.)
- Dependencies such as loggers or data fetchers
- Helper functions
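The dependency-injection pattern behind local context can be sketched without the SDK: a context object bundles run data and dependencies, and everything participating in the run reads them through a shared wrapper. The names below (`ContextWrapper`, `greetTool`, `AppContext`) are illustrative stand-ins, not SDK APIs.

```typescript
// Illustrative sketch of the local-context pattern; not the SDK's implementation.
interface Logger {
  log: (msg: string) => void;
}

interface AppContext {
  userName: string; // data about the run
  logger: Logger;   // injected dependency
}

// Minimal stand-in for RunContext<T>: a wrapper around app-defined state.
class ContextWrapper<T> {
  constructor(public context: T) {}
}

// A "tool" that reads its dependencies from the shared context instead of globals.
function greetTool(runContext: ContextWrapper<AppContext>): string {
  runContext.context.logger.log('greetTool called');
  return `Hello, ${runContext.context.userName}!`;
}

const ctx = new ContextWrapper<AppContext>({
  userName: 'John',
  logger: { log: (msg) => console.log(`[run] ${msg}`) },
});

console.log(greetTool(ctx)); // → Hello, John!
```

Because the wrapper holds a reference to your object rather than a copy, a tool can mutate the context and later hooks will observe the change.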
Within a single run, derived contexts share the same underlying app context, approvals, and usage tracking. Nested `agent.asTool()` runs may attach a different `toolInput`, but they do not get an isolated copy of your app state by default.
What RunContext exposes
`RunContext<T>` is a wrapper around your app-defined context object. In practice you will most often use:
- `runContext.context` for your own mutable app state and dependencies.
- `runContext.usage` for the aggregated token/request usage of the current run.
- `runContext.toolInput` for structured input when the current run is executing inside `agent.asTool()`.
- `runContext.approveTool(...)` / `runContext.rejectTool(...)` when you need to update approval state programmatically.
Only `runContext.context` is your app-defined object. The other fields are runtime metadata managed by the SDK.
If you later serialize a `RunState` for human-in-the-loop, that runtime metadata is saved with the state. Avoid putting secrets in `runContext.context` if you intend to persist or transmit serialized state.
If you subclass `RunContext`, verify that nested or derived runs still preserve any subclass-specific instance state you rely on. The SDK creates forked contexts internally during nested runs.
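The split between app-owned state and SDK-managed metadata can be illustrated with a small sketch. The class below is hypothetical; only the field names mirror the SDK's `RunContext`.

```typescript
// Illustrative sketch: only .context is app-owned; usage is runtime metadata.
interface Usage {
  requests: number;
  totalTokens: number;
}

class SketchRunContext<T> {
  usage: Usage = { requests: 0, totalTokens: 0 }; // managed by the runtime
  constructor(public context: T) {}               // managed by your app
}

const rc = new SketchRunContext({ userName: 'John' });

// Your tools may mutate app state...
rc.context.userName = 'Johnny';

// ...while the runtime accumulates usage separately on the same wrapper.
rc.usage.requests += 1;
rc.usage.totalTokens += 120;

console.log(rc.context.userName, rc.usage.totalTokens); // Johnny 120
```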
Agent/LLM context
When the LLM is called, the only data it can see comes from the conversation history. To make additional information available you have a few options:
- Add it to the Agent `instructions` – also known as a system or developer message. This can be a static string or a function that receives the context and returns a string.
- Include it in the `input` when calling `Runner.run()`. This is similar to the instructions technique but lets you place the message lower in the chain of command.
- Expose it via function tools so the LLM can fetch data on demand.
- Use retrieval or web search tools to ground responses in relevant data from files, databases, or the web.
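The first option, deriving instructions from the run context, can be sketched in plain TypeScript. The `Ctx` interface below is a hypothetical stand-in for the SDK's `RunContext<T>`, modeling only the `.context` field:

```typescript
interface UserInfo {
  name: string;
  uid: number;
}

// Hypothetical stand-in for RunContext<T>; only .context is modeled here.
interface Ctx<T> {
  context: T;
}

// Dynamic instructions: build the system/developer message from run context,
// so the LLM sees per-run data without it being hardcoded into the agent.
const buildInstructions = (runContext: Ctx<UserInfo>): string =>
  `You are a helpful assistant. The current user is ${runContext.context.name} (uid ${runContext.context.uid}).`;

const instructions = buildInstructions({ context: { name: 'John', uid: 123 } });
console.log(instructions);
// → You are a helpful assistant. The current user is John (uid 123).
```

A function like this would be passed where a static `instructions` string goes, letting the same agent definition serve different users.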