Context management
Context is an overloaded term. There are two main classes of context you might care about:
- Local context that your code can access during a run: dependencies or data needed by tools, callbacks like `onHandoff`, and lifecycle hooks.
- Agent/LLM context that the language model can see when generating a response.
Local context
Local context is represented by the `RunContext<T>` type. You create any object to hold your state or dependencies and pass it to `Runner.run()`. All tool calls and hooks receive a `RunContext` wrapper so they can read from or modify that object.
```ts
import { Agent, run, RunContext, tool } from '@openai/agents';
import { z } from 'zod';

interface UserInfo {
  name: string;
  uid: number;
}

const fetchUserAge = tool({
  name: 'fetch_user_age',
  description: 'Return the age of the current user',
  parameters: z.object({}),
  execute: async (
    _args,
    runContext?: RunContext<UserInfo>,
  ): Promise<string> => {
    return `User ${runContext?.context.name} is 47 years old`;
  },
});

async function main() {
  const userInfo: UserInfo = { name: 'John', uid: 123 };

  const agent = new Agent<UserInfo>({
    name: 'Assistant',
    tools: [fetchUserAge],
  });

  const result = await run(agent, 'What is the age of the user?', {
    context: userInfo,
  });

  console.log(result.finalOutput); // The user John is 47 years old.
}

if (require.main === module) {
  main().catch(console.error);
}
```
Every agent, tool, and hook participating in a single run must use the same context type.
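For instance, extending the example above, a second tool participating in the same run would type its `runContext` parameter with the same `UserInfo` context (a sketch; `fetch_user_name` is a hypothetical tool added for illustration):

```ts
// Hypothetical companion tool: it shares the agent's UserInfo context type.
const fetchUserName = tool({
  name: 'fetch_user_name',
  description: 'Return the name of the current user',
  parameters: z.object({}),
  execute: async (
    _args,
    runContext?: RunContext<UserInfo>,
  ): Promise<string> => {
    return runContext?.context.name ?? 'unknown';
  },
});

// Agent<UserInfo> accepts both tools because their context types agree;
// a tool typed against a different context would not type-check.
const assistant = new Agent<UserInfo>({
  name: 'Assistant',
  tools: [fetchUserAge, fetchUserName],
});
```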
Use local context for things like:
- Data about the run (user name, IDs, etc.)
- Dependencies such as loggers or data fetchers
- Helper functions
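Here is a minimal sketch of the dependency pattern, assuming a hypothetical `RunDeps` shape that bundles a logger with mutable run state. Because tools receive the same object you passed to `run()`, any mutations they make remain visible after the run:

```ts
import { Agent, run, RunContext, tool } from '@openai/agents';
import { z } from 'zod';

// Hypothetical context shape: a logger dependency plus mutable run state.
interface RunDeps {
  log: (message: string) => void;
  savedNotes: string[];
}

const saveNote = tool({
  name: 'save_note',
  description: 'Save a short note for the user',
  parameters: z.object({ note: z.string() }),
  execute: async (
    { note },
    runContext?: RunContext<RunDeps>,
  ): Promise<string> => {
    runContext?.context.log(`saving note: ${note}`);
    runContext?.context.savedNotes.push(note);
    return `Saved "${note}".`;
  },
});

async function main() {
  const deps: RunDeps = { log: console.log, savedNotes: [] };
  const agent = new Agent<RunDeps>({ name: 'Notetaker', tools: [saveNote] });
  await run(agent, 'Save a note that says "buy milk".', { context: deps });
  console.log(deps.savedNotes); // the mutation persists after the run
}

main().catch(console.error);
```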
Agent/LLM context
When the LLM is called, the only data it can see comes from the conversation history. To make additional information available, you have a few options:
- Add it to the Agent `instructions` – also known as a system or developer message. This can be a static string or a function that receives the context and returns a string.
- Include it in the `input` when calling `Runner.run()`. This is similar to the `instructions` technique, but lets you place the message lower in the chain of command.
- Expose it via function tools so the LLM can fetch data on demand.
- Use retrieval or web search tools to ground responses in relevant data from files, databases, or the web.
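For example, the first option with a function-valued `instructions` might look like the following sketch. The docs above confirm that `instructions` may be a function of the context; the exact callback signature shown here is an assumption:

```ts
import { Agent, run } from '@openai/agents';

interface UserInfo {
  name: string;
  uid: number;
}

const agent = new Agent<UserInfo>({
  name: 'Assistant',
  // Assumed callback shape: a function receiving the run context and
  // returning the system/developer message as a string.
  instructions: (runContext) =>
    `You are assisting ${runContext.context.name} (uid ${runContext.context.uid}). ` +
    'Address the user by name.',
});

async function main() {
  const result = await run(agent, 'Who am I?', {
    context: { name: 'John', uid: 123 },
  });
  console.log(result.finalOutput);
}

main().catch(console.error);
```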