Models

Every Agent ultimately calls an LLM. The SDK abstracts models behind two lightweight interfaces:

  • Model – knows how to make one request against a specific API.
  • ModelProvider – resolves human‑readable model names (e.g. 'gpt-5.2') to Model instances.

In day‑to‑day work you normally only interact with model names and occasionally ModelSettings.

Specifying a model per‑agent
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Creative writer',
  model: 'gpt-5.2',
});

When you don’t specify a model when initializing an Agent, the default model is used. The default is currently gpt-4.1, chosen for compatibility and low latency. If you have access to gpt-5.2, we recommend it for higher quality, together with explicit modelSettings.

If you want to switch to other models like gpt-5.2, there are two ways to configure your agents.

First, if you want to consistently use a specific model for all agents that do not set a custom model, set the OPENAI_DEFAULT_MODEL environment variable before running your agents.

Terminal window
export OPENAI_DEFAULT_MODEL=gpt-5.2
node my-awesome-agent.js

Second, you can set a default model for a Runner instance. If you don’t set a model for an agent, this Runner’s default model will be used.

Set a default model for a Runner
import { Runner } from '@openai/agents';

const runner = new Runner({ model: 'gpt-4.1-mini' });

When you use any GPT-5.x model such as gpt-5.2 in this way, the SDK applies default modelSettings tuned for most use cases. To adjust the reasoning effort for the default model, pass your own modelSettings:

Customize GPT-5 default settings
import { Agent } from '@openai/agents';

const myAgent = new Agent({
  name: 'My Agent',
  instructions: "You're a helpful agent.",
  // If OPENAI_DEFAULT_MODEL=gpt-5.2 is set, passing only modelSettings works.
  // It's also fine to pass a GPT-5.x model name explicitly:
  model: 'gpt-5.2',
  modelSettings: {
    reasoning: { effort: 'high' },
    text: { verbosity: 'low' },
  },
});

For lower latency, use reasoning.effort: "none" with gpt-5.2. The gpt-4.1 family (including the mini and nano variants) also remains a solid choice for building interactive agent apps.
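
A low-latency setup might look like this (a minimal sketch; the agent name and instructions are placeholders):

Low-latency settings
import { Agent } from '@openai/agents';

// Disable extended reasoning to reduce response latency.
const quickAgent = new Agent({
  name: 'Quick responder', // placeholder
  instructions: 'Answer briefly.',
  model: 'gpt-5.2',
  modelSettings: {
    reasoning: { effort: 'none' },
  },
});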

If you pass a non–GPT-5 model name without custom modelSettings, the SDK reverts to generic modelSettings compatible with any model.


The default OpenAI provider

The default ModelProvider resolves names using the OpenAI APIs. It supports two distinct endpoints:

API               Usage                                                               Call setOpenAIAPI()
Chat Completions  Standard chat & function calls                                      setOpenAIAPI('chat_completions')
Responses         New streaming‑first generative API (tool calls, flexible outputs)   setOpenAIAPI('responses') (default)
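
If you need the Chat Completions endpoint instead, you can switch the global default before creating any agents. A minimal sketch (assuming setOpenAIAPI is imported from '@openai/agents'):

Switch to Chat Completions
import { setOpenAIAPI } from '@openai/agents';

// Route default-provider requests through Chat Completions rather than the
// Responses API (the default).
setOpenAIAPI('chat_completions');
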
Set default OpenAI key
import { setDefaultOpenAIKey } from '@openai/agents';
setDefaultOpenAIKey(process.env.OPENAI_API_KEY!); // sk-...

You can also plug your own OpenAI client via setDefaultOpenAIClient(client) if you need custom networking settings.
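For example, a sketch with a custom base URL and timeout (both placeholders):

Custom OpenAI client
import OpenAI from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Hypothetical setup: route requests through a proxy with a custom timeout.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://proxy.example.com/v1', // placeholder endpoint
  timeout: 30_000,
});

setDefaultOpenAIClient(client);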


ModelSettings

ModelSettings mirrors the OpenAI parameters but is provider‑agnostic.

Field              Type                                                      Notes
temperature        number                                                    Creativity vs. determinism.
topP               number                                                    Nucleus sampling.
frequencyPenalty   number                                                    Penalise repeated tokens.
presencePenalty    number                                                    Encourage new tokens.
toolChoice         'auto' | 'required' | 'none' | string                     See forcing tool use.
parallelToolCalls  boolean                                                   Allow parallel function calls where supported.
truncation         'auto' | 'disabled'                                       Token truncation strategy.
maxTokens          number                                                    Maximum tokens in the response.
store              boolean                                                   Persist the response for retrieval / RAG workflows.
reasoning.effort   'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh'  Reasoning effort for gpt-5.x models.
text.verbosity     'low' | 'medium' | 'high'                                 Text verbosity for gpt-5.x models.

Attach settings at either level:

Model settings
import { Runner, Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Creative writer',
  // ...
  modelSettings: { temperature: 0.7, toolChoice: 'auto' },
});

// or globally
new Runner({ modelSettings: { temperature: 0.3 } });

Runner‑level settings override any conflicting per‑agent settings.
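
For example, a minimal sketch of the merge (names and values are placeholders):

Settings precedence
import { Runner, Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Writer',
  modelSettings: { temperature: 0.9 },
});

// For conflicting keys the runner wins: requests made through this runner
// use temperature 0.2, not the agent's 0.9.
const runner = new Runner({ modelSettings: { temperature: 0.2 } });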


Prompt

Agents can be configured with a prompt parameter, indicating a server-stored prompt configuration that should be used to control the Agent’s behavior. Currently, this option is only supported when you use the OpenAI Responses API.

Field      Type    Notes
promptId   string  Unique identifier for a prompt.
version    string  Version of the prompt you wish to use.
variables  object  A key/value pair of variables to substitute into the prompt. Values can be strings or content input types like text, images, or files.

Agent with prompt
import { parseArgs } from 'node:util';
import { Agent, run } from '@openai/agents';

/*
NOTE: This example will not work out of the box, because the default prompt ID will not
be available in your project.

To use it, please:
1. Go to https://platform.openai.com/playground/prompts
2. Create a new prompt variable, `poem_style`.
3. Create a system prompt with the content:
   Write a poem in {{poem_style}}
4. Run the example with the `--prompt-id` flag.
*/

const DEFAULT_PROMPT_ID =
  'pmpt_6965a984c7ac8194a8f4e79b00f838840118c1e58beb3332';

const POEM_STYLES = ['limerick', 'haiku', 'ballad'];

function pickPoemStyle(): string {
  return POEM_STYLES[Math.floor(Math.random() * POEM_STYLES.length)];
}

async function runDynamic(promptId: string) {
  const poemStyle = pickPoemStyle();
  console.log(`[debug] Dynamic poem_style: ${poemStyle}`);
  const agent = new Agent({
    name: 'Assistant',
    prompt: {
      promptId,
      version: '1',
      variables: { poem_style: poemStyle },
    },
  });
  const result = await run(agent, 'Tell me about recursion in programming.');
  console.log(result.finalOutput);
}

async function runStatic(promptId: string) {
  const agent = new Agent({
    name: 'Assistant',
    prompt: {
      promptId,
      version: '1',
      variables: { poem_style: 'limerick' },
    },
  });
  const result = await run(agent, 'Tell me about recursion in programming.');
  console.log(result.finalOutput);
}

async function main() {
  const args = parseArgs({
    options: {
      dynamic: { type: 'boolean', default: false },
      'prompt-id': { type: 'string', default: DEFAULT_PROMPT_ID },
    },
  });
  const promptId = args.values['prompt-id'];
  if (!promptId) {
    console.error('Please provide a prompt ID via --prompt-id.');
    process.exit(1);
  }
  if (args.values.dynamic) {
    await runDynamic(promptId);
  } else {
    await runStatic(promptId);
  }
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});

Any additional agent configuration, like tools or instructions, will override the values you may have configured in your stored prompt.
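
For example, a sketch (the prompt ID is a placeholder):

Overriding stored prompt values
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  prompt: { promptId: 'pmpt_your_prompt_id', version: '1' }, // placeholder ID
  // These instructions take precedence over the system prompt stored in the
  // prompt configuration above.
  instructions: 'Always answer in exactly two sentences.',
});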


Custom model providers

Implementing your own provider is straightforward: implement ModelProvider and Model, then pass the provider to the Runner constructor:

Minimal custom provider
import {
  ModelProvider,
  Model,
  ModelRequest,
  ModelResponse,
  ResponseStreamEvent,
} from '@openai/agents-core';
import { Agent, Runner } from '@openai/agents';

class EchoModel implements Model {
  name: string;
  constructor() {
    this.name = 'Echo';
  }
  async getResponse(request: ModelRequest): Promise<ModelResponse> {
    return {
      usage: {},
      output: [{ role: 'assistant', content: request.input as string }],
    } as any;
  }
  async *getStreamedResponse(
    _request: ModelRequest,
  ): AsyncIterable<ResponseStreamEvent> {
    yield {
      type: 'response.completed',
      response: { output: [], usage: {} },
    } as any;
  }
}

class EchoProvider implements ModelProvider {
  getModel(_modelName?: string): Promise<Model> | Model {
    return new EchoModel();
  }
}

const runner = new Runner({ modelProvider: new EchoProvider() });
console.log(runner.config.modelProvider.getModel());

const agent = new Agent({
  name: 'Test Agent',
  instructions: 'You are a helpful assistant.',
  model: new EchoModel(),
  modelSettings: { temperature: 0.7, toolChoice: 'auto' },
});
console.log(agent.model);

If you want a ready-made adapter for non-OpenAI models, see Using any model with Vercel’s AI SDK.
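
At the time of writing, that adapter is exposed as aisdk in @openai/agents-extensions. A minimal sketch, assuming an AI SDK provider package such as @ai-sdk/anthropic is installed (the model name is a placeholder):

AI SDK adapter
import { Agent } from '@openai/agents';
import { aisdk } from '@openai/agents-extensions';
import { anthropic } from '@ai-sdk/anthropic';

// Wrap an AI SDK model so it can be used anywhere the Agents SDK expects a Model.
const model = aisdk(anthropic('claude-sonnet-4-5')); // placeholder model name

const agent = new Agent({ name: 'Assistant', model });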


Tracing exporter

When using the OpenAI provider, you can opt in to automatic trace export by providing your API key:

Tracing exporter
import { setTracingExportApiKey } from '@openai/agents';
setTracingExportApiKey('sk-...');

This sends traces to the OpenAI dashboard where you can inspect the complete execution graph of your workflow.