# Models
Every Agent ultimately calls an LLM. The SDK abstracts models behind two lightweight interfaces:

- `Model` – knows how to make one request against a specific API.
- `ModelProvider` – resolves human‑readable model names (e.g. `'gpt-4o'`) to `Model` instances.
In day‑to‑day work you normally only interact with model names and occasionally `ModelSettings`.
```ts
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Creative writer',
  model: 'gpt-4.1',
});
```
## The OpenAI provider

The default `ModelProvider` resolves names using the OpenAI APIs. It supports two distinct endpoints:
| API | Usage | Call `setOpenAIAPI()` |
| --- | --- | --- |
| Chat Completions | Standard chat & function calls | `setOpenAIAPI('chat_completions')` |
| Responses | New streaming‑first generative API (tool calls, flexible outputs) | `setOpenAIAPI('responses')` (default) |
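For example, to target the Chat Completions endpoint instead of the default, call the helper once at startup. A minimal sketch:

```ts
import { setOpenAIAPI } from '@openai/agents';

// Resolve all subsequent model names against the Chat Completions
// endpoint rather than the (default) Responses endpoint.
setOpenAIAPI('chat_completions');
```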
## Authentication

```ts
import { setDefaultOpenAIKey } from '@openai/agents';

setDefaultOpenAIKey(process.env.OPENAI_API_KEY!); // sk-...
```
You can also plug in your own OpenAI client via `setDefaultOpenAIClient(client)` if you need custom networking settings.
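A minimal sketch of that pattern; the proxy URL and timeout below are assumptions for illustration:

```ts
import OpenAI from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Hypothetical setup: route traffic through an internal proxy with a
// tighter request timeout.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://llm-proxy.internal.example.com/v1', // hypothetical endpoint
  timeout: 30_000,
});

setDefaultOpenAIClient(client);
```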
## Default model

The OpenAI provider defaults to `gpt-4o`. Override it per agent or globally:
```ts
import { Runner } from '@openai/agents';

const runner = new Runner({ model: 'gpt-4.1-mini' });
```
## ModelSettings

`ModelSettings` mirrors the OpenAI parameters but is provider‑agnostic.
| Field | Type | Notes |
| --- | --- | --- |
| `temperature` | `number` | Creativity vs. determinism. |
| `topP` | `number` | Nucleus sampling. |
| `frequencyPenalty` | `number` | Penalise repeated tokens. |
| `presencePenalty` | `number` | Encourage new tokens. |
| `toolChoice` | `'auto' \| 'required' \| 'none' \| string` | See forcing tool use. |
| `parallelToolCalls` | `boolean` | Allow parallel function calls where supported. |
| `truncation` | `'auto' \| 'disabled'` | Token truncation strategy. |
| `maxTokens` | `number` | Maximum tokens in the response. |
| `store` | `boolean` | Persist the response for retrieval / RAG workflows. |
Attach settings at either level:

```ts
import { Runner, Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Creative writer',
  // ...
  modelSettings: { temperature: 0.7, toolChoice: 'auto' },
});

// or globally
new Runner({ modelSettings: { temperature: 0.3 } });
```
`Runner`‑level settings override any conflicting per‑agent settings.
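For the agent and runner above, a reading of that precedence rule (an illustration, not extra API surface):

```ts
// When the runner above executes the agent above, requests go out with
// temperature 0.3 (the runner wins the conflict) while toolChoice 'auto'
// still applies (the runner sets no conflicting value).
```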
## Prompt

Agents can be configured with a `prompt` parameter, indicating a server‑stored prompt configuration that should be used to control the agent’s behavior. Currently, this option is only supported when you use the OpenAI Responses API.
| Field | Type | Notes |
| --- | --- | --- |
| `promptId` | `string` | Unique identifier for a prompt. |
| `version` | `string` | Version of the prompt you wish to use. |
| `variables` | `object` | A key/value pair of variables to substitute into the prompt. Values can be strings or content input types like text, images, or files. |
```ts
import { Agent, run } from '@openai/agents';

async function main() {
  const agent = new Agent({
    name: 'Assistant',
    prompt: {
      promptId: 'pmpt_684b3b772e648193b92404d7d0101d8a07f7a7903e519946',
      version: '1',
      variables: {
        poem_style: 'limerick',
      },
    },
  });

  const result = await run(agent, 'Write about unrequited love.');
  console.log(result.finalOutput);
}

if (require.main === module) {
  main().catch(console.error);
}
```
Any additional agent configuration, like tools or instructions, will override the values you may have configured in your stored prompt.
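For example, locally supplied `instructions` take precedence over whatever instructions the stored prompt defines. A sketch reusing the prompt ID from above:

```ts
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  prompt: {
    promptId: 'pmpt_684b3b772e648193b92404d7d0101d8a07f7a7903e519946',
  },
  // These instructions override any instructions stored in the
  // server-side prompt configuration.
  instructions: 'Answer in exactly three sentences.',
});
```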
## Custom model providers

Implementing your own provider is straightforward – implement `ModelProvider` and `Model`, then pass the provider to the `Runner` constructor:
```ts
import {
  ModelProvider,
  Model,
  ModelRequest,
  ModelResponse,
  ResponseStreamEvent,
} from '@openai/agents-core';
import { Agent, Runner } from '@openai/agents';

// A trivial Model that echoes the request input back as the response.
class EchoModel implements Model {
  name: string;
  constructor() {
    this.name = 'Echo';
  }
  async getResponse(request: ModelRequest): Promise<ModelResponse> {
    return {
      usage: {},
      output: [{ role: 'assistant', content: request.input as string }],
    } as any;
  }
  async *getStreamedResponse(
    _request: ModelRequest,
  ): AsyncIterable<ResponseStreamEvent> {
    yield {
      type: 'response.completed',
      response: { output: [], usage: {} },
    } as any;
  }
}

// A provider that resolves every model name to an EchoModel.
class EchoProvider implements ModelProvider {
  getModel(_modelName?: string): Promise<Model> | Model {
    return new EchoModel();
  }
}

const runner = new Runner({ modelProvider: new EchoProvider() });
console.log(runner.config.modelProvider.getModel());

const agent = new Agent({
  name: 'Test Agent',
  instructions: 'You are a helpful assistant.',
  model: new EchoModel(),
  modelSettings: { temperature: 0.7, toolChoice: 'auto' },
});
console.log(agent.model);
```
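A quick way to exercise the provider is to resolve a model and call it directly. A minimal sketch assuming the `EchoProvider` definition above:

```ts
async function demo() {
  // getModel may return a Model or a Promise<Model>; await handles both.
  const model = await new EchoProvider().getModel('gpt-anything');
  const response = await model.getResponse({ input: 'Hello!' } as any);
  console.log(response.output); // [{ role: 'assistant', content: 'Hello!' }]
}

demo().catch(console.error);
```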
## Tracing exporter

When using the OpenAI provider, you can opt in to automatic trace export by providing your API key:

```ts
import { setTracingExportApiKey } from '@openai/agents';

setTracingExportApiKey('sk-...');
```
This sends traces to the OpenAI dashboard where you can inspect the complete execution graph of your workflow.
## Next steps

- Explore running agents.
- Give your models super‑powers with tools.
- Add guardrails or tracing as needed.