AI SDK Integration

Out of the box, the Agents SDK works with OpenAI models through the Responses API or Chat Completions API. If you would like to use another model, the Vercel AI SDK offers a range of supported models that can be brought into the Agents SDK through this adapter.

  1. Install the AI SDK adapter from the extensions package:

    Terminal window
    npm install @openai/agents-extensions
  2. Choose your desired model package from Vercel’s AI SDK and install it:

    Terminal window
    npm install @ai-sdk/openai
  3. Import the adapter and model to connect to your agent:

    Import the adapter
    import { openai } from '@ai-sdk/openai';
    import { aisdk } from '@openai/agents-extensions/ai-sdk';
  4. Initialize an instance of the model to be used by the agent:

    Create the model
    import { openai } from '@ai-sdk/openai';
    import { aisdk } from '@openai/agents-extensions/ai-sdk';
    const model = aisdk(openai('gpt-5.4'));
AI SDK Setup
import { Agent, run } from '@openai/agents';
// Import the model package you installed
import { openai } from '@ai-sdk/openai';
// Import the adapter
import { aisdk } from '@openai/agents-extensions/ai-sdk';

// Create a model instance to be used by the agent
const model = aisdk(openai('gpt-5.4'));

// Create an agent with the model
const agent = new Agent({
  name: 'My Agent',
  instructions: 'You are a helpful assistant.',
  model,
});

// Run the agent with the new model
run(agent, 'What is the capital of Germany?');

If you need to send provider-specific options with a message, pass them through providerData; the adapter forwards the values directly to the underlying AI SDK model as providerMetadata. For example, the following providerData in the Agents SDK

Agents SDK providerData
const providerData = {
  anthropic: {
    cacheControl: {
      type: 'ephemeral',
    },
  },
};

would become

AI SDK providerMetadata
const providerMetadata = {
  anthropic: {
    cacheControl: {
      type: 'ephemeral',
    },
  },
};

when using the AI SDK integration.
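As a standalone sketch of this mapping, the object you attach as providerData is forwarded verbatim, so the AI SDK model receives exactly the same structure under providerMetadata. The message shape below is illustrative, not the SDK's full message type:

```typescript
// Illustrative user message carrying provider-specific options.
// Only the providerData shape comes from this guide; the rest is a sketch.
const userMessage = {
  role: 'user' as const,
  content: 'Summarize this document.',
  providerData: {
    anthropic: {
      cacheControl: { type: 'ephemeral' },
    },
  },
};

// The adapter forwards the object unchanged, so the underlying AI SDK model
// sees it as providerMetadata.
const providerMetadata = userMessage.providerData;
console.log(providerMetadata.anthropic.cacheControl.type); // → ephemeral
```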

Some providers return structured output as plain text with extra wrapping, such as JSON code fences. If you need provider-specific cleanup before the Agents runtime validates the final output, pass transformOutputText when creating the adapter:

Normalize finalized output text
import { openai } from '@ai-sdk/openai';
import { aisdk } from '@openai/agents-extensions/ai-sdk';
const model = aisdk(openai('gpt-5.4'), {
  transformOutputText(text) {
    return text.match(/```(?:json)?\s*([\s\S]*?)\s*```/)?.[1]?.trim() ?? text;
  },
});

transformOutputText runs on finalized assistant text for non-streamed responses and on the final response_done event for streamed responses. It does not modify incremental output_text_delta events.
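To see what that transform does in isolation, here is the same fence-stripping logic as a standalone function (the function name is illustrative): it unwraps a JSON code fence when one is present and passes plain text through untouched.

```typescript
// Standalone version of the transform above: extract the contents of a
// ```json fence if the model wrapped its output in one, else return as-is.
function stripJsonFence(text: string): string {
  return text.match(/```(?:json)?\s*([\s\S]*?)\s*```/)?.[1]?.trim() ?? text;
}

const fence = '`'.repeat(3);
console.log(stripJsonFence(`${fence}json\n{"city":"Berlin"}\n${fence}`)); // → {"city":"Berlin"}
console.log(stripJsonFence('{"city":"Berlin"}')); // → {"city":"Berlin"}
```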

modelSettings.retry works with AI SDK-backed models too, because retries are implemented by the Agents runtime rather than only by the default OpenAI provider.

That means you can attach the same retry configuration you would use elsewhere:

  • Set modelSettings.retry on the Agent, Runner, or both.
  • Compose retryPolicies such as networkError(), httpStatus([...]), or providerSuggested().
  • Keep in mind that providerSuggested() only helps when the wrapped AI SDK model can surface retry advice through the adapter.

For a complete example using aisdk(openai(...)), see examples/ai-sdk/retry.ts. For the retry API itself, including safety boundaries for streaming and stateful follow-up requests, see the Models guide.
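The policy names above come from this guide, but the exact import path and the shape of the retry configuration are assumptions here; treat this as a sketch and check examples/ai-sdk/retry.ts against your installed version:

```typescript
import { Agent } from '@openai/agents';
import { openai } from '@ai-sdk/openai';
import { aisdk } from '@openai/agents-extensions/ai-sdk';
// Assumed import location for the retry policy helpers named in this guide.
import { retryPolicies } from '@openai/agents';

const agent = new Agent({
  name: 'Resilient Agent',
  instructions: 'You are a helpful assistant.',
  model: aisdk(openai('gpt-5.4')),
  modelSettings: {
    // Hypothetical composition: retry on network failures and on
    // transient HTTP statuses; see the Models guide for the real shape.
    retry: {
      policies: [
        retryPolicies.networkError(),
        retryPolicies.httpStatus([429, 500, 502, 503]),
      ],
    },
  },
});
```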

There are two related integrations in @openai/agents-extensions:

  • @openai/agents-extensions/ai-sdk adapts an AI SDK model so an Agent can run on it.
  • @openai/agents-extensions/ai-sdk-ui adapts a streamed Agents SDK run so AI SDK UI routes can return a standard streaming Response.
  • The @openai/agents-extensions/ai-sdk adapter is still in beta, so it is worth testing carefully with your chosen provider, especially smaller ones.
  • If you are using OpenAI models, prefer the default OpenAI model provider instead of this adapter.
  • Supported AI SDK providers must expose specificationVersion v2 or v3. If you need the older v1 provider style, copy the module from examples/ai-sdk-v1 into your project.
  • Deferred Responses tool-loading flows are not supported here. That includes toolNamespace(), function tools with deferLoading: true, and toolSearchTool(). If you need tool search, use an OpenAI Responses model directly. See the Tools guide and Models guide.

@openai/agents-extensions/ai-sdk-ui provides response helpers for wiring Agents SDK streams into AI SDK UI routes:

  • createAiSdkTextStreamResponse(source, options?) for plain text streaming responses.
  • createAiSdkUiMessageStreamResponse(source, options?) for UIMessageChunk streaming responses.

Both helpers accept a StreamedRunResult, stream-like source, or compatible wrapper object and return a Response with streaming-friendly headers.

Use createAiSdkUiMessageStreamResponse(...) when your UI needs structured chunks such as tool calls or reasoning parts. Use createAiSdkTextStreamResponse(...) when you only want plain text.

Example Next.js route for UI message streaming:

UI message stream response
import { Agent, run } from '@openai/agents';
import { createAiSdkUiMessageStreamResponse } from '@openai/agents-extensions/ai-sdk-ui';
const agent = new Agent({
  name: 'Assistant',
  instructions: 'Reply with a short answer.',
});

export async function POST() {
  const stream = await run(agent, 'Hello there.', { stream: true });
  return createAiSdkUiMessageStreamResponse(stream);
}

Example Next.js route for text-only streaming:

Text stream response
import { Agent, run } from '@openai/agents';
import { createAiSdkTextStreamResponse } from '@openai/agents-extensions/ai-sdk-ui';
const agent = new Agent({
  name: 'Assistant',
  instructions: 'Reply with a short answer.',
});

export async function POST() {
  const stream = await run(agent, 'Hello there.', { stream: true });
  return createAiSdkTextStreamResponse(stream);
}

For end-to-end usage, see the examples/ai-sdk-ui app in this repository.