
Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. From the MCP docs:

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

There are three types of MCP servers this SDK supports:

  1. Hosted MCP server tools – remote MCP servers used as tools by the OpenAI Responses API
  2. Streamable HTTP MCP servers – local or remote servers that implement the Streamable HTTP transport
  3. Stdio MCP servers – servers accessed via standard input/output (the simplest option)

Note: The SDK also includes MCPServerSSE for legacy Server‑Sent Events transports, but SSE has been deprecated by the MCP project. Prefer Streamable HTTP or stdio for new integrations.

Choose a server type based on your use‑case:

| What you need | Recommended option |
| --- | --- |
| Call publicly accessible remote servers with default OpenAI Responses models | 1. Hosted MCP tools |
| Use publicly accessible remote servers but trigger the tool calls locally | 2. Streamable HTTP |
| Use locally running Streamable HTTP servers | 2. Streamable HTTP |
| Use any Streamable HTTP server with non-OpenAI-Responses models | 2. Streamable HTTP |
| Work with local MCP servers that only support the standard-I/O protocol | 3. Stdio |

Hosted MCP server tools

Hosted tools push the entire round‑trip into the model. Instead of your code calling an MCP server, the OpenAI Responses API invokes the remote tool endpoint and streams the result back to the model.

Here is the simplest example of using hosted MCP tools. Pass the remote MCP server’s label and URL to the hostedMcpTool utility function, which creates a hosted MCP server tool.

hostedAgent.ts
import { Agent, hostedMcpTool } from '@openai/agents';

export const agent = new Agent({
  name: 'MCP Assistant',
  instructions: 'You must always use the MCP tools to answer questions.',
  tools: [
    hostedMcpTool({
      serverLabel: 'gitmcp',
      serverUrl: 'https://gitmcp.io/openai/codex',
    }),
  ],
});

Then, you can run the Agent with the run function (or your own customized Runner instance’s run method):

Run with hosted MCP tools
import { run } from '@openai/agents';
import { agent } from './hostedAgent';

async function main() {
  const result = await run(
    agent,
    'Which language is the repo I pointed to in the MCP tool settings written in?',
  );
  console.log(result.finalOutput);
}

main().catch(console.error);

To stream incremental MCP results, pass stream: true when you run the Agent:

Run with hosted MCP tools (streaming)
import { run } from '@openai/agents';
import { agent } from './hostedAgent';

async function main() {
  const result = await run(
    agent,
    'Which language is the repo I pointed to in the MCP tool settings written in?',
    { stream: true },
  );
  for await (const event of result) {
    if (
      event.type === 'raw_model_stream_event' &&
      event.data.type === 'model' &&
      event.data.event.type !== 'response.mcp_call_arguments.delta' &&
      event.data.event.type !== 'response.output_text.delta'
    ) {
      console.log(`Got event of type ${JSON.stringify(event.data)}`);
    }
  }
  console.log(`Done streaming; final result: ${result.finalOutput}`);
}

main().catch(console.error);

For sensitive operations you can require human approval of individual tool calls. Pass either requireApproval: 'always' or a fine‑grained object that lists tool names under 'never' and 'always'.

If you can programmatically determine whether a tool call is safe, use the onApproval callback to approve or reject it. If you require human approval, use the same human-in-the-loop (HITL) interruptions approach as for local function tools.

Human in the loop with hosted MCP tools
import { Agent, run, hostedMcpTool, RunToolApprovalItem } from '@openai/agents';
import { stdin, stdout } from 'node:process';
import * as readline from 'node:readline/promises';

async function main(): Promise<void> {
  const agent = new Agent({
    name: 'MCP Assistant',
    instructions: 'You must always use the MCP tools to answer questions.',
    tools: [
      hostedMcpTool({
        serverLabel: 'gitmcp',
        serverUrl: 'https://gitmcp.io/openai/codex',
        // 'always' | 'never' | { never, always }
        requireApproval: {
          never: {
            toolNames: ['search_codex_code', 'fetch_codex_documentation'],
          },
          always: {
            toolNames: ['fetch_generic_url_content'],
          },
        },
      }),
    ],
  });

  let result = await run(agent, 'Which language is this repo written in?');
  while (result.interruptions && result.interruptions.length) {
    for (const interruption of result.interruptions) {
      // Human in the loop here
      const approval = await confirm(interruption);
      if (approval) {
        result.state.approve(interruption);
      } else {
        result.state.reject(interruption);
      }
    }
    result = await run(agent, result.state);
  }
  console.log(result.finalOutput);
}

async function confirm(item: RunToolApprovalItem): Promise<boolean> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const name = item.name;
  const params = item.arguments;
  const answer = await rl.question(
    `Approve running tool (mcp: ${name}, params: ${params})? (y/n) `,
  );
  rl.close();
  return answer.toLowerCase().trim() === 'y';
}

main().catch(console.error);
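If the approval decision can be automated, the onApproval callback avoids the interruption loop entirely. The sketch below is illustrative: the allowlist and decision logic are made up for this example, and the callback shape follows the `(context, item) => Promise<{ approve, reason? }>` signature from the options table.

import { Agent, hostedMcpTool } from '@openai/agents';

// Hypothetical allowlist used only for this sketch.
const SAFE_TOOLS = new Set(['search_codex_code', 'fetch_codex_documentation']);

export const autoApprovingAgent = new Agent({
  name: 'MCP Assistant',
  instructions: 'You must always use the MCP tools to answer questions.',
  tools: [
    hostedMcpTool({
      serverLabel: 'gitmcp',
      serverUrl: 'https://gitmcp.io/openai/codex',
      requireApproval: 'always',
      // Approve automatically when the tool is on the allowlist;
      // otherwise reject with a reason.
      onApproval: async (_context, item) =>
        SAFE_TOOLS.has(item.name)
          ? { approve: true }
          : { approve: false, reason: 'Tool not on the allowlist' },
    }),
  ],
});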

hostedMcpTool(...) supports both MCP server URLs and connector-backed servers:

| Option | Type | Notes |
| --- | --- | --- |
| serverLabel | string | Required label that identifies the hosted MCP server in events and traces. |
| serverUrl | string | Remote MCP server URL (use this for regular hosted MCP servers). |
| connectorId | string | OpenAI connector id (use this instead of serverUrl for connector-backed hosted servers). |
| authorization | string | Optional authorization token sent to the hosted MCP backend. |
| headers | Record<string, string> | Optional extra request headers. |
| allowedTools | string[] \| object | Allowlist of tool names exposed to the model. Pass string[] or { toolNames?: string[] }. |
| requireApproval | 'never' \| 'always' \| object | Approval policy for hosted MCP tool calls. Use the object form for per-tool overrides. Defaults to 'never'. |
| onApproval | (context, item) => Promise<{ approve: boolean; reason?: string }> | Optional callback for programmatic approval/rejection when requireApproval requires approval handling. |

requireApproval object form:

{
  always?: { toolNames: string[] };
  never?: { toolNames: string[] };
}
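Combining these options, a tool that exposes only an allowlist of tools and sends extra headers might look like the following sketch. The server label, URL, tool names, and header are placeholders, not a real server.

import { hostedMcpTool } from '@openai/agents';

// Sketch only: 'internal_docs' and its URL are hypothetical.
export const docsTool = hostedMcpTool({
  serverLabel: 'internal_docs',
  serverUrl: 'https://example.com/mcp',
  // Expose only these tools to the model.
  allowedTools: ['search_docs', 'read_page'],
  // Extra headers sent with each request to the hosted MCP backend.
  headers: { 'X-Client': 'agents-sdk-example' },
});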

Hosted MCP also supports OpenAI connectors. Instead of providing a serverUrl, pass the connector’s connectorId and an authorization token. The Responses API then handles authentication and exposes the connector’s tools through the hosted MCP interface.

Connector-backed hosted MCP tool
import { Agent, hostedMcpTool } from '@openai/agents';

const authorization = process.env.GOOGLE_CALENDAR_AUTHORIZATION!;

export const connectorAgent = new Agent({
  name: 'Calendar Assistant',
  instructions:
    "You are a helpful assistant that can answer questions about the user's calendar.",
  tools: [
    hostedMcpTool({
      serverLabel: 'google_calendar',
      connectorId: 'connector_googlecalendar',
      authorization,
      requireApproval: 'never',
    }),
  ],
});

In this example the GOOGLE_CALENDAR_AUTHORIZATION environment variable holds an OAuth token obtained from the Google OAuth Playground, which authorizes the connector-backed server to call the Calendar API. For a runnable sample that also demonstrates streaming, see examples/connectors.

Fully working samples (hosted tools, Streamable HTTP, and stdio, plus streaming, HITL, and onApproval) are available under examples/mcp in our GitHub repository.

Streamable HTTP MCP servers

When your Agent talks directly to a Streamable HTTP MCP server, local or remote, instantiate MCPServerStreamableHttp with the server URL, name, and any optional settings:

Run with Streamable HTTP MCP servers
import { Agent, run, MCPServerStreamableHttp } from '@openai/agents';

async function main() {
  const mcpServer = new MCPServerStreamableHttp({
    url: 'https://gitmcp.io/openai/codex',
    name: 'GitMCP Documentation Server',
  });
  const agent = new Agent({
    name: 'GitMCP Assistant',
    instructions: 'Use the tools to respond to user requests.',
    mcpServers: [mcpServer],
  });
  try {
    await mcpServer.connect();
    const result = await run(agent, 'Which language is this repo written in?');
    console.log(result.finalOutput);
  } finally {
    await mcpServer.close();
  }
}

main().catch(console.error);

Constructor options:

| Option | Type | Notes |
| --- | --- | --- |
| url | string | Streamable HTTP server URL. |
| name | string | Optional label for the server. |
| cacheToolsList | boolean | Cache the tools list to reduce latency. |
| clientSessionTimeoutSeconds | number | Timeout for MCP client sessions. |
| toolFilter | MCPToolFilterCallable \| MCPToolFilterStatic | Filter available tools. |
| toolMetaResolver | MCPToolMetaResolver | Inject per-call MCP _meta request fields. |
| errorFunction | MCPToolErrorFunction \| null | Map MCP call failures to model-visible text. |
| timeout | number | Per-request timeout (milliseconds). |
| logger | Logger | Custom logger. |
| authProvider | OAuthClientProvider | OAuth provider from the MCP TypeScript SDK. |
| requestInit | RequestInit | Fetch init options for requests. |
| fetch | FetchLike | Custom fetch implementation. |
| reconnectionOptions | StreamableHTTPReconnectionOptions | Reconnection tuning options. |
| sessionId | string | Explicit session id for MCP connections. |

The constructor also accepts additional MCP TypeScript SDK options such as authProvider, requestInit, fetch, reconnectionOptions, and sessionId. See the MCP TypeScript SDK repository and its documentation for details.

Stdio MCP servers

For servers that expose only standard I/O, instantiate MCPServerStdio with a fullCommand:

Run with Stdio MCP servers
import { Agent, run, MCPServerStdio } from '@openai/agents';
import * as path from 'node:path';

async function main() {
  const samplesDir = path.join(__dirname, 'sample_files');
  const mcpServer = new MCPServerStdio({
    name: 'Filesystem MCP Server, via npx',
    fullCommand: `npx -y @modelcontextprotocol/server-filesystem ${samplesDir}`,
  });
  await mcpServer.connect();
  try {
    const agent = new Agent({
      name: 'FS MCP Assistant',
      instructions:
        'Use the tools to read the filesystem and answer questions based on those files. If you are unable to find any files, you can say so instead of assuming they exist.',
      mcpServers: [mcpServer],
    });
    const result = await run(agent, 'Read the files and list them.');
    console.log(result.finalOutput);
  } finally {
    await mcpServer.close();
  }
}

main().catch(console.error);

Constructor options:

| Option | Type | Notes |
| --- | --- | --- |
| command / args | string / string[] | Command and arguments for stdio servers. |
| fullCommand | string | Full command string, an alternative to command + args. |
| env | Record<string, string> | Environment variables for the server process. |
| cwd | string | Working directory for the server process. |
| cacheToolsList | boolean | Cache the tools list to reduce latency. |
| clientSessionTimeoutSeconds | number | Timeout for MCP client sessions. |
| name | string | Optional label for the server. |
| encoding | string | Encoding for stdio streams. |
| encodingErrorHandler | 'strict' \| 'ignore' \| 'replace' | Encoding error strategy. |
| toolFilter | MCPToolFilterCallable \| MCPToolFilterStatic | Filter available tools. |
| toolMetaResolver | MCPToolMetaResolver | Inject per-call MCP _meta request fields. |
| errorFunction | MCPToolErrorFunction \| null | Map MCP call failures to model-visible text. |
| timeout | number | Per-request timeout (milliseconds). |
| logger | Logger | Custom logger. |

When you work with multiple MCP servers, you can use connectMcpServers to connect them together, track failures, and close them in one place. The helper returns an MCPServers instance with active, failed, and errors collections so you can pass only healthy servers to your agent.

Manage multiple MCP servers
import {
  Agent,
  MCPServerStreamableHttp,
  connectMcpServers,
  run,
} from '@openai/agents';

async function main() {
  const servers = [
    new MCPServerStreamableHttp({
      url: 'https://mcp.deepwiki.com/mcp',
      name: 'DeepWiki MCP Server',
    }),
    new MCPServerStreamableHttp({
      url: 'http://localhost:8001/mcp',
      name: 'Local MCP Server',
    }),
  ];
  const mcpServers = await connectMcpServers(servers, {
    connectInParallel: true,
  });
  try {
    console.log(`Active servers: ${mcpServers.active.length}`);
    console.log(`Failed servers: ${mcpServers.failed.length}`);
    for (const [server, error] of mcpServers.errors) {
      console.warn(`${server.name} failed to connect: ${error.message}`);
    }
    const agent = new Agent({
      name: 'MCP lifecycle agent',
      instructions: 'Use MCP tools to answer user questions.',
      mcpServers: mcpServers.active,
    });
    const result = await run(
      agent,
      'Which language is the openai/codex repository written in?',
    );
    console.log(result.finalOutput);
  } finally {
    await mcpServers.close();
  }
}

main().catch(console.error);

Use cases:

  • Multiple servers at once: connect everything in parallel and use mcpServers.active for the agent.
  • Partial failure handling: inspect failed + errors and decide whether to continue or retry.
  • Retry failed servers: call mcpServers.reconnect() (defaults to retrying failed servers only).

If you want a strict “all or nothing” connection or different timeouts, use connectMcpServers(servers, options) and tune the options for your environment.

connectMcpServers options:

| Option | Type | Default | Notes |
| --- | --- | --- | --- |
| connectTimeoutMs | number \| null | 10000 | Timeout for each server connect(). Use null to disable. |
| closeTimeoutMs | number \| null | 10000 | Timeout for each server close(). Use null to disable. |
| dropFailed | boolean | true | Exclude failed servers from active. |
| strict | boolean | false | Throw if any server fails to connect. |
| suppressAbortError | boolean | true | Ignore abort-like errors while still tracking failed servers. |
| connectInParallel | boolean | false | Connect all servers concurrently instead of sequentially. |
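For instance, a strict, all-or-nothing startup with a shorter per-server timeout could look like this sketch (the server URL is reused from the example above; the 5-second timeout is an arbitrary choice):

import { MCPServerStreamableHttp, connectMcpServers } from '@openai/agents';

async function main() {
  const servers = [
    new MCPServerStreamableHttp({
      url: 'https://mcp.deepwiki.com/mcp',
      name: 'DeepWiki MCP Server',
    }),
  ];
  // strict: true makes connectMcpServers throw if any server fails,
  // instead of recording it in `failed`/`errors`.
  const mcpServers = await connectMcpServers(servers, {
    strict: true,
    connectTimeoutMs: 5_000, // fail fast rather than waiting the default 10s
    connectInParallel: true,
  });
  try {
    console.log(`Connected ${mcpServers.active.length} server(s)`);
  } finally {
    await mcpServers.close();
  }
}

main().catch(console.error);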

mcpServers.reconnect(options) supports:

| Option | Type | Default | Notes |
| --- | --- | --- | --- |
| failedOnly | boolean | true | Retry only failed servers (true) or reconnect all servers (false). |

If your runtime supports Symbol.asyncDispose, MCPServers also supports the await using pattern. In TypeScript, enable esnext.disposable in tsconfig.json:

{
  "compilerOptions": {
    "lib": ["ES2018", "DOM", "esnext.disposable"]
  }
}

Then you can write:

await using mcpServers = await connectMcpServers(servers);

Caching tool lists

For Streamable HTTP and Stdio servers, each time an Agent runs it may call list_tools() to discover available tools. Because that round‑trip can add latency—especially to remote servers—you can cache the results in memory by passing cacheToolsList: true to MCPServerStdio or MCPServerStreamableHttp.

Only enable this if you’re confident the tool list won’t change. To invalidate the cache later, call invalidateToolsCache() on the server instance. If you are using shared MCP tool caching via getAllMcpTools(...), you can also invalidate by server name with invalidateServerToolsCache(serverName).
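As a sketch, enabling the cache on a single server and later invalidating it looks like this:

import { MCPServerStreamableHttp } from '@openai/agents';

const mcpServer = new MCPServerStreamableHttp({
  url: 'https://gitmcp.io/openai/codex',
  name: 'GitMCP Documentation Server',
  // Skip repeated list_tools() round-trips; only safe if the
  // server's tool list is stable.
  cacheToolsList: true,
});

// If the server's tools change later, drop the cached list so the
// next run fetches a fresh one.
mcpServer.invalidateToolsCache();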

For advanced cases, getAllMcpTools({ generateMCPToolCacheKey }) lets you customize cache partitioning (for example, by server + agent + run context).
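The exact callback signature is not documented here, so as an illustration only, a key function that partitions the cache by server and agent name might look like the following. The input shape is an assumption for this sketch, not the real getAllMcpTools API.

// Hypothetical input shape for the cache-key callback; the real
// signature comes from getAllMcpTools in @openai/agents.
interface CacheKeyInput {
  serverName: string;
  agentName: string;
}

// Partition the tools cache by server + agent so two agents with
// different tool filters never share cached entries.
export function generateMCPToolCacheKey({
  serverName,
  agentName,
}: CacheKeyInput): string {
  return `${serverName}::${agentName}`;
}

console.log(generateMCPToolCacheKey({ serverName: 'gitmcp', agentName: 'MCP Assistant' }));
// → gitmcp::MCP Assistant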

Tool filtering

You can restrict which tools are exposed from each server by passing either a static filter via createMCPToolStaticFilter or a custom function. Here’s a combined example showing both approaches:

Tool filtering
import {
  MCPServerStdio,
  MCPServerStreamableHttp,
  createMCPToolStaticFilter,
  MCPToolFilterContext,
} from '@openai/agents';

interface ToolFilterContext {
  allowAll: boolean;
}

const server = new MCPServerStdio({
  fullCommand: 'my-server',
  toolFilter: createMCPToolStaticFilter({
    allowed: ['safe_tool'],
    blocked: ['danger_tool'],
  }),
});

const dynamicServer = new MCPServerStreamableHttp({
  url: 'http://localhost:3000',
  toolFilter: async ({ runContext }: MCPToolFilterContext, tool) =>
    (runContext.context as ToolFilterContext).allowAll || tool.name !== 'admin',
});