
Sandbox clients

Use this page to choose where sandbox work should run. In most cases, the SandboxAgent definition stays the same; only the sandbox client and its client-specific options change in the sandbox run option.

| Goal | Start with | Why |
| --- | --- | --- |
| Fastest local iteration on macOS or Linux | UnixLocalSandboxClient | No extra service dependency and a simple local filesystem workflow. |
| Basic container isolation | DockerSandboxClient | Runs work inside Docker with a specific image. |
| Hosted execution or production-style isolation | A hosted sandbox client | Moves the workspace boundary to a provider-managed environment. |

For most users, start with one of these two sandbox clients:

| Client | Install | Choose it when |
| --- | --- | --- |
| UnixLocalSandboxClient | none | Fastest local iteration on macOS or Linux. Good default for local development. |
| DockerSandboxClient | Docker CLI available locally | You want container isolation or a specific image for local parity. |

Unix-local is the easiest way to start developing against a local filesystem. Move to Docker or a hosted provider when you need stronger environment isolation or production-style parity.

To switch from Unix-local to Docker, keep the agent definition the same and change only the client:

Use Docker

```typescript
import { run } from '@openai/agents';
import { SandboxAgent } from '@openai/agents/sandbox';
import { DockerSandboxClient } from '@openai/agents/sandbox/local';

const agent = new SandboxAgent({
  name: 'Workspace reviewer',
  model: 'gpt-5.5',
  instructions: 'Inspect the sandbox workspace before answering.',
});

const result = await run(agent, 'Inspect the workspace.', {
  sandbox: {
    client: new DockerSandboxClient({ image: 'node:22-bookworm-slim' }),
  },
});

console.log(result.finalOutput);
```

The same agent can usually run with either local client:

Switch between local clients

```typescript
import {
  DockerSandboxClient,
  UnixLocalSandboxClient,
} from '@openai/agents/sandbox/local';

const client = process.env.USE_DOCKER
  ? new DockerSandboxClient({ image: 'node:22-bookworm-slim' })
  : new UnixLocalSandboxClient();
```

There are two lifecycle styles.

| Style | What you pass | Who closes the session | Use it when |
| --- | --- | --- | --- |
| SDK-owned | sandbox: { client } | The runner | The sandbox only needs to live for one run. |
| Developer-owned | sandbox: { session } | Your code | You need to inspect files afterward, reuse the same live session, or coordinate multiple runs. |

When you create a session yourself, close it yourself:

Own the sandbox session lifecycle

```typescript
import { run } from '@openai/agents';
import { Manifest, SandboxAgent } from '@openai/agents/sandbox';
import { UnixLocalSandboxClient } from '@openai/agents/sandbox/local';

const manifest = new Manifest();
const agent = new SandboxAgent({
  name: 'Workspace reviewer',
  model: 'gpt-5.5',
  instructions: 'Inspect the sandbox workspace before answering.',
});

const client = new UnixLocalSandboxClient();
const session = await client.create({ manifest });
try {
  await run(agent, 'First pass.', { sandbox: { session } });
  await run(agent, 'Follow-up pass.', { sandbox: { session } });
} finally {
  await session.close?.();
}
```

Sandbox state and conversation state are separate:

  • SDK conversation state lives in result.history, an SDK Session, conversationId, or previousResponseId.
  • Sandbox state lives in the live sandbox session, serialized sessionState, RunState sandbox payloads, or snapshots.

Use sessionState when you want to reconnect to the same backend session through a sandbox client. Use a snapshot when you want a fresh session seeded from saved workspace contents.

Serialize and resume sandbox state

```typescript
import { Manifest } from '@openai/agents/sandbox';
import { UnixLocalSandboxClient } from '@openai/agents/sandbox/local';

const manifest = new Manifest();
const client = new UnixLocalSandboxClient({
  snapshot: { type: 'local', baseDir: '/tmp/my-sandbox-snapshots' },
});

const session = await client.create({ manifest });
const state = await client.serializeSessionState?.(session.state);
await session.close?.();

if (state) {
  const restored = await client.resume?.(
    await client.deserializeSessionState!(state),
  );
  await restored?.close?.();
}
```

RunState can also preserve runner-managed sandbox state when you pause or resume a larger workflow. Use explicit sessionState when the sandbox lifecycle is managed outside a serialized run.

Manifest entries are prepared before the agent runs. You can tune materialization concurrency per run or per client create call:

Tune manifest materialization concurrency

```typescript
import { run } from '@openai/agents';
import { SandboxAgent } from '@openai/agents/sandbox';
import { UnixLocalSandboxClient } from '@openai/agents/sandbox/local';

const agent = new SandboxAgent({
  name: 'Repository inspector',
  model: 'gpt-5.5',
  instructions: 'Inspect the repository before answering.',
});

await run(agent, 'Inspect the repo.', {
  sandbox: {
    client: new UnixLocalSandboxClient(),
    concurrencyLimits: {
      manifestEntries: 4,
      localDirFiles: 16,
    },
  },
});
```

manifestEntries limits parallel top-level entry work. localDirFiles limits file copy concurrency inside localDir() entries.
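
The two limits compose like nested concurrency gates: one gate throttles top-level entries, and a second, independent gate throttles file copies inside each localDir() entry. Here is a self-contained sketch of that composition using a minimal counting semaphore; the names and structure are illustrative, not the SDK's implementation:

```typescript
// Minimal counting semaphore: at most `limit` tasks hold a slot at once.
class Semaphore {
  private active = 0;
  private readonly waiters: Array<() => void> = [];
  constructor(private readonly limit: number) {}

  private async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active += 1;
      return;
    }
    // Wait for a finishing task to hand its slot over.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  private release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // slot handed directly to a waiter; `active` is unchanged
    } else {
      this.active -= 1;
    }
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await task();
    } finally {
      this.release();
    }
  }
}

// Mirror of the two documented limits (values match the example above).
const entryLimit = new Semaphore(4); // manifestEntries
const fileLimit = new Semaphore(16); // localDirFiles

// Materializing one entry holds an entry slot for its whole duration,
// while each file copy inside it additionally takes a file slot.
export async function materializeEntry(files: string[]): Promise<number> {
  return entryLimit.run(async () => {
    await Promise.all(files.map((file) => fileLimit.run(async () => file)));
    return files.length;
  });
}
```

Because the gates nest, a single slow entry with many files can saturate localDirFiles without blocking other entries from starting, as long as manifestEntries slots remain.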

Mount entries describe what storage to expose; mount strategies describe how a sandbox backend attaches that storage. Import the built-in mount entries and generic strategies from @openai/agents/sandbox.

Common mount options:

  • mountPath: where the storage appears in the sandbox. Relative paths are resolved under the manifest root; absolute paths are used as-is.
  • readOnly: set this when the sandbox should not write back to the mounted storage.
  • mountStrategy: use a strategy that matches both the mount entry and the sandbox backend.
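
The mountPath resolution rule reduces to plain path logic. The sketch below mirrors the documented behavior with standard library calls only; it is an illustration, not the SDK's resolver:

```typescript
import * as path from 'node:path';

// Relative mount paths resolve under the manifest root; absolute paths
// are used as-is (the documented mountPath rule).
export function resolveMountPath(manifestRoot: string, mountPath: string): string {
  return path.isAbsolute(mountPath)
    ? mountPath
    : path.join(manifestRoot, mountPath);
}
```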

Mounts are treated as ephemeral workspace entries. Snapshot and persistence flows detach or skip mounted paths instead of copying mounted remote storage into the saved workspace.
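
In effect, that rule filters mounted paths out of whatever a snapshot walk would otherwise save. A minimal illustration of the filtering, not the SDK's implementation:

```typescript
// Keep only workspace paths that are neither a mounted path nor under one;
// mounted remote storage is never copied into a saved workspace.
export function snapshotCandidates(
  workspacePaths: string[],
  mountPaths: string[],
): string[] {
  return workspacePaths.filter(
    (p) => !mountPaths.some((m) => p === m || p.startsWith(m + '/')),
  );
}
```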

Generic local/container strategies:

| Strategy or pattern | Use it when | Notes |
| --- | --- | --- |
| inContainerMountStrategy(...) | The sandbox image can run a mount command such as rclone, mount-s3, or blobfuse2. | Available as a generic strategy; support depends on the backend. |
| dockerVolumeMountStrategy(...) | Docker should attach a volume-driver-backed mount before the container starts. | Docker-only. |
| localBindMountStrategy() | A local backend should bind an absolute local path into the workspace. | Supported by local workspace materialization where allowed. |

Backend support is intentionally explicit:

| Backend | Mount notes |
| --- | --- |
| UnixLocalSandboxClient | Supports local bind-style mounts through the local workspace model. |
| DockerSandboxClient | Supports local bind mounts and Docker volume-style strategies where Docker can attach the storage. |
| Hosted providers | Provider-specific strategies live with each provider implementation. Check that provider’s docs for supported mounts and required setup. |

Do not assume a mount entry works on every backend. If a client cannot enforce manifest metadata, identity, or mount behavior, it should fail early instead of silently ignoring that part of the manifest.
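
A sketch of what failing early looks like on the client side; the types and names here are illustrative, not the SDK's:

```typescript
interface MountRequest {
  mountPath: string;
  strategy: string;
}

// A client that cannot honor a requested mount strategy should throw at
// session-creation time rather than silently dropping the mount.
export function assertMountsSupported(
  supportedStrategies: Set<string>,
  mounts: MountRequest[],
): void {
  for (const mount of mounts) {
    if (!supportedStrategies.has(mount.strategy)) {
      throw new Error(
        `Mount at ${mount.mountPath} uses unsupported strategy ${mount.strategy}`,
      );
    }
  }
}
```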

When you need a hosted environment, the same SandboxAgent definition usually carries over and only the sandbox client changes in the sandbox run option.

Hosted provider implementations are available from @openai/agents-extensions provider subpaths. Check the provider’s docs for exact environment variables, runnable examples, port behavior, PTY support, snapshot behavior, and cleanup behavior.

Install @openai/agents-extensions and satisfy its package-level peers. Each provider may also require a provider SDK package or backend setup:

| Client | Import path | Provider requirement |
| --- | --- | --- |
| BlaxelSandboxClient | @openai/agents-extensions/sandbox/blaxel | npm peer: @blaxel/core |
| CloudflareSandboxClient | @openai/agents-extensions/sandbox/cloudflare | Cloudflare Sandbox bridge Worker URL and Worker auth |
| DaytonaSandboxClient | @openai/agents-extensions/sandbox/daytona | npm peer: @daytonaio/sdk |
| E2BSandboxClient | @openai/agents-extensions/sandbox/e2b | npm peer: e2b or @e2b/code-interpreter |
| ModalSandboxClient | @openai/agents-extensions/sandbox/modal | npm peer: modal |
| RunloopSandboxClient | @openai/agents-extensions/sandbox/runloop | npm peer: @runloop/api-client |
| VercelSandboxClient | @openai/agents-extensions/sandbox/vercel | npm peer: @vercel/sandbox |

CloudflareSandboxClient does not import a Cloudflare npm SDK. It talks to a deployed Cloudflare Sandbox bridge Worker over HTTP instead.

Hosted sandbox clients expose provider-specific mount strategies. Choose the backend and mount strategy that best fit your storage provider:

| Backend | Mount notes |
| --- | --- |
| Docker | Supports s3Mount(), gcsMount(), r2Mount(), azureBlobMount(), boxMount(), and s3FilesMount() with local strategies such as inContainerMountStrategy() and dockerVolumeMountStrategy(). |
| ModalSandboxClient | Supports cloud bucket mounts with ModalCloudBucketMountStrategy on S3, R2, and HMAC-authenticated GCS mount entries. |
| CloudflareSandboxClient | Supports Cloudflare bucket mounts with CloudflareBucketMountStrategy on S3, R2, and HMAC-authenticated GCS mount entries. |
| BlaxelSandboxClient | Supports cloud bucket mounts with BlaxelCloudBucketMountStrategy on S3, R2, and GCS mount entries. It also supports persistent Blaxel Drives with BlaxelDriveMount and BlaxelDriveMountStrategy. |
| DaytonaSandboxClient | Supports rclone-backed mounts with DaytonaCloudBucketMountStrategy on S3, GCS, R2, Azure Blob, and Box mount entries. |
| E2BSandboxClient | Supports rclone-backed mounts with E2BCloudBucketMountStrategy on S3, GCS, R2, Azure Blob, and Box mount entries. |
| RunloopSandboxClient | Supports rclone-backed mounts with RunloopCloudBucketMountStrategy on S3, GCS, R2, Azure Blob, and Box mount entries. |
| VercelSandboxClient | No hosted-specific mount strategy is currently exposed. Use manifest files, repos, snapshots, or other workspace inputs instead. |

The table below summarizes which remote storage entries each backend can mount directly:

| Backend | AWS S3 | Cloudflare R2 | GCS | Azure Blob Storage | Box | S3 Files |
| --- | --- | --- | --- | --- | --- | --- |
| Docker | yes | yes | yes | yes | yes | yes |
| ModalSandboxClient | yes | yes | yes | no | no | no |
| CloudflareSandboxClient | yes | yes | yes | no | no | no |
| BlaxelSandboxClient | yes | yes | yes | no | no | no |
| DaytonaSandboxClient | yes | yes | yes | yes | yes | no |
| E2BSandboxClient | yes | yes | yes | yes | yes | no |
| RunloopSandboxClient | yes | yes | yes | yes | yes | no |
| VercelSandboxClient | no | no | no | no | no | no |

Sandbox clients can expose endpoints through resolveExposedPort(port) when the backend supports it.

| Client | Behavior |
| --- | --- |
| UnixLocalSandboxClient | Resolves configured ports to 127.0.0.1. |
| DockerSandboxClient | Publishes configured container ports and resolves their host endpoints. |

Declare the ports in the client options when you need a backend to enforce an allowlist:

Expose a port

```typescript
import { DockerSandboxClient } from '@openai/agents/sandbox/local';

const client = new DockerSandboxClient({
  image: 'node:22-bookworm-slim',
  exposedPorts: [3000],
});
```
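
Declaring exposedPorts turns port exposure into an allowlist decision. Conceptually the check a backend enforces reduces to the following; this is an illustration of the semantics, not SDK code:

```typescript
// With no allowlist declared there is nothing to enforce; otherwise only
// the declared ports may be exposed.
export function isPortAllowed(
  exposedPorts: number[] | undefined,
  port: number,
): boolean {
  return exposedPorts === undefined || exposedPorts.includes(port);
}
```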

Capability support for the two local clients:

| Capability | Unix-local | Docker |
| --- | --- | --- |
| exec_command | Supported | Supported |
| PTY write_stdin | Supported | Supported |
| apply_patch | Supported | Supported through workspace file APIs |
| view_image | Supported | Supported through workspace file APIs |
| runAs for commands | Supported when the host can resolve and switch to the user | Limited by container/user setup |
| Local snapshots | Supported | Supported |
| Local/Docker mounts | Local bind-style support | Bind and Docker volume-style support |

Local PTY support uses a small Python 3 bridge in the SDK process. The bridge is only used for tty: true sessions, where Node.js does not provide a built-in PTY API and the SDK needs standard POSIX PTY behavior for interactive stdin, signal handling, and exit status reporting. Install python3 in the environment that runs your SDK code, or set OPENAI_AGENTS_PYTHON to a Python 3 executable. This is separate from the Python version, if any, installed inside a Docker sandbox image.
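
OPENAI_AGENTS_PYTHON can point at any Python 3 executable. For example, to pin the interpreter currently on PATH for the process that runs your SDK code:

```shell
# Pin the PTY bridge to the python3 currently on PATH.
# Only needed when the SDK process cannot find python3 on its own.
export OPENAI_AGENTS_PYTHON="$(command -v python3)"
```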

Hosted provider support varies by provider. Check the provider-specific docs for exact options, environment variables, port behavior, PTY support, snapshot behavior, and cleanup behavior.