Transport mechanisms
WebRTC connection
The default transport layer uses WebRTC. Audio is recorded from the microphone and played back automatically. To use your own media stream or audio element, provide an OpenAIRealtimeWebRTC instance when you create the session.
import { RealtimeAgent, RealtimeSession, OpenAIRealtimeWebRTC } from '@openai/agents/realtime';

const agent = new RealtimeAgent({
  name: 'Greeter',
  instructions: 'Greet the user with cheer and answer questions.',
});

async function main() {
  const transport = new OpenAIRealtimeWebRTC({
    mediaStream: await navigator.mediaDevices.getUserMedia({ audio: true }),
    audioElement: document.createElement('audio'),
  });

  const customSession = new RealtimeSession(agent, { transport });
}
WebSocket connection
Pass transport: 'websocket' or an OpenAIRealtimeWebSocket instance when creating the session to use a WebSocket connection instead of WebRTC. This is well suited to server-side use cases, for example building a phone agent with Twilio.
import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';

const agent = new RealtimeAgent({
  name: 'Greeter',
  instructions: 'Greet the user with cheer and answer questions.',
});

const myRecordedArrayBuffer = new ArrayBuffer(0);

const wsSession = new RealtimeSession(agent, {
  transport: 'websocket',
  model: 'gpt-realtime',
});
await wsSession.connect({ apiKey: process.env.OPENAI_API_KEY! });

wsSession.on('audio', (event) => {
  // event.data is a chunk of PCM16 audio
});

wsSession.sendAudio(myRecordedArrayBuffer);
You can use any recording/playback library to handle the raw PCM16 audio bytes.
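For example, to play the audio through the Web Audio API or a similar library, a small conversion step is usually all you need. The sketch below is not part of the SDK; the helper name is ours, and it assumes the PCM16 chunk (such as event.data from the 'audio' event above) arrives as an ArrayBuffer:

// Minimal sketch (not an SDK API): convert a PCM16 chunk into Float32 samples
// that the Web Audio API or another playback library can consume.
function pcm16ToFloat32(chunk: ArrayBuffer): Float32Array {
  const pcm = new Int16Array(chunk);
  const samples = new Float32Array(pcm.length);
  for (let i = 0; i < pcm.length; i++) {
    // PCM16 samples range from -32768 to 32767; normalize to [-1, 1].
    samples[i] = pcm[i] / 32768;
  }
  return samples;
}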
Notes for Cloudflare Workers (workerd)
Cloudflare Workers and other workerd runtimes cannot open outbound WebSockets with the global WebSocket constructor. Use the Cloudflare transport from the extensions package, which performs a fetch()-based upgrade internally.
import { CloudflareRealtimeTransportLayer } from '@openai/agents-extensions';
import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';

const agent = new RealtimeAgent({
  name: 'My Agent',
});

// Create a transport that connects to OpenAI Realtime via Cloudflare/workerd's fetch-based upgrade.
const cfTransport = new CloudflareRealtimeTransportLayer({
  url: 'wss://api.openai.com/v1/realtime?model=gpt-realtime',
});

const session = new RealtimeSession(agent, {
  // Set your own transport.
  transport: cfTransport,
});
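Connecting then works the same as with the other transports. A minimal sketch, assuming your Worker uses the modules syntax and exposes the API key as an OPENAI_API_KEY binding (both the handler shape and the binding name are assumptions, not something the SDK prescribes):

// Minimal sketch: connect the session created above from a Worker fetch handler.
// The OPENAI_API_KEY binding name is an assumption.
interface Env {
  OPENAI_API_KEY: string;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    await session.connect({ apiKey: env.OPENAI_API_KEY });
    return new Response('Realtime session connected');
  },
};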
Custom transport mechanisms
If you want to use a different speech-to-speech API, or need a custom transport mechanism, you can build your own transport layer by implementing the RealtimeTransportLayer interface and emitting RealtimeTransportEventTypes events.
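A custom transport plugs into a session through the same transport option shown above. A minimal sketch, where MyCustomTransport is a hypothetical class of yours that implements RealtimeTransportLayer:

import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';
// Hypothetical module: your own class implementing the RealtimeTransportLayer interface.
import { MyCustomTransport } from './my-custom-transport';

const agent = new RealtimeAgent({ name: 'Greeter' });
// Pass the custom transport instance exactly like the built-in ones.
const session = new RealtimeSession(agent, { transport: new MyCustomTransport() });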
More direct interaction with the Realtime API
If you want to use the OpenAI Realtime API but also have more direct access to it, you have two options:
Option 1 - Accessing the transport layer
If you still want to benefit from the full capabilities of RealtimeSession, you can access the transport layer through session.transport. The transport layer emits every event it receives under the * event, and you can send raw events with the sendEvent() method.
import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';

const agent = new RealtimeAgent({
  name: 'Greeter',
  instructions: 'Greet the user with cheer and answer questions.',
});

const session = new RealtimeSession(agent, {
  model: 'gpt-realtime',
});

session.transport.on('*', (event) => {
  // JSON parsed version of the event received on the connection
});

// Send any valid event as JSON. For example triggering a new response
session.transport.sendEvent({
  type: 'response.create',
  // ...
});
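Because every raw server event arrives under *, you can branch on event.type to react to specific Realtime API events. A small sketch, assuming the standard response.done server event name:

session.transport.on('*', (event) => {
  // Branch on the raw Realtime API event type ('response.done' is assumed here).
  if (event.type === 'response.done') {
    // Inspect the completed response payload.
  }
});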
Option 2 - Using only the transport layer
If you don't need automatic tool execution, guardrails, and so on, you can also use the transport layer as a "thin" client that only manages the connection and interruptions.
import { OpenAIRealtimeWebRTC } from '@openai/agents/realtime';

const client = new OpenAIRealtimeWebRTC();
const audioBuffer = new ArrayBuffer(0);

await client.connect({
  apiKey: '<api key>',
  model: 'gpt-4o-mini-realtime-preview',
  initialSessionConfig: {
    instructions: 'Speak like a pirate',
    voice: 'ash',
    modalities: ['text', 'audio'],
    inputAudioFormat: 'pcm16',
    outputAudioFormat: 'pcm16',
  },
});

// optionally for WebSockets
client.on('audio', (newAudio) => {});

client.sendAudio(audioBuffer);