Configuration
This page covers SDK-wide defaults that you usually set once during application startup, such as the default OpenAI key or client, the default OpenAI API shape, tracing export defaults, and logging behavior.
If you need to configure a specific agent or run instead, start with:
- Running agents for RunConfig, sessions, and conversation-state options.
- Models for model selection and provider configuration.
- Tracing for per-run tracing metadata and custom trace processors.
API keys and clients
By default, the SDK uses the OPENAI_API_KEY environment variable for LLM requests and tracing. The key is resolved when the SDK first creates an OpenAI client (lazy initialization), so set the environment variable before your first model call. If you are unable to set that environment variable before your app starts, you can use the set_default_openai_key() function to set the key.
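A minimal example (the key string is a placeholder for your real key):

```python
from agents import set_default_openai_key

# Call this before the SDK makes its first model call,
# since the client is created lazily on first use.
set_default_openai_key("sk-...")
```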
Alternatively, you can configure the OpenAI client that the SDK uses. By default, the SDK creates an AsyncOpenAI instance, using the API key from the environment variable or the default key set above. You can change this with the set_default_openai_client() function.
from openai import AsyncOpenAI
from agents import set_default_openai_client
custom_client = AsyncOpenAI(base_url="...", api_key="...")
set_default_openai_client(custom_client)
Finally, you can customize which OpenAI API is used. By default, the SDK uses the OpenAI Responses API. You can override this to use the Chat Completions API with the set_default_openai_api() function.
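For example, to switch the default API shape:

```python
from agents import set_default_openai_api

# Use the Chat Completions API instead of the default Responses API.
set_default_openai_api("chat_completions")
```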
Tracing
Tracing is enabled by default. It uses the same OpenAI API key as your model requests from the section above (that is, the environment variable or the default key you set). To use a separate API key for tracing, call the set_tracing_export_api_key() function.
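For example (the key string is a placeholder):

```python
from agents import set_tracing_export_api_key

# Traces will be exported with this key instead of the model-request key.
set_tracing_export_api_key("sk-...")
```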
If you need to attribute traces to a specific organization or project when using the default exporter, set these environment variables before your app starts:
You can also set a tracing API key per run without changing the global exporter.
from agents import Runner, RunConfig
await Runner.run(
    agent,
    input="Hello",
    run_config=RunConfig(tracing={"api_key": "sk-tracing-123"}),
)
You can also disable tracing entirely by using the set_tracing_disabled() function.
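For example:

```python
from agents import set_tracing_disabled

# Turn off trace export globally for this process.
set_tracing_disabled(True)
```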
If you want to keep tracing enabled but exclude potentially sensitive inputs/outputs from trace payloads, set RunConfig.trace_include_sensitive_data to False:
from agents import Runner, RunConfig
await Runner.run(
    agent,
    input="Hello",
    run_config=RunConfig(trace_include_sensitive_data=False),
)
You can also change the default without code by setting this environment variable before your app starts:
For full tracing controls, see the tracing guide.
Debug logging
The SDK defines two Python loggers (openai.agents and openai.agents.tracing) and does not attach handlers by default. Logs follow your application's Python logging configuration.
To enable verbose logging, use the enable_verbose_stdout_logging() function.
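For example:

```python
from agents import enable_verbose_stdout_logging

# Sets the SDK loggers to debug level and attaches a stdout handler.
enable_verbose_stdout_logging()
```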
Alternatively, you can customize the logs by adding handlers, filters, formatters, etc. You can read more in the Python logging guide.
import logging
logger = logging.getLogger("openai.agents") # or openai.agents.tracing for the Tracing logger
# To make all logs show up
logger.setLevel(logging.DEBUG)
# To make info and above show up
logger.setLevel(logging.INFO)
# To make warning and above show up
logger.setLevel(logging.WARNING)
# etc
# You can customize this as needed, but this will output to `stderr` by default
logger.addHandler(logging.StreamHandler())
Sensitive data in logs
Certain logs may contain sensitive data (for example, user data).
By default, the SDK does not log LLM inputs/outputs or tool inputs/outputs. These protections are controlled by:
If you need to include this data temporarily for debugging, set either variable to 0 (or false) before your app starts:
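As a sketch, assuming the variable names used by recent SDK versions (OPENAI_AGENTS_DONT_LOG_MODEL_DATA and OPENAI_AGENTS_DONT_LOG_TOOL_DATA):

```shell
# Temporarily include model and tool inputs/outputs in debug logs.
# Remember to unset these (or set them back to 1) after debugging.
export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=0
export OPENAI_AGENTS_DONT_LOG_TOOL_DATA=0
```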