ModelSettings
type ModelSettings = object;
Settings to use when calling an LLM.
This type holds optional model configuration parameters (e.g. temperature, topP, penalties, truncation).
Not all models/providers support all of these parameters, so check the API documentation for the specific model and provider you are using.
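For illustration, a minimal sketch of passing these settings when configuring an agent. This assumes the OpenAI Agents SDK for TypeScript (`@openai/agents`) and its `Agent` constructor, neither of which is shown in this reference:

```ts
// Assumption: ModelSettings is consumed via the `modelSettings` option of
// an Agent in the OpenAI Agents SDK for TypeScript. Adjust to your setup.
import { Agent } from '@openai/agents';

const agent = new Agent({
  name: 'Summarizer', // hypothetical agent
  instructions: 'Summarize the provided text in two sentences.',
  modelSettings: {
    temperature: 0.3, // lower values give more deterministic output
    topP: 0.9,        // nucleus-sampling cutoff
    maxTokens: 256,   // cap on generated output tokens
  },
});
```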
Properties
frequencyPenalty?
optional frequencyPenalty: number;
The frequency penalty to use when calling the model.
maxTokens?
optional maxTokens: number;
The maximum number of output tokens to generate.
parallelToolCalls?
optional parallelToolCalls: boolean;
Whether to use parallel tool calls when calling the model. Defaults to false if not provided.
presencePenalty?
optional presencePenalty: number;
The presence penalty to use when calling the model.
providerData?
optional providerData: Record<string, any>;
Additional provider-specific settings passed directly to the model request.
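Because `providerData` is passed through to the model request as-is, it can carry options this type does not model directly. A sketch, where the import path and the keys are assumptions and valid keys depend entirely on your provider:

```ts
import type { ModelSettings } from '@openai/agents'; // import path is an assumption

const settings: ModelSettings = {
  temperature: 0.7,
  providerData: {
    // Hypothetical provider-specific flags; consult your provider's API docs.
    logprobs: true,
    topLogprobs: 3,
  },
};
```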
store?
optional store: boolean;
Whether to store the generated model response for later retrieval. Defaults to true if not provided.
temperature?
optional temperature: number;
The temperature to use when calling the model.
toolChoice?
optional toolChoice: ModelSettingsToolChoice;
The tool choice to use when calling the model.
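The `ModelSettingsToolChoice` union is not expanded on this page; in comparable APIs the accepted values are typically `'auto'`, `'required'`, `'none'`, or the name of a specific tool. A hedged sketch:

```ts
// Assumption: these literal values mirror common tool-choice options; check
// the ModelSettingsToolChoice type for the authoritative union.
const settings = {
  toolChoice: 'required', // force the model to call some tool;
                          // 'auto' lets it decide, 'none' disables tool calls
};
```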
topP?
optional topP: number;
The topP to use when calling the model.
truncation?
optional truncation: "auto" | "disabled";
The truncation strategy to use when calling the model.
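Putting several of these fields together, a sketch of a fully populated settings object (whether each field is honored depends on the model and provider, per the note at the top):

```ts
const settings = {
  temperature: 0.2,
  topP: 0.95,
  frequencyPenalty: 0.5,   // discourage verbatim repetition
  presencePenalty: 0.3,    // encourage introducing new topics
  maxTokens: 512,
  parallelToolCalls: true, // allow multiple tool calls in one turn
  store: false,            // opt out of storing the response for later retrieval
  truncation: 'auto',      // let the provider truncate overflowing context
};
```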