Builder

class Builder

Properties

Specifies the action to take when harmful content is detected in the input.

Specifies whether to enable guardrail evaluation on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

The input modalities selected for the guardrail content filter configuration.

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of harmful content appearing in your application decreases.

Specifies the action to take when harmful content is detected in the output.

Specifies whether to enable guardrail evaluation on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

The output modalities selected for the guardrail content filter configuration.

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of harmful content appearing in your application decreases.

The harmful category that the content filter is applied to.
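The properties above can be pictured with a minimal, self-contained builder sketch. All type names, property names, and enum values here (FilterAction, FilterStrength, Modality, inputEnabled, outputStrength, and so on) are assumptions inferred from the property descriptions, not the SDK's actual identifiers; consult the generated API reference for the real names.

```kotlin
// Hypothetical stand-ins inferred from the property descriptions above.
enum class FilterAction { BLOCK, NONE }                // assumed action values
enum class FilterStrength { NONE, LOW, MEDIUM, HIGH }  // assumed strength levels
enum class Modality { TEXT, IMAGE }                    // assumed modalities

class ContentFilterConfig(
    val type: String,                   // harmful category the filter applies to
    val inputAction: FilterAction,      // action when harmful input is detected
    val inputEnabled: Boolean,          // evaluate (and be charged for) the input?
    val inputModalities: List<Modality>,
    val inputStrength: FilterStrength,  // strength applied to prompts
    val outputAction: FilterAction,     // action when harmful output is detected
    val outputEnabled: Boolean,         // evaluate (and be charged for) the output?
    val outputModalities: List<Modality>,
    val outputStrength: FilterStrength, // strength applied to model responses
) {
    class Builder {
        var type: String = ""
        var inputAction: FilterAction = FilterAction.BLOCK
        var inputEnabled: Boolean = true
        var inputModalities: List<Modality> = listOf(Modality.TEXT)
        var inputStrength: FilterStrength = FilterStrength.MEDIUM
        var outputAction: FilterAction = FilterAction.BLOCK
        var outputEnabled: Boolean = true
        var outputModalities: List<Modality> = listOf(Modality.TEXT)
        var outputStrength: FilterStrength = FilterStrength.MEDIUM

        fun build() = ContentFilterConfig(
            type, inputAction, inputEnabled, inputModalities, inputStrength,
            outputAction, outputEnabled, outputModalities, outputStrength,
        )
    }
}

fun main() {
    // Typical shape of use: raise prompt-side strength and disable the
    // output-side evaluation so it is neither run nor billed.
    val config = ContentFilterConfig.Builder().apply {
        type = "HATE"                        // hypothetical category name
        inputStrength = FilterStrength.HIGH
        outputEnabled = false
    }.build()
    println("${config.inputStrength} ${config.outputEnabled}")  // HIGH false
}
```

Unset properties fall back to the builder's defaults, which keeps call sites short when only a few fields differ from the baseline configuration.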