Builder
Properties
inputAction
Specifies the action to take when harmful content is detected in the input. Supported values include:
- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action but return detection information in the trace response.
inputEnabled
Specifies whether to enable guardrail evaluation on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
inputModalities
The input modalities selected for the guardrail content filter configuration.
inputStrength
The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.
outputAction
Specifies the action to take when harmful content is detected in the output. Supported values include:
- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action but return detection information in the trace response.
outputEnabled
Specifies whether to enable guardrail evaluation on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
outputModalities
The output modalities selected for the guardrail content filter configuration.
outputStrength
The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.
type
The harmful category that the content filter is applied to.
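
Putting the properties together, the following is a minimal sketch of building one content filter with the Kotlin SDK's builder DSL. It assumes the aws.sdk.kotlin.services.bedrock.model package; the enum member names (Hate, High, Text, Block, and so on) mirror the Bedrock API model and should be checked against the generated SDK.

// Minimal sketch, assuming the AWS SDK for Kotlin Bedrock model package.
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterAction
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterType
import aws.sdk.kotlin.services.bedrock.model.GuardrailFilterStrength
import aws.sdk.kotlin.services.bedrock.model.GuardrailModality

val hateFilter = GuardrailContentFilterConfig {
    type = GuardrailContentFilterType.Hate            // harmful category this filter covers
    inputEnabled = true                               // evaluate (and bill for) prompt content
    inputModalities = listOf(GuardrailModality.Text, GuardrailModality.Image)
    inputStrength = GuardrailFilterStrength.High      // filter prompts aggressively
    inputAction = GuardrailContentFilterAction.Block  // replace harmful prompts with blocked messaging
    outputEnabled = true                              // evaluate model responses too
    outputModalities = listOf(GuardrailModality.Text)
    outputStrength = GuardrailFilterStrength.Medium   // filter responses less aggressively
    outputAction = GuardrailContentFilterAction.Block // replace harmful responses with blocked messaging
}

The GuardrailContentFilterConfig { ... } form is the DSL shorthand the Kotlin SDK generates over the Builder type; the resulting value would typically be supplied in the content policy configuration of a create- or update-guardrail request.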