Interface GuardrailTopicConfig.Builder

  • Method Details

    • name

      GuardrailTopicConfig.Builder name(String name)

      The name of the topic to deny.

      Parameters:
      name - The name of the topic to deny.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • definition

      GuardrailTopicConfig.Builder definition(String definition)

      A definition of the topic to deny.

      Parameters:
      definition - A definition of the topic to deny.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • examples

      GuardrailTopicConfig.Builder examples(Collection<String> examples)

      A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

      Parameters:
      examples - A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • examples

      GuardrailTopicConfig.Builder examples(String... examples)

      A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

      Parameters:
      examples - A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • type

      GuardrailTopicConfig.Builder type(String type)

      Specifies that the topic should be denied.

      Parameters:
      type - Specifies that the topic should be denied.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • type

      GuardrailTopicConfig.Builder type(GuardrailTopicType type)

      Specifies that the topic should be denied.

      Parameters:
      type - Specifies that the topic should be denied.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inputAction

      GuardrailTopicConfig.Builder inputAction(String inputAction)

      Specifies the action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - Specifies the action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inputAction

      GuardrailTopicConfig.Builder inputAction(GuardrailTopicAction inputAction)

      Specifies the action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - Specifies the action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputAction

      GuardrailTopicConfig.Builder outputAction(String outputAction)

      Specifies the action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - Specifies the action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputAction

      GuardrailTopicConfig.Builder outputAction(GuardrailTopicAction outputAction)

      Specifies the action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - Specifies the action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inputEnabled

      GuardrailTopicConfig.Builder inputEnabled(Boolean inputEnabled)

      Specifies whether to enable guardrail evaluation on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      inputEnabled - Specifies whether to enable guardrail evaluation on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputEnabled

      GuardrailTopicConfig.Builder outputEnabled(Boolean outputEnabled)

      Specifies whether to enable guardrail evaluation on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      outputEnabled - Specifies whether to enable guardrail evaluation on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
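
Because every setter returns the builder itself, the methods above are meant to be chained into a single expression. The sketch below illustrates that fluent pattern with a minimal, self-contained stand-in class; the real `GuardrailTopicConfig.Builder` comes from the AWS SDK for Java and is obtained via `GuardrailTopicConfig.builder()`, and the `"DENY"` type string and the sample topic values are assumptions for illustration only.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in mirroring the documented GuardrailTopicConfig.Builder
// method names; the real builder lives in the AWS SDK for Java.
class TopicConfig {
    final String name, definition, type, inputAction, outputAction;
    final List<String> examples;

    TopicConfig(Builder b) {
        this.name = b.name; this.definition = b.definition; this.type = b.type;
        this.inputAction = b.inputAction; this.outputAction = b.outputAction;
        this.examples = b.examples;
    }

    static Builder builder() { return new Builder(); }

    static class Builder {
        String name, definition, type, inputAction, outputAction;
        List<String> examples;

        // Each setter returns `this`, which is what makes chaining possible.
        Builder name(String v)         { this.name = v; return this; }
        Builder definition(String v)   { this.definition = v; return this; }
        Builder examples(String... v)  { this.examples = Arrays.asList(v); return this; }
        Builder type(String v)         { this.type = v; return this; }
        Builder inputAction(String v)  { this.inputAction = v; return this; }
        Builder outputAction(String v) { this.outputAction = v; return this; }
        TopicConfig build()            { return new TopicConfig(this); }
    }
}

public class Demo {
    public static void main(String[] args) {
        TopicConfig topic = TopicConfig.builder()
                .name("Investment advice")
                .definition("Recommendations about specific financial products.")
                .examples("Which stocks should I buy?", "Is now a good time to invest?")
                .type("DENY")          // assumed type string; the docs only say the topic is denied
                .inputAction("BLOCK")  // BLOCK or NONE, per the action setters above
                .outputAction("NONE")
                .build();
        System.out.println(topic.name + " -> " + topic.inputAction);
        // prints "Investment advice -> BLOCK"
    }
}
```

With the real SDK builder the chain looks the same, except that `build()` produces an immutable `GuardrailTopicConfig` to pass into the guardrail creation request.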