Interface GuardrailTopic.Builder

  • Method Details

    • name

      GuardrailTopic.Builder name(String name)

      The name of the topic to deny.

      Parameters:
      name - The name of the topic to deny.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • definition

      GuardrailTopic.Builder definition(String definition)

      A definition of the topic to deny.

      Parameters:
      definition - A definition of the topic to deny.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • examples

      GuardrailTopic.Builder examples(Collection<String> examples)

      A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

      Parameters:
      examples - A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • examples

      GuardrailTopic.Builder examples(String... examples)

      A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

      Parameters:
      examples - A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • type

      GuardrailTopic.Builder type(String type)

      Specifies that the topic should be denied.

      Parameters:
      type - Specifies that the topic should be denied.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicType
    • type

      GuardrailTopic.Builder type(GuardrailTopicType type)

      Specifies that the topic should be denied.

      Parameters:
      type - Specifies that the topic should be denied.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicType
    • inputAction

      GuardrailTopic.Builder inputAction(String inputAction)

      The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicAction
    • inputAction

      GuardrailTopic.Builder inputAction(GuardrailTopicAction inputAction)

      The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicAction
    • outputAction

      GuardrailTopic.Builder outputAction(String outputAction)

      The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicAction
    • outputAction

      GuardrailTopic.Builder outputAction(GuardrailTopicAction outputAction)

      The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      GuardrailTopicAction
    • inputEnabled

      GuardrailTopic.Builder inputEnabled(Boolean inputEnabled)

      Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      inputEnabled - Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputEnabled

      GuardrailTopic.Builder outputEnabled(Boolean outputEnabled)

      Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      outputEnabled - Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
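
Taken together, the methods above configure one denied topic via fluent chaining. The sketch below is a self-contained, simplified stand-in for the builder written for illustration only; the real `GuardrailTopic.Builder` ships with the AWS SDK for Java v2 and is obtained from the SDK, not constructed like this. The setter names and the `DENY`/`BLOCK`/`NONE` values mirror the documentation above, while the topic name and example prompts are invented.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified, illustrative stand-in for GuardrailTopic.Builder.
// Each setter stores its value and returns `this`, which is what lets
// the documented methods be chained together.
class GuardrailTopicSketch {
    String name, definition, type, inputAction, outputAction;
    Boolean inputEnabled, outputEnabled;
    final List<String> examples = new ArrayList<>();

    GuardrailTopicSketch name(String v) { name = v; return this; }
    GuardrailTopicSketch definition(String v) { definition = v; return this; }
    GuardrailTopicSketch examples(String... v) { examples.addAll(Arrays.asList(v)); return this; }
    GuardrailTopicSketch type(String v) { type = v; return this; }
    GuardrailTopicSketch inputAction(String v) { inputAction = v; return this; }
    GuardrailTopicSketch outputAction(String v) { outputAction = v; return this; }
    GuardrailTopicSketch inputEnabled(Boolean v) { inputEnabled = v; return this; }
    GuardrailTopicSketch outputEnabled(Boolean v) { outputEnabled = v; return this; }

    public static void main(String[] args) {
        GuardrailTopicSketch topic = new GuardrailTopicSketch()
            .name("FinancialAdvice")                      // hypothetical topic name
            .definition("Requests for personalized investment advice.")
            .examples("Which stock should I buy?",
                      "Should I refinance my mortgage?")  // sample prompts for the topic
            .type("DENY")                                 // deny the topic
            .inputAction("BLOCK")                         // block matching input
            .outputAction("NONE")                         // detect-only on output
            .inputEnabled(true)
            .outputEnabled(true);
        System.out.println(topic.name + " / " + topic.type + " / " + topic.inputAction);
    }
}
```

With the real SDK, the same chain would be applied to the builder supplied by the SDK and finished with `build()`; the point here is only the fluent pattern the "Returns a reference to this object" notes describe.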