Interface GuardrailPiiEntity.Builder

  • Method Details

    • type

      GuardrailPiiEntity.Builder type(String type)

      The type of PII entity. For example, Social Security Number.

      Parameters:
      type - The type of PII entity. For example, Social Security Number.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • type

      GuardrailPiiEntity.Builder type(GuardrailPiiEntityType type)

      The type of PII entity. For example, Social Security Number.

      Parameters:
      type - The type of PII entity. For example, Social Security Number.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
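      A minimal sketch of setting the entity type, assuming the two entries above are the String and GuardrailPiiEntityType overloads of the same setter and that the class lives in the software.amazon.awssdk.services.bedrock.model package; the entity-type value used here is only illustrative:

      // Package and enum constant are assumptions; adjust the imports to match your SDK module.
      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntity;
      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntityType;

      public class PiiTypeSketch {
          public static void main(String[] args) {
              // String overload: the raw value is stored as-is.
              GuardrailPiiEntity byString = GuardrailPiiEntity.builder()
                      .type("US_SOCIAL_SECURITY_NUMBER")
                      .build();

              // Enum overload: the constant is converted to its string value.
              GuardrailPiiEntity byEnum = GuardrailPiiEntity.builder()
                      .type(GuardrailPiiEntityType.US_SOCIAL_SECURITY_NUMBER)
                      .build();

              // Each setter returns the builder, so calls can be chained.
              System.out.println(byString + " / " + byEnum);
          }
      }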
    • action

      GuardrailPiiEntity.Builder action(String action)

      The configured guardrail action when a PII entity is detected.

      Parameters:
      action - The configured guardrail action when a PII entity is detected.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • action

      GuardrailPiiEntity.Builder action(GuardrailSensitiveInformationAction action)

      The configured guardrail action when a PII entity is detected.

      Parameters:
      action - The configured guardrail action when a PII entity is detected.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
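      As a hedged illustration, a PII entity built with a type and a default action, assuming GuardrailSensitiveInformationAction is the enum behind the BLOCK / ANONYMIZE / NONE values documented for inputAction and outputAction below, with the package assumed as in the earlier sketch:

      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntity;
      import software.amazon.awssdk.services.bedrock.model.GuardrailSensitiveInformationAction;

      public class PiiActionSketch {
          public static void main(String[] args) {
              // Anonymize (mask) email addresses wherever they are detected.
              GuardrailPiiEntity entity = GuardrailPiiEntity.builder()
                      .type("EMAIL")
                      .action(GuardrailSensitiveInformationAction.ANONYMIZE)
                      .build();

              System.out.println(entity);
          }
      }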
    • inputAction

      GuardrailPiiEntity.Builder inputAction(String inputAction)

      The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inputAction

      GuardrailPiiEntity.Builder inputAction(GuardrailSensitiveInformationAction inputAction)

      The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      inputAction - The action to take when harmful content is detected in the input. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
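      A brief sketch using the String overload shown earlier, inputAction(String inputAction); the package is assumed as before and the PHONE entity type is illustrative:

      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntity;

      public class PiiInputActionSketch {
          public static void main(String[] args) {
              // Block prompts in which a phone number is detected; the string value
              // must be one of the supported values listed above (BLOCK, ANONYMIZE, NONE).
              GuardrailPiiEntity entity = GuardrailPiiEntity.builder()
                      .type("PHONE")
                      .inputAction("BLOCK")
                      .build();

              System.out.println(entity);
          }
      }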
    • outputAction

      GuardrailPiiEntity.Builder outputAction(String outputAction)

      The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputAction

      GuardrailPiiEntity.Builder outputAction(GuardrailSensitiveInformationAction outputAction)

      The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Parameters:
      outputAction - The action to take when harmful content is detected in the output. Supported values include:

      • BLOCK – Block the content and replace it with blocked messaging.

      • ANONYMIZE – Mask the content and replace it with identifier tags.

      • NONE – Take no action but return detection information in the trace response.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
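      Putting the two directional setters together, a hedged sketch of an asymmetric configuration that blocks matching input but anonymizes matching output; the package, enum, and entity-type value are assumed as in the earlier sketches:

      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntity;
      import software.amazon.awssdk.services.bedrock.model.GuardrailSensitiveInformationAction;

      public class PiiDirectionalActionsSketch {
          public static void main(String[] args) {
              GuardrailPiiEntity entity = GuardrailPiiEntity.builder()
                      .type("US_SOCIAL_SECURITY_NUMBER")
                      // Reject prompts that contain the entity...
                      .inputAction(GuardrailSensitiveInformationAction.BLOCK)
                      // ...but mask it with identifier tags in model responses.
                      .outputAction(GuardrailSensitiveInformationAction.ANONYMIZE)
                      .build();

              System.out.println(entity);
          }
      }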
    • inputEnabled

      GuardrailPiiEntity.Builder inputEnabled(Boolean inputEnabled)

      Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      inputEnabled - Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • outputEnabled

      GuardrailPiiEntity.Builder outputEnabled(Boolean outputEnabled)

      Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

      Parameters:
      outputEnabled - Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
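      Finally, a sketch that combines the evaluation switches with the setters above; disabling a direction means that direction is neither evaluated nor billed, per the descriptions of inputEnabled and outputEnabled. The package, enum, and the generated Boolean getters are assumptions:

      import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntity;
      import software.amazon.awssdk.services.bedrock.model.GuardrailSensitiveInformationAction;

      public class PiiEnabledFlagsSketch {
          public static void main(String[] args) {
              GuardrailPiiEntity entity = GuardrailPiiEntity.builder()
                      .type("EMAIL")
                      .action(GuardrailSensitiveInformationAction.ANONYMIZE)
                      .inputEnabled(true)    // evaluate prompts
                      .outputEnabled(false)  // skip (and don't pay for) response evaluation
                      .build();

              // The Boolean getters mirror the builder setters (assumed generated accessors).
              System.out.println(entity.inputEnabled() + " / " + entity.outputEnabled());
          }
      }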