Interface TextAIPromptInferenceConfiguration.Builder

  • Method Details

    • temperature

The temperature setting for controlling randomness in the generated response. Higher values produce more varied output; lower values make the response more focused and deterministic.

      Parameters:
      temperature - The temperature setting for controlling randomness in the generated response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • topP

The top-P sampling parameter for nucleus sampling. The model samples only from the smallest set of candidate tokens whose cumulative probability exceeds this value.

      Parameters:
      topP - The top-P sampling parameter for nucleus sampling.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • topK

The top-K sampling parameter for token selection. The model samples only from the K highest-probability candidate tokens at each step.

      Parameters:
      topK - The top-K sampling parameter for token selection.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • maxTokensToSample

      TextAIPromptInferenceConfiguration.Builder maxTokensToSample(Integer maxTokensToSample)

      The maximum number of tokens to generate in the response.

      Parameters:
      maxTokensToSample - The maximum number of tokens to generate in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
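Because each setter returns a reference to the builder, the four parameters above can be configured in a single chained expression. The following is a minimal, self-contained sketch of that fluent-builder pattern; the parameter types used here (`Double` for `temperature` and `topP`, `Integer` for `topK`) are assumptions for illustration, since the source only shows the `Integer` signature for `maxTokensToSample` — consult the generated SDK sources for the exact signatures.

```java
// Minimal sketch of the fluent-builder pattern behind
// TextAIPromptInferenceConfiguration.Builder. The Double/Integer
// parameter types are assumptions, not the SDK's confirmed signatures.
class TextInferenceConfig {
    final Double temperature;
    final Double topP;
    final Integer topK;
    final Integer maxTokensToSample;

    private TextInferenceConfig(Builder b) {
        temperature = b.temperature;
        topP = b.topP;
        topK = b.topK;
        maxTokensToSample = b.maxTokensToSample;
    }

    static final class Builder {
        private Double temperature;
        private Double topP;
        private Integer topK;
        private Integer maxTokensToSample;

        // Each setter returns this builder, so calls can be chained.
        Builder temperature(Double value) { this.temperature = value; return this; }
        Builder topP(Double value) { this.topP = value; return this; }
        Builder topK(Integer value) { this.topK = value; return this; }
        Builder maxTokensToSample(Integer value) { this.maxTokensToSample = value; return this; }

        TextInferenceConfig build() { return new TextInferenceConfig(this); }
    }

    public static void main(String[] args) {
        TextInferenceConfig cfg = new TextInferenceConfig.Builder()
                .temperature(0.7)       // moderate randomness
                .topP(0.9)              // nucleus-sampling cutoff
                .topK(50)               // sample from the 50 most likely tokens
                .maxTokensToSample(512) // cap on generated tokens
                .build();
        System.out.println("temperature=" + cfg.temperature
                + " maxTokensToSample=" + cfg.maxTokensToSample);
    }
}
```

The same chained-call shape applies to the real `TextAIPromptInferenceConfiguration.Builder`: set only the parameters you need, then build; unset fields simply remain null.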