Interface PromptModelInferenceConfiguration.Builder

  • Method Details

    • maxTokens

      PromptModelInferenceConfiguration.Builder maxTokens(Integer maxTokens)

      The maximum number of tokens to return in the response.

      Parameters:
      maxTokens - The maximum number of tokens to return in the response.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • stopSequences

      PromptModelInferenceConfiguration.Builder stopSequences(Collection<String> stopSequences)

      A list of strings that define sequences after which the model will stop generating.

      Parameters:
      stopSequences - A list of strings that define sequences after which the model will stop generating.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • stopSequences

      PromptModelInferenceConfiguration.Builder stopSequences(String... stopSequences)

      A list of strings that define sequences after which the model will stop generating.

      Parameters:
      stopSequences - A list of strings that define sequences after which the model will stop generating.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • temperature

      PromptModelInferenceConfiguration.Builder temperature(Float temperature)

      Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

      Parameters:
      temperature - Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • topP

      PromptModelInferenceConfiguration.Builder topP(Float topP)

      The percentage of most-likely candidates that the model considers for the next token.

      Parameters:
      topP - The percentage of most-likely candidates that the model considers for the next token.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
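
  • Usage Example

    A minimal sketch of chaining the methods above, assuming the standard AWS SDK for Java 2.x builder pattern (a static builder() factory on the model class and a terminal build() call). The class name InferenceConfigExample, the import path, and all parameter values are illustrative, not recommendations.

      import software.amazon.awssdk.services.bedrockagent.model.PromptModelInferenceConfiguration;

      public class InferenceConfigExample {
          public static void main(String[] args) {
              // Each setter returns the builder, so the calls chain.
              PromptModelInferenceConfiguration config =
                      PromptModelInferenceConfiguration.builder()
                              .maxTokens(512)              // cap the response length
                              .temperature(0.7F)           // moderate randomness
                              .topP(0.9F)                  // sample from the top 90% of candidates
                              .stopSequences("\n\nHuman:") // varargs overload
                              .build();

              System.out.println(config);
          }
      }

    Either stopSequences overload can be used: the varargs form shown above, or the Collection variant, which accepts, for example, List.of("\n\nHuman:") with a java.util.List import.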