Interface TextInferenceConfig.Builder

  • Method Details

    • maxTokens

      TextInferenceConfig.Builder maxTokens(Integer maxTokens)

      The maximum number of tokens to generate in the output text. Do not rely on the nominal minimum of 0 or maximum of 65536; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.

      Parameters:
      maxTokens - The maximum number of tokens to generate in the output text. Do not rely on the nominal minimum of 0 or maximum of 65536; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
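      For example, a minimal sketch using the standard AWS SDK for Java 2.x builder convention (assuming TextInferenceConfig exposes the usual static builder() and build() methods; the value 512 is a placeholder, not a model-specific limit):

        TextInferenceConfig config = TextInferenceConfig.builder()
                .maxTokens(512) // placeholder; consult your model's actual token limit
                .build();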
    • stopSequences

      TextInferenceConfig.Builder stopSequences(Collection<String> stopSequences)

      A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the nominal minimum length of 1 or maximum length of 1000; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.

      Parameters:
      stopSequences - A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the nominal minimum length of 1 or maximum length of 1000; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
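      A sketch under the same builder convention, with hypothetical stop strings assembled into a collection at runtime:

        import java.util.List;

        List<String> stops = List.of("Human:", "###"); // hypothetical stop strings
        TextInferenceConfig config = TextInferenceConfig.builder()
                .stopSequences(stops)
                .build();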
    • stopSequences

      TextInferenceConfig.Builder stopSequences(String... stopSequences)

      A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the nominal minimum length of 1 or maximum length of 1000; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.

      Parameters:
      stopSequences - A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the nominal minimum length of 1 or maximum length of 1000; these limit values are arbitrary placeholders. For the actual limits, consult those defined by your specific model.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
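      The varargs overload covers the common case of a fixed set of sequences known at compile time; a sketch under the same assumptions:

        TextInferenceConfig config = TextInferenceConfig.builder()
                .stopSequences("Human:", "###") // hypothetical stop strings
                .build();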
    • temperature

      TextInferenceConfig.Builder temperature(Float temperature)

      Controls the randomness of the text generated by the language model, influencing how much the model sticks to the most predictable next tokens versus exploring more surprising options. A lower temperature (e.g. 0.2 or 0.3) makes outputs more deterministic and predictable, while a higher temperature (e.g. 0.8 or 0.9) makes them more creative and unpredictable.

      Parameters:
      temperature - Controls the randomness of the text generated by the language model, influencing how much the model sticks to the most predictable next tokens versus exploring more surprising options. A lower temperature (e.g. 0.2 or 0.3) makes outputs more deterministic and predictable, while a higher temperature (e.g. 0.8 or 0.9) makes them more creative and unpredictable.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
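      For instance, contrasting a low and a high setting (a sketch under the same builder assumptions; the values are illustrative):

        // More deterministic: the model favors the most probable next tokens
        TextInferenceConfig precise = TextInferenceConfig.builder()
                .temperature(0.2f)
                .build();

        // More creative: sampling reaches further into the tail of the distribution
        TextInferenceConfig creative = TextInferenceConfig.builder()
                .temperature(0.9f)
                .build();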
    • topP

      TextInferenceConfig.Builder topP(Float topP)

      A probability distribution threshold that controls which tokens the model considers as candidates for the next token. The model will only consider tokens in the top topP fraction of the probability distribution when generating the next token.

      Parameters:
      topP - A probability distribution threshold that controls which tokens the model considers as candidates for the next token. The model will only consider tokens in the top topP fraction of the probability distribution when generating the next token.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
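      Because every method returns the builder, the settings above can be chained into a single configuration; a sketch under the same assumptions, with illustrative values:

        TextInferenceConfig config = TextInferenceConfig.builder()
                .maxTokens(512)                // placeholder; use your model's actual limit
                .stopSequences("Human:")       // hypothetical stop sequence
                .temperature(0.7f)
                .topP(0.9f)                    // consider only the top 90% of probability mass
                .build();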