Interface ProductionVariant.Builder

  • Method Details

    • variantName

      ProductionVariant.Builder variantName(String variantName)

      The name of the production variant.

      Parameters:
      variantName - The name of the production variant.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • modelName

      ProductionVariant.Builder modelName(String modelName)

      The name of the model that you want to host. This is the name that you specified when creating the model.

      Parameters:
      modelName - The name of the model that you want to host. This is the name that you specified when creating the model.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • initialInstanceCount

      ProductionVariant.Builder initialInstanceCount(Integer initialInstanceCount)

      The number of instances to launch initially.

      Parameters:
      initialInstanceCount - The number of instances to launch initially.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • instanceType

      ProductionVariant.Builder instanceType(String instanceType)

      The ML compute instance type.

      Parameters:
      instanceType - The ML compute instance type.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantInstanceType
    • instanceType

      ProductionVariant.Builder instanceType(ProductionVariantInstanceType instanceType)

      The ML compute instance type.

      Parameters:
      instanceType - The ML compute instance type.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantInstanceType
    • initialVariantWeight

      ProductionVariant.Builder initialVariantWeight(Float initialVariantWeight)

      Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.

      Parameters:
      initialVariantWeight - Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
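The weight-to-traffic ratio described above can be checked with a short calculation. This is an illustrative sketch only; the class and variant names below are hypothetical and not part of the SDK:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VariantWeights {
    // Computes each variant's traffic fraction as weight / sum(all weights),
    // mirroring the VariantWeight ratio described for initialVariantWeight.
    static Map<String, Double> trafficShares(Map<String, Float> weights) {
        double total = weights.values().stream().mapToDouble(Float::doubleValue).sum();
        Map<String, Double> shares = new LinkedHashMap<>();
        weights.forEach((name, w) -> shares.put(name, w / total));
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Float> weights = new LinkedHashMap<>();
        weights.put("variant-a", 3.0f); // hypothetical variant names
        weights.put("variant-b", 1.0f);
        // variant-a receives 3 / (3 + 1) = 0.75 of traffic, variant-b 0.25
        System.out.println(trafficShares(weights));
    }
}
```

With the default weight of 1.0 on every variant, traffic splits evenly across them.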
    • acceleratorType

      ProductionVariant.Builder acceleratorType(String acceleratorType)

      The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.

      Parameters:
      acceleratorType - The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantAcceleratorType
    • acceleratorType

      ProductionVariant.Builder acceleratorType(ProductionVariantAcceleratorType acceleratorType)

      The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.

      Parameters:
      acceleratorType - The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantAcceleratorType
    • coreDumpConfig

      ProductionVariant.Builder coreDumpConfig(ProductionVariantCoreDumpConfig coreDumpConfig)

      Specifies configuration for a core dump from the model container when the process crashes.

      Parameters:
      coreDumpConfig - Specifies configuration for a core dump from the model container when the process crashes.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • coreDumpConfig

      default ProductionVariant.Builder coreDumpConfig(Consumer<ProductionVariantCoreDumpConfig.Builder> coreDumpConfig)

      Specifies configuration for a core dump from the model container when the process crashes.

      This is a convenience method that creates an instance of the ProductionVariantCoreDumpConfig.Builder avoiding the need to create one manually via ProductionVariantCoreDumpConfig.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to coreDumpConfig(ProductionVariantCoreDumpConfig).

      Parameters:
      coreDumpConfig - a consumer that will call methods on ProductionVariantCoreDumpConfig.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      coreDumpConfig(ProductionVariantCoreDumpConfig)
    • serverlessConfig

      ProductionVariant.Builder serverlessConfig(ProductionVariantServerlessConfig serverlessConfig)

      The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.

      Parameters:
      serverlessConfig - The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • serverlessConfig

      default ProductionVariant.Builder serverlessConfig(Consumer<ProductionVariantServerlessConfig.Builder> serverlessConfig)

      The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.

      This is a convenience method that creates an instance of the ProductionVariantServerlessConfig.Builder avoiding the need to create one manually via ProductionVariantServerlessConfig.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to serverlessConfig(ProductionVariantServerlessConfig).

      Parameters:
      serverlessConfig - a consumer that will call methods on ProductionVariantServerlessConfig.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      serverlessConfig(ProductionVariantServerlessConfig)
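The Consumer-based overloads above all follow the same builder-shorthand pattern: build the nested object from a lambda, then forward the result to the plain setter. The following self-contained sketch reimplements that pattern with hypothetical stand-in classes (not the SDK's own) to show the mechanics:

```java
import java.util.function.Consumer;

// Hypothetical stand-in for a nested config type; not part of the AWS SDK.
class ServerlessConfig {
    final int maxConcurrency;
    ServerlessConfig(int maxConcurrency) { this.maxConcurrency = maxConcurrency; }
    static Builder builder() { return new Builder(); }

    static class Builder {
        private int maxConcurrency;
        Builder maxConcurrency(int v) { this.maxConcurrency = v; return this; }
        ServerlessConfig build() { return new ServerlessConfig(maxConcurrency); }
    }
}

public class VariantBuilder {
    private ServerlessConfig serverlessConfig;

    // Plain setter: returns this so that method calls can be chained.
    public VariantBuilder serverlessConfig(ServerlessConfig cfg) {
        this.serverlessConfig = cfg;
        return this;
    }

    // Convenience overload: when the Consumer completes, build() is called
    // immediately and its result is passed to the plain setter, as documented.
    public VariantBuilder serverlessConfig(Consumer<ServerlessConfig.Builder> c) {
        ServerlessConfig.Builder b = ServerlessConfig.builder();
        c.accept(b);
        return serverlessConfig(b.build());
    }

    public ServerlessConfig getServerlessConfig() { return serverlessConfig; }

    public static void main(String[] args) {
        VariantBuilder vb = new VariantBuilder()
                .serverlessConfig(b -> b.maxConcurrency(5));
        System.out.println(vb.getServerlessConfig().maxConcurrency); // prints 5
    }
}
```

The lambda form saves the caller an explicit `builder()`/`build()` pair, which is why the SDK generates a Consumer overload alongside every nested-object setter on this interface.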
    • volumeSizeInGB

      ProductionVariant.Builder volumeSizeInGB(Integer volumeSizeInGB)

      The size, in GB, of the ML storage volume attached to each inference instance associated with the production variant. Currently, only Amazon EBS gp2 storage volumes are supported.

      Parameters:
      volumeSizeInGB - The size, in GB, of the ML storage volume attached to each inference instance associated with the production variant. Currently, only Amazon EBS gp2 storage volumes are supported.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • modelDataDownloadTimeoutInSeconds

      ProductionVariant.Builder modelDataDownloadTimeoutInSeconds(Integer modelDataDownloadTimeoutInSeconds)

      The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.

      Parameters:
      modelDataDownloadTimeoutInSeconds - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • containerStartupHealthCheckTimeoutInSeconds

      ProductionVariant.Builder containerStartupHealthCheckTimeoutInSeconds(Integer containerStartupHealthCheckTimeoutInSeconds)

      The timeout value, in seconds, for your inference container to pass the health check performed by SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.

      Parameters:
      containerStartupHealthCheckTimeoutInSeconds - The timeout value, in seconds, for your inference container to pass the health check performed by SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • enableSSMAccess

      ProductionVariant.Builder enableSSMAccess(Boolean enableSSMAccess)

      You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.

      Parameters:
      enableSSMAccess - You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • managedInstanceScaling

      ProductionVariant.Builder managedInstanceScaling(ProductionVariantManagedInstanceScaling managedInstanceScaling)

      Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.

      Parameters:
      managedInstanceScaling - Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • managedInstanceScaling

      default ProductionVariant.Builder managedInstanceScaling(Consumer<ProductionVariantManagedInstanceScaling.Builder> managedInstanceScaling)

      Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.

      This is a convenience method that creates an instance of the ProductionVariantManagedInstanceScaling.Builder avoiding the need to create one manually via ProductionVariantManagedInstanceScaling.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to managedInstanceScaling(ProductionVariantManagedInstanceScaling).

      Parameters:
      managedInstanceScaling - a consumer that will call methods on ProductionVariantManagedInstanceScaling.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      managedInstanceScaling(ProductionVariantManagedInstanceScaling)
    • routingConfig

      ProductionVariant.Builder routingConfig(ProductionVariantRoutingConfig routingConfig)

      Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.

      Parameters:
      routingConfig - Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • routingConfig

      default ProductionVariant.Builder routingConfig(Consumer<ProductionVariantRoutingConfig.Builder> routingConfig)

      Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.

      This is a convenience method that creates an instance of the ProductionVariantRoutingConfig.Builder avoiding the need to create one manually via ProductionVariantRoutingConfig.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to routingConfig(ProductionVariantRoutingConfig).

      Parameters:
      routingConfig - a consumer that will call methods on ProductionVariantRoutingConfig.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      routingConfig(ProductionVariantRoutingConfig)
    • inferenceAmiVersion

      ProductionVariant.Builder inferenceAmiVersion(String inferenceAmiVersion)

      Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

      By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

      The AMI version names, and their configurations, are the following:

      al2-ami-sagemaker-inference-gpu-2
      • Accelerator: GPU

      • NVIDIA driver version: 535.54.03

      • CUDA driver version: 12.2

      • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

      Parameters:
      inferenceAmiVersion - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

      By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

      The AMI version names, and their configurations, are the following:

      al2-ami-sagemaker-inference-gpu-2
      • Accelerator: GPU

      • NVIDIA driver version: 535.54.03

      • CUDA driver version: 12.2

      • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantInferenceAmiVersion
    • inferenceAmiVersion

      ProductionVariant.Builder inferenceAmiVersion(ProductionVariantInferenceAmiVersion inferenceAmiVersion)

      Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

      By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

      The AMI version names, and their configurations, are the following:

      al2-ami-sagemaker-inference-gpu-2
      • Accelerator: GPU

      • NVIDIA driver version: 535.54.03

      • CUDA driver version: 12.2

      • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

      Parameters:
      inferenceAmiVersion - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

      By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

      The AMI version names, and their configurations, are the following:

      al2-ami-sagemaker-inference-gpu-2
      • Accelerator: GPU

      • NVIDIA driver version: 535.54.03

      • CUDA driver version: 12.2

      • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      ProductionVariantInferenceAmiVersion