Interface ProductionVariant.Builder
- All Superinterfaces:
- Buildable, CopyableBuilder<ProductionVariant.Builder, ProductionVariant>, SdkBuilder<ProductionVariant.Builder, ProductionVariant>, SdkPojo
- Enclosing class:
- ProductionVariant
Method Summary

All methods return ProductionVariant.Builder; the Consumer<...> overloads are default methods.

- acceleratorType(String acceleratorType) - This parameter is no longer supported.
- acceleratorType(ProductionVariantAcceleratorType acceleratorType) - This parameter is no longer supported.
- capacityReservationConfig(Consumer<ProductionVariantCapacityReservationConfig.Builder> capacityReservationConfig) - Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.
- capacityReservationConfig(ProductionVariantCapacityReservationConfig capacityReservationConfig) - Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.
- containerStartupHealthCheckTimeoutInSeconds(Integer containerStartupHealthCheckTimeoutInSeconds) - The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting.
- coreDumpConfig(Consumer<ProductionVariantCoreDumpConfig.Builder> coreDumpConfig) - Specifies configuration for a core dump from the model container when the process crashes.
- coreDumpConfig(ProductionVariantCoreDumpConfig coreDumpConfig) - Specifies configuration for a core dump from the model container when the process crashes.
- enableSSMAccess(Boolean enableSSMAccess) - You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint.
- inferenceAmiVersion(String inferenceAmiVersion) - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images.
- inferenceAmiVersion(ProductionVariantInferenceAmiVersion inferenceAmiVersion) - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images.
- initialInstanceCount(Integer initialInstanceCount) - Number of instances to launch initially.
- initialVariantWeight(Float initialVariantWeight) - Determines initial traffic distribution among all of the models that you specify in the endpoint configuration.
- instanceType(String instanceType) - The ML compute instance type.
- instanceType(ProductionVariantInstanceType instanceType) - The ML compute instance type.
- managedInstanceScaling(Consumer<ProductionVariantManagedInstanceScaling.Builder> managedInstanceScaling) - Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
- managedInstanceScaling(ProductionVariantManagedInstanceScaling managedInstanceScaling) - Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
- modelDataDownloadTimeoutInSeconds(Integer modelDataDownloadTimeoutInSeconds) - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
- modelName(String modelName) - The name of the model that you want to host.
- routingConfig(Consumer<ProductionVariantRoutingConfig.Builder> routingConfig) - Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
- routingConfig(ProductionVariantRoutingConfig routingConfig) - Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
- serverlessConfig(Consumer<ProductionVariantServerlessConfig.Builder> serverlessConfig) - The serverless configuration for an endpoint.
- serverlessConfig(ProductionVariantServerlessConfig serverlessConfig) - The serverless configuration for an endpoint.
- variantName(String variantName) - The name of the production variant.
- volumeSizeInGB(Integer volumeSizeInGB) - The size, in GB, of the ML storage volume attached to individual inference instance associated with the production variant.

Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder: copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder: applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo: equalsBySdkFields, sdkFieldNameToField, sdkFields
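In typical use, these setters are chained fluently from ProductionVariant.builder(). A minimal sketch, assuming the AWS SDK for Java v2 sagemaker module is on the classpath; the variant name, model name, and instance type are illustrative values, and the named model must already exist before this variant can be deployed:

```java
import software.amazon.awssdk.services.sagemaker.model.ProductionVariant;
import software.amazon.awssdk.services.sagemaker.model.ProductionVariantInstanceType;

public class ProductionVariantSketch {
    public static void main(String[] args) {
        ProductionVariant variant = ProductionVariant.builder()
                .variantName("primary")        // name of this production variant
                .modelName("my-model")         // hypothetical; must match an existing model
                .instanceType(ProductionVariantInstanceType.ML_M5_XLARGE)
                .initialInstanceCount(1)
                .initialVariantWeight(1.0f)    // relative traffic weight
                .build();
        System.out.println(variant.variantName());
    }
}
```

The built object is typically passed to CreateEndpointConfigRequest via its productionVariants(...) setter.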
Method Details

variantName

ProductionVariant.Builder variantName(String variantName)

The name of the production variant.
- Parameters:
- variantName - The name of the production variant.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
modelName

ProductionVariant.Builder modelName(String modelName)

The name of the model that you want to host. This is the name that you specified when creating the model.
- Parameters:
- modelName - The name of the model that you want to host. This is the name that you specified when creating the model.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
initialInstanceCount

ProductionVariant.Builder initialInstanceCount(Integer initialInstanceCount)

Number of instances to launch initially.
- Parameters:
- initialInstanceCount - Number of instances to launch initially.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
instanceType

ProductionVariant.Builder instanceType(String instanceType)

The ML compute instance type.
- Parameters:
- instanceType - The ML compute instance type.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
instanceType

ProductionVariant.Builder instanceType(ProductionVariantInstanceType instanceType)

The ML compute instance type.
- Parameters:
- instanceType - The ML compute instance type.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
initialVariantWeight

ProductionVariant.Builder initialVariantWeight(Float initialVariantWeight)

Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
- Parameters:
- initialVariantWeight - Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
acceleratorType

ProductionVariant.Builder acceleratorType(String acceleratorType)

This parameter is no longer supported. Elastic Inference (EI) is no longer available. This parameter was used to specify the size of the EI instance to use for the production variant.
- Parameters:
- acceleratorType - This parameter is no longer supported. Elastic Inference (EI) is no longer available. This parameter was used to specify the size of the EI instance to use for the production variant.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
acceleratorType

ProductionVariant.Builder acceleratorType(ProductionVariantAcceleratorType acceleratorType)

This parameter is no longer supported. Elastic Inference (EI) is no longer available. This parameter was used to specify the size of the EI instance to use for the production variant.
- Parameters:
- acceleratorType - This parameter is no longer supported. Elastic Inference (EI) is no longer available. This parameter was used to specify the size of the EI instance to use for the production variant.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
coreDumpConfig

ProductionVariant.Builder coreDumpConfig(ProductionVariantCoreDumpConfig coreDumpConfig)

Specifies configuration for a core dump from the model container when the process crashes.
- Parameters:
- coreDumpConfig - Specifies configuration for a core dump from the model container when the process crashes.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
coreDumpConfig

default ProductionVariant.Builder coreDumpConfig(Consumer<ProductionVariantCoreDumpConfig.Builder> coreDumpConfig)

Specifies configuration for a core dump from the model container when the process crashes.

This is a convenience method that creates an instance of the ProductionVariantCoreDumpConfig.Builder, avoiding the need to create one manually via ProductionVariantCoreDumpConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to coreDumpConfig(ProductionVariantCoreDumpConfig).
- Parameters:
- coreDumpConfig - a consumer that will call methods on ProductionVariantCoreDumpConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
serverlessConfig

ProductionVariant.Builder serverlessConfig(ProductionVariantServerlessConfig serverlessConfig)

The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
- Parameters:
- serverlessConfig - The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
serverlessConfig

default ProductionVariant.Builder serverlessConfig(Consumer<ProductionVariantServerlessConfig.Builder> serverlessConfig)

The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.

This is a convenience method that creates an instance of the ProductionVariantServerlessConfig.Builder, avoiding the need to create one manually via ProductionVariantServerlessConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to serverlessConfig(ProductionVariantServerlessConfig).
- Parameters:
- serverlessConfig - a consumer that will call methods on ProductionVariantServerlessConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
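The Consumer overloads in this interface all follow the same mechanism. A self-contained sketch of that pattern, using toy stand-in classes rather than the SDK's generated ones: a fresh inner builder is created, the consumer mutates it, and build() is called immediately so the plain overload receives the finished object.

```java
import java.util.function.Consumer;

public class ConsumerOverloadDemo {
    // Toy stand-in for a generated config class with a nested builder.
    static class ServerlessConfig {
        final int maxConcurrency;
        ServerlessConfig(int maxConcurrency) { this.maxConcurrency = maxConcurrency; }
        static Builder builder() { return new Builder(); }
        static class Builder {
            int maxConcurrency;
            Builder maxConcurrency(int m) { this.maxConcurrency = m; return this; }
            ServerlessConfig build() { return new ServerlessConfig(maxConcurrency); }
        }
    }

    // Toy stand-in for ProductionVariant.Builder.
    static class VariantBuilder {
        ServerlessConfig serverlessConfig;
        // Plain overload: takes the already-built object.
        VariantBuilder serverlessConfig(ServerlessConfig c) { this.serverlessConfig = c; return this; }
        // Consumer overload: builds the object for you, then delegates.
        VariantBuilder serverlessConfig(Consumer<ServerlessConfig.Builder> c) {
            ServerlessConfig.Builder b = ServerlessConfig.builder();
            c.accept(b);              // caller configures the fresh builder
            return serverlessConfig(b.build()); // build() called immediately
        }
    }

    public static void main(String[] args) {
        VariantBuilder v = new VariantBuilder()
                .serverlessConfig(b -> b.maxConcurrency(5));
        System.out.println(v.serverlessConfig.maxConcurrency); // prints 5
    }
}
```

This is why the Consumer variants can be generated as default methods: they need nothing beyond the plain overload they delegate to.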
 
volumeSizeInGB

ProductionVariant.Builder volumeSizeInGB(Integer volumeSizeInGB)

The size, in GB, of the ML storage volume attached to individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
- Parameters:
- volumeSizeInGB - The size, in GB, of the ML storage volume attached to individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
modelDataDownloadTimeoutInSeconds

ProductionVariant.Builder modelDataDownloadTimeoutInSeconds(Integer modelDataDownloadTimeoutInSeconds)

The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
- Parameters:
- modelDataDownloadTimeoutInSeconds - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
containerStartupHealthCheckTimeoutInSeconds

ProductionVariant.Builder containerStartupHealthCheckTimeoutInSeconds(Integer containerStartupHealthCheckTimeoutInSeconds)

The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- Parameters:
- containerStartupHealthCheckTimeoutInSeconds - The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
enableSSMAccess

ProductionVariant.Builder enableSSMAccess(Boolean enableSSMAccess)

You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
- Parameters:
- enableSSMAccess - You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
managedInstanceScaling

ProductionVariant.Builder managedInstanceScaling(ProductionVariantManagedInstanceScaling managedInstanceScaling)

Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
- Parameters:
- managedInstanceScaling - Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
managedInstanceScaling

default ProductionVariant.Builder managedInstanceScaling(Consumer<ProductionVariantManagedInstanceScaling.Builder> managedInstanceScaling)

Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.

This is a convenience method that creates an instance of the ProductionVariantManagedInstanceScaling.Builder, avoiding the need to create one manually via ProductionVariantManagedInstanceScaling.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to managedInstanceScaling(ProductionVariantManagedInstanceScaling).
- Parameters:
- managedInstanceScaling - a consumer that will call methods on ProductionVariantManagedInstanceScaling.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
routingConfig

ProductionVariant.Builder routingConfig(ProductionVariantRoutingConfig routingConfig)

Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
- Parameters:
- routingConfig - Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
routingConfig

default ProductionVariant.Builder routingConfig(Consumer<ProductionVariantRoutingConfig.Builder> routingConfig)

Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.

This is a convenience method that creates an instance of the ProductionVariantRoutingConfig.Builder, avoiding the need to create one manually via ProductionVariantRoutingConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to routingConfig(ProductionVariantRoutingConfig).
- Parameters:
- routingConfig - a consumer that will call methods on ProductionVariantRoutingConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
inferenceAmiVersion

ProductionVariant.Builder inferenceAmiVersion(String inferenceAmiVersion)

Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

The AMI version names, and their configurations, are the following:

- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535
  - CUDA version: 12.2
- al2-ami-sagemaker-inference-gpu-2-1
  - Accelerator: GPU
  - NVIDIA driver version: 535
  - CUDA version: 12.2
  - NVIDIA Container Toolkit with disabled CUDA-compat mounting
- al2-ami-sagemaker-inference-gpu-3-1
  - Accelerator: GPU
  - NVIDIA driver version: 550
  - CUDA version: 12.4
  - NVIDIA Container Toolkit with disabled CUDA-compat mounting
- al2-ami-sagemaker-inference-neuron-2
  - Accelerator: Inferentia2 and Trainium
  - Neuron driver version: 2.19

- Parameters:
- inferenceAmiVersion - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images, as described above.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
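The AMI list above can also be consulted programmatically when a container pins a CUDA version. A hedged sketch: the version strings are copied from the list above, but the selection helper itself is illustrative and not part of the SDK.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AmiVersionPicker {
    public static void main(String[] args) {
        // CUDA version shipped by each GPU AMI, per the list above.
        Map<String, String> cudaByAmi = new LinkedHashMap<>();
        cudaByAmi.put("al2-ami-sagemaker-inference-gpu-2", "12.2");
        cudaByAmi.put("al2-ami-sagemaker-inference-gpu-2-1", "12.2");
        cudaByAmi.put("al2-ami-sagemaker-inference-gpu-3-1", "12.4");

        // Pick the first AMI whose CUDA version matches the container's requirement.
        String required = "12.4";
        String chosen = cudaByAmi.entrySet().stream()
                .filter(e -> e.getValue().equals(required))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElseThrow();
        System.out.println(chosen);
    }
}
```

The chosen name would then be passed to inferenceAmiVersion(String) on the builder.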
 
inferenceAmiVersion

ProductionVariant.Builder inferenceAmiVersion(ProductionVariantInferenceAmiVersion inferenceAmiVersion)

Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

The AMI version names, and their configurations, are the following:

- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535
  - CUDA version: 12.2
- al2-ami-sagemaker-inference-gpu-2-1
  - Accelerator: GPU
  - NVIDIA driver version: 535
  - CUDA version: 12.2
  - NVIDIA Container Toolkit with disabled CUDA-compat mounting
- al2-ami-sagemaker-inference-gpu-3-1
  - Accelerator: GPU
  - NVIDIA driver version: 550
  - CUDA version: 12.4
  - NVIDIA Container Toolkit with disabled CUDA-compat mounting
- al2-ami-sagemaker-inference-neuron-2
  - Accelerator: Inferentia2 and Trainium
  - Neuron driver version: 2.19

- Parameters:
- inferenceAmiVersion - Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images, as described above.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
capacityReservationConfig

ProductionVariant.Builder capacityReservationConfig(ProductionVariantCapacityReservationConfig capacityReservationConfig)

Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.
- Parameters:
- capacityReservationConfig - Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
capacityReservationConfig

default ProductionVariant.Builder capacityReservationConfig(Consumer<ProductionVariantCapacityReservationConfig.Builder> capacityReservationConfig)

Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.

This is a convenience method that creates an instance of the ProductionVariantCapacityReservationConfig.Builder, avoiding the need to create one manually via ProductionVariantCapacityReservationConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to capacityReservationConfig(ProductionVariantCapacityReservationConfig).
- Parameters:
- capacityReservationConfig - a consumer that will call methods on ProductionVariantCapacityReservationConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
 
 