Interface Channel.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<Channel.Builder,Channel>, SdkBuilder<Channel.Builder,Channel>, SdkPojo
- Enclosing class:
Channel
-
Method Summary
- Channel.Builder channelName(String channelName): The name of the channel.
- Channel.Builder compressionType(String compressionType): If training data is compressed, the compression type.
- Channel.Builder compressionType(CompressionType compressionType): If training data is compressed, the compression type.
- Channel.Builder contentType(String contentType): The MIME type of the data.
- default Channel.Builder dataSource(Consumer<DataSource.Builder> dataSource): The location of the channel data.
- Channel.Builder dataSource(DataSource dataSource): The location of the channel data.
- Channel.Builder inputMode(String inputMode): (Optional) The input mode to use for the data channel in a training job.
- Channel.Builder inputMode(TrainingInputMode inputMode): (Optional) The input mode to use for the data channel in a training job.
- Channel.Builder recordWrapperType(String recordWrapperType)
- Channel.Builder recordWrapperType(RecordWrapper recordWrapperType)
- default Channel.Builder shuffleConfig(Consumer<ShuffleConfig.Builder> shuffleConfig): A configuration for a shuffle option for input data in a channel.
- Channel.Builder shuffleConfig(ShuffleConfig shuffleConfig): A configuration for a shuffle option for input data in a channel.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder: copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder: applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo: equalsBySdkFields, sdkFieldNameToField, sdkFields
-
Method Details
-
channelName
The name of the channel.
- Parameters:
channelName - The name of the channel.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dataSource
The location of the channel data.
- Parameters:
dataSource - The location of the channel data.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dataSource
The location of the channel data.
This is a convenience method that creates an instance of the DataSource.Builder, avoiding the need to create one manually via DataSource.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataSource(DataSource).
- Parameters:
dataSource - a consumer that will call methods on DataSource.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
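As an illustration, the Consumer-based convenience overload can be sketched in plain Java. The classes below (DataSourceSketch, ChannelBuilderSketch) are hypothetical stand-ins, not the real SDK types; they only mirror the documented shape: the builder is created internally, the consumer mutates it, and build() is invoked as soon as the consumer returns.

```java
import java.util.function.Consumer;

// Hypothetical stand-in classes, not the real SDK types.
class DataSourceSketch {
    final String s3Uri;
    DataSourceSketch(String s3Uri) { this.s3Uri = s3Uri; }
    static Builder builder() { return new Builder(); }

    static class Builder {
        private String s3Uri;
        Builder s3Uri(String s3Uri) { this.s3Uri = s3Uri; return this; }
        DataSourceSketch build() { return new DataSourceSketch(s3Uri); }
    }
}

class ChannelBuilderSketch {
    DataSourceSketch dataSource;

    // Plain overload: caller supplies an already-built object.
    ChannelBuilderSketch dataSource(DataSourceSketch ds) {
        this.dataSource = ds;
        return this;
    }

    // Convenience overload: create the builder, let the consumer
    // mutate it, build immediately, then delegate to the plain overload.
    ChannelBuilderSketch dataSource(Consumer<DataSourceSketch.Builder> c) {
        DataSourceSketch.Builder b = DataSourceSketch.builder();
        c.accept(b);
        return dataSource(b.build());
    }
}
```

With this shape, `dataSource(b -> b.s3Uri("s3://my-bucket/train"))` is equivalent to building the DataSource manually and passing it to the plain overload.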
-
contentType
The MIME type of the data.
- Parameters:
contentType - The MIME type of the data.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
compressionType
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Parameters:
compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
compressionType
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Parameters:
compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
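The paired String and CompressionType setters can be pictured as a delegating pair. The sketch below uses hypothetical stand-in types (CompressionTypeSketch, CompressionHolder), not the real SDK classes; the assumption that the enum overload delegates its string value to the String overload is illustrative of how such generated setter pairs commonly behave, not a statement of the SDK's actual implementation. The wire values "None" and "Gzip" match the documented CompressionType values.

```java
// Hypothetical stand-in types, not the real SDK classes.
enum CompressionTypeSketch {
    NONE("None"), GZIP("Gzip");
    private final String value;
    CompressionTypeSketch(String value) { this.value = value; }
    @Override public String toString() { return value; }
}

class CompressionHolder {
    String compressionType;

    // String overload stores the raw wire value.
    CompressionHolder compressionType(String v) {
        this.compressionType = v;
        return this;
    }

    // Enum overload delegates the enum's string form to the String
    // overload, so both setters leave the same stored value.
    CompressionHolder compressionType(CompressionTypeSketch v) {
        return compressionType(v.toString());
    }
}
```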
-
recordWrapperType
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Parameters:
recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
recordWrapperType
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Parameters:
recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
inputMode
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Parameters:
inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
inputMode
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Parameters:
inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
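The override rule described above can be sketched as a one-liner: a channel-level InputMode, when set, takes precedence over the job-level TrainingInputMode. InputModeResolver and resolve are illustrative names for this sketch, not SDK API.

```java
// Sketch of the documented precedence: channel-level input mode wins
// over the job-level setting when it is present; otherwise the
// job-level TrainingInputMode applies.
class InputModeResolver {
    static String resolve(String channelInputMode, String trainingInputMode) {
        return channelInputMode != null ? channelInputMode : trainingInputMode;
    }
}
```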
-
shuffleConfig
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch; it helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
- Parameters:
shuffleConfig - A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch; it helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
shuffleConfig
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch; it helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
This is a convenience method that creates an instance of the ShuffleConfig.Builder, avoiding the need to create one manually via ShuffleConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to shuffleConfig(ShuffleConfig).
- Parameters:
shuffleConfig - a consumer that will call methods on ShuffleConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
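The seeded, per-epoch shuffle described above can be sketched in plain Java. ShuffleSketch is an illustrative name, and mixing the seed with the epoch number by addition is purely illustrative of "deterministic order from Seed, reshuffled each epoch"; it is not the service's actual algorithm.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch: the matched S3 keys are shuffled deterministically from the
// Seed value, and in Pipe mode a fresh shuffle happens at the start of
// every epoch (different epoch, different order; same seed and epoch,
// same order).
class ShuffleSketch {
    static List<String> shuffledKeys(List<String> keys, long seed, int epoch) {
        List<String> copy = new ArrayList<>(keys);
        Collections.shuffle(copy, new Random(seed + epoch));
        return copy;
    }
}
```

Because the order depends only on (seed, epoch), every node and every rerun of the same epoch sees the same shuffled sequence, while successive epochs see different ones.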
-