public static interface Channel.Builder extends SdkPojo, CopyableBuilder<Channel.Builder,Channel>
Modifier and Type | Method and Description |
---|---|
Channel.Builder | channelName(String channelName) The name of the channel. |
Channel.Builder | compressionType(CompressionType compressionType) If training data is compressed, the compression type. |
Channel.Builder | compressionType(String compressionType) If training data is compressed, the compression type. |
Channel.Builder | contentType(String contentType) The MIME type of the data. |
default Channel.Builder | dataSource(Consumer<DataSource.Builder> dataSource) The location of the channel data. |
Channel.Builder | dataSource(DataSource dataSource) The location of the channel data. |
Channel.Builder | inputMode(String inputMode) (Optional) The input mode to use for the data channel in a training job. |
Channel.Builder | inputMode(TrainingInputMode inputMode) (Optional) The input mode to use for the data channel in a training job. |
Channel.Builder | recordWrapperType(RecordWrapper recordWrapperType) Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
Channel.Builder | recordWrapperType(String recordWrapperType) Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
default Channel.Builder | shuffleConfig(Consumer<ShuffleConfig.Builder> shuffleConfig) A configuration for a shuffle option for input data in a channel. |
Channel.Builder | shuffleConfig(ShuffleConfig shuffleConfig) A configuration for a shuffle option for input data in a channel. |
Methods inherited from interface CopyableBuilder:
copy

Methods inherited from interface SdkBuilder:
applyMutation, build
Channel.Builder channelName(String channelName)
The name of the channel.
Parameters:
channelName - The name of the channel.

Channel.Builder dataSource(DataSource dataSource)
The location of the channel data.
Parameters:
dataSource - The location of the channel data.

default Channel.Builder dataSource(Consumer<DataSource.Builder> dataSource)
The location of the channel data.
This is a convenience that creates an instance of the DataSource.Builder, avoiding the need to create one manually via DataSource.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataSource(DataSource).
Parameters:
dataSource - a consumer that will call methods on DataSource.Builder
See Also:
dataSource(DataSource)
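As a sketch of the two dataSource overloads (the channel name, bucket, and prefix below are hypothetical), both forms produce the same Channel; the consumer overload simply removes the explicit builder boilerplate:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;
import software.amazon.awssdk.services.sagemaker.model.DataSource;
import software.amazon.awssdk.services.sagemaker.model.S3DataSource;
import software.amazon.awssdk.services.sagemaker.model.S3DataType;

public class DataSourceOverloads {
    // Explicit overload: construct the DataSource yourself.
    static Channel explicitForm() {
        return Channel.builder()
                .channelName("train")
                .dataSource(DataSource.builder()
                        .s3DataSource(S3DataSource.builder()
                                .s3DataType(S3DataType.S3_PREFIX)
                                .s3Uri("s3://example-bucket/train/") // hypothetical location
                                .build())
                        .build())
                .build();
    }

    // Consumer overload: the SDK creates the DataSource.Builder, calls build()
    // when the lambda returns, and passes the result to dataSource(DataSource).
    static Channel consumerForm() {
        return Channel.builder()
                .channelName("train")
                .dataSource(ds -> ds.s3DataSource(s3 -> s3
                        .s3DataType(S3DataType.S3_PREFIX)
                        .s3Uri("s3://example-bucket/train/")))
                .build();
    }

    public static void main(String[] args) {
        // The two forms build equal model objects.
        System.out.println(explicitForm().equals(consumerForm()));
    }
}
```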
Channel.Builder contentType(String contentType)
The MIME type of the data.
Parameters:
contentType - The MIME type of the data.

Channel.Builder compressionType(String compressionType)
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Parameters:
compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also:
CompressionType
Channel.Builder compressionType(CompressionType compressionType)
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Parameters:
compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also:
CompressionType
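A minimal sketch of the two compressionType overloads (channel name hypothetical): the enum overload is type-safe, while the String overload accepts the same value as raw configuration text, and the built model exposes both views:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;
import software.amazon.awssdk.services.sagemaker.model.CompressionType;

public class CompressionTypeOverloads {
    // Enum overload: compile-time checked value.
    static Channel fromEnum() {
        return Channel.builder()
                .channelName("train")
                .compressionType(CompressionType.GZIP)
                .build();
    }

    // String overload: handy when the value arrives from external configuration.
    static Channel fromString() {
        return Channel.builder()
                .channelName("train")
                .compressionType("Gzip")
                .build();
    }

    public static void main(String[] args) {
        // Both overloads store the same value; the model exposes it
        // as an enum and as the underlying string.
        System.out.println(fromEnum().compressionType());
        System.out.println(fromString().compressionTypeAsString());
    }
}
```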
Channel.Builder recordWrapperType(String recordWrapperType)
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
Parameters:
recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also:
RecordWrapper
Channel.Builder recordWrapperType(RecordWrapper recordWrapperType)
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO.
In File mode, leave this field unset or set it to None.
Parameters:
recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also:
RecordWrapper
Channel.Builder inputMode(String inputMode)
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
Parameters:
inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also:
TrainingInputMode
Channel.Builder inputMode(TrainingInputMode inputMode)
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode.
To use a model for incremental training, choose File input mode.
Parameters:
inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, Amazon SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also:
TrainingInputMode
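A sketch of a per-channel override (channel name hypothetical): a job whose AlgorithmSpecification uses Pipe mode overall could still download one channel to the ML storage volume by setting File mode on that channel:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;
import software.amazon.awssdk.services.sagemaker.model.TrainingInputMode;

public class InputModeOverride {
    // This channel's inputMode overrides the job-level TrainingInputMode,
    // so the data is downloaded to the provisioned ML storage volume
    // instead of being streamed from Amazon S3.
    static Channel fileModeChannel() {
        return Channel.builder()
                .channelName("validation")
                .inputMode(TrainingInputMode.FILE)
                .build();
    }

    public static void main(String[] args) {
        System.out.println(fileModeChannel().inputMode());
    }
}
```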
Channel.Builder shuffleConfig(ShuffleConfig shuffleConfig)
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Parameters:
shuffleConfig - A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value. For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
default Channel.Builder shuffleConfig(Consumer<ShuffleConfig.Builder> shuffleConfig)
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with an S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
This is a convenience that creates an instance of the ShuffleConfig.Builder, avoiding the need to create one manually via ShuffleConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to shuffleConfig(ShuffleConfig).
Parameters:
shuffleConfig - a consumer that will call methods on ShuffleConfig.Builder
See Also:
shuffleConfig(ShuffleConfig)
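A minimal sketch of the consumer overload (channel name and seed chosen arbitrarily for illustration): the lambda sets the Seed value that determines the shuffling order, and the SDK builds the ShuffleConfig for you:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;

public class ShuffleConfigExample {
    // Consumer overload: the SDK creates the ShuffleConfig.Builder, and the
    // lambda only has to set the Seed that fixes the shuffling order.
    static Channel shuffled() {
        return Channel.builder()
                .channelName("train")
                .shuffleConfig(sc -> sc.seed(1234L)) // arbitrary example seed
                .build();
    }

    public static void main(String[] args) {
        System.out.println(shuffled().shuffleConfig().seed());
    }
}
```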
Copyright © 2017 Amazon Web Services, Inc. All Rights Reserved.