Interface TransformInput.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<TransformInput.Builder, TransformInput>, SdkBuilder<TransformInput.Builder, TransformInput>, SdkPojo
- Enclosing class:
TransformInput
-
Method Summary
All of the following methods return TransformInput.Builder.

- compressionType(String compressionType)
  If your transform data is compressed, specify the compression type.
- compressionType(CompressionType compressionType)
  If your transform data is compressed, specify the compression type.
- contentType(String contentType)
  The multipurpose internet mail extension (MIME) type of the data.
- dataSource(Consumer<TransformDataSource.Builder> dataSource) (default method)
  Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- dataSource(TransformDataSource dataSource)
  Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- splitType(String splitType)
  The method to use to split the transform job's data files into smaller batches.
- splitType(SplitType splitType)
  The method to use to split the transform job's data files into smaller batches.

Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder: copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder: applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo: equalsBySdkFields, sdkFields
-
Method Details
-
dataSource

TransformInput.Builder dataSource(TransformDataSource dataSource)

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- Parameters:
  dataSource - Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
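A minimal sketch of this non-convenience form, building the TransformDataSource explicitly before passing it in. It assumes the standard SageMaker model classes (TransformDataSource, TransformS3DataSource, S3DataType) from the software.amazon.awssdk.services.sagemaker.model package; the S3 URI is a placeholder:

    import software.amazon.awssdk.services.sagemaker.model.S3DataType;
    import software.amazon.awssdk.services.sagemaker.model.TransformDataSource;
    import software.amazon.awssdk.services.sagemaker.model.TransformInput;
    import software.amazon.awssdk.services.sagemaker.model.TransformS3DataSource;

    // Build the S3 data source first, then hand it to the TransformInput builder.
    TransformDataSource dataSource = TransformDataSource.builder()
            .s3DataSource(TransformS3DataSource.builder()
                    .s3DataType(S3DataType.S3_PREFIX)         // read every object under the prefix
                    .s3Uri("s3://amzn-s3-demo-bucket/input/") // placeholder URI
                    .build())
            .build();

    TransformInput input = TransformInput.builder()
            .dataSource(dataSource)
            .build();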
-
dataSource

default TransformInput.Builder dataSource(Consumer<TransformDataSource.Builder> dataSource)

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

This is a convenience method that creates an instance of the TransformDataSource.Builder, avoiding the need to create one manually via TransformDataSource.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataSource(TransformDataSource).
- Parameters:
  dataSource - a consumer that will call methods on TransformDataSource.Builder
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  dataSource(TransformDataSource)
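The same data source expressed with this Consumer shorthand. A sketch, assuming the nested s3DataSource consumer overload that follows the SDK's usual builder pattern; the S3 URI is a placeholder:

    TransformInput input = TransformInput.builder()
            .dataSource(ds -> ds.s3DataSource(s3 -> s3
                    .s3DataType(S3DataType.S3_PREFIX)
                    .s3Uri("s3://amzn-s3-demo-bucket/input/"))) // placeholder URI
            .build();

The lambda receives a TransformDataSource.Builder, so no explicit builder() or build() calls are needed for the nested object.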
-
contentType

TransformInput.Builder contentType(String contentType)

The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Parameters:
  contentType - The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
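A sketch of setting the MIME type alongside the data source; text/csv is just one common choice, not a requirement, and the S3 URI is a placeholder:

    TransformInput input = TransformInput.builder()
            .dataSource(ds -> ds.s3DataSource(s3 -> s3
                    .s3DataType(S3DataType.S3_PREFIX)
                    .s3Uri("s3://amzn-s3-demo-bucket/input/"))) // placeholder URI
            .contentType("text/csv") // MIME type sent with each HTTP call
            .build();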
-
compressionType

TransformInput.Builder compressionType(String compressionType)

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Parameters:
  compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  CompressionType
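With the String overload the value is passed through as-is, so a typo is only caught by the service. A minimal sketch (a complete TransformInput would also need a data source):

    // "Gzip" and "None" mirror the CompressionType enum values.
    TransformInput input = TransformInput.builder()
            .compressionType("Gzip")
            .build();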
-
compressionType

TransformInput.Builder compressionType(CompressionType compressionType)

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Parameters:
  compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  CompressionType
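The enum overload is otherwise identical but compile-time checked, which makes it the safer choice. Sketch:

    import software.amazon.awssdk.services.sagemaker.model.CompressionType;

    TransformInput input = TransformInput.builder()
            .compressionType(CompressionType.GZIP)
            .build();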
-
splitType

TransformInput.Builder splitType(String splitType)

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- Parameters:
  splitType - The method to use to split the transform job's data files into smaller batches. See the method description above for the supported values and batching behavior.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  SplitType
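A sketch for line-delimited input such as CSV or JSON Lines (a complete TransformInput would also need a data source):

    // Split on newline boundaries so each request carries whole records.
    TransformInput input = TransformInput.builder()
            .splitType("Line")
            .build();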
-
splitType

TransformInput.Builder splitType(SplitType splitType)

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- Parameters:
  splitType - The method to use to split the transform job's data files into smaller batches. See the method description above for the supported values and batching behavior.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  SplitType
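Putting the four builder properties together with the enum overloads. A sketch under the same assumptions as above: the S3 URI is a placeholder and application/x-recordio is just an example MIME type for RecordIO data:

    import software.amazon.awssdk.services.sagemaker.model.CompressionType;
    import software.amazon.awssdk.services.sagemaker.model.S3DataType;
    import software.amazon.awssdk.services.sagemaker.model.SplitType;
    import software.amazon.awssdk.services.sagemaker.model.TransformInput;

    TransformInput input = TransformInput.builder()
            .dataSource(ds -> ds.s3DataSource(s3 -> s3
                    .s3DataType(S3DataType.S3_PREFIX)
                    .s3Uri("s3://amzn-s3-demo-bucket/input/"))) // placeholder URI
            .contentType("application/x-recordio")
            .compressionType(CompressionType.NONE)
            .splitType(SplitType.RECORD_IO) // record-oriented binary splitting
            .build();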
-