Class TransformInput
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<TransformInput.Builder,TransformInput>
Describes the input source of a transform job and the way the transform job consumes it.
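For orientation, here is a minimal sketch of constructing a TransformInput with its builder; the bucket name and field values are placeholders, not part of this reference.

    import software.amazon.awssdk.services.sagemaker.model.CompressionType;
    import software.amazon.awssdk.services.sagemaker.model.S3DataType;
    import software.amazon.awssdk.services.sagemaker.model.SplitType;
    import software.amazon.awssdk.services.sagemaker.model.TransformDataSource;
    import software.amazon.awssdk.services.sagemaker.model.TransformInput;
    import software.amazon.awssdk.services.sagemaker.model.TransformS3DataSource;

    // Describe CSV input stored under an S3 prefix (bucket/prefix are placeholders).
    TransformInput input = TransformInput.builder()
            .dataSource(TransformDataSource.builder()
                    .s3DataSource(TransformS3DataSource.builder()
                            .s3DataType(S3DataType.S3_PREFIX)
                            .s3Uri("s3://amzn-s3-demo-bucket/batch-input/")
                            .build())
                    .build())
            .contentType("text/csv")               // MIME type sent with each HTTP call
            .compressionType(CompressionType.NONE) // input objects are not compressed
            .splitType(SplitType.LINE)             // split records on newline boundaries
            .build();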
Nested Class Summary
- Nested Classes:
- static interface TransformInput.Builder
Method Summary
Modifier and TypeMethodDescriptionstatic TransformInput.Builderbuilder()final CompressionTypeIf your transform data is compressed, specify the compression type.final StringIf your transform data is compressed, specify the compression type.final StringThe multipurpose internet mail extension (MIME) type of the data.final TransformDataSourceDescribes the location of the channel data, which is, the S3 location of the input data that the model can consume.final booleanfinal booleanequalsBySdkFields(Object obj) Indicates whether some other object is "equal to" this one by SDK fields.final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz) final inthashCode()static Class<? extends TransformInput.Builder> final SplitTypeThe method to use to split the transform job's data files into smaller batches.final StringThe method to use to split the transform job's data files into smaller batches.Take this object and create a builder that contains all of the current property values of this object.final StringtoString()Returns a string representation of this object.Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
Method Details
dataSource
Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- Returns:
- Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
contentType
The Multipurpose Internet Mail Extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Returns:
- The Multipurpose Internet Mail Extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
compressionType
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().
- Returns:
- If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- See Also:
- CompressionType
compressionTypeAsString
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().
- Returns:
- If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- See Also:
- CompressionType
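A minimal sketch of the defensive pattern described above, assuming a TransformInput named input (for example, the one built in the earlier sketch):

    // If the service introduced a new enum value after this SDK version was
    // released, compressionType() returns UNKNOWN_TO_SDK_VERSION and the raw
    // string is still available from compressionTypeAsString().
    CompressionType type = input.compressionType();
    if (type == CompressionType.UNKNOWN_TO_SDK_VERSION) {
        System.out.println("Unrecognized compression type: " + input.compressionTypeAsString());
    } else {
        System.out.println("Compression type: " + type);
    }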
splitType
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for
SplitTypeisNone, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter toLineto split records on a newline character boundary.SplitTypealso supports a number of record-oriented binary data formats. Currently, the supported record formats are:-
RecordIO
-
TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the
BatchStrategyandMaxPayloadInMBparameters. When the value ofBatchStrategyisMultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to theMaxPayloadInMBlimit. If the value ofBatchStrategyisSingleRecord, Amazon SageMaker sends individual records in each request.Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of
BatchStrategyis set toSingleRecord. Padding is not removed if the value ofBatchStrategyis set toMultiRecord.For more information about
RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information aboutTFRecord, see Consuming TFRecord data in the TensorFlow documentation.If the service returns an enum value that is not available in the current SDK version,
splitTypewill returnSplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available fromsplitTypeAsString().- Returns:
- The method to use to split the transform job's data files into smaller batches. Splitting is necessary
when the total size of each object is too large to fit in a single request. You can also use data
splitting to improve performance by processing multiple concurrent mini-batches. The default value for
SplitTypeisNone, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter toLineto split records on a newline character boundary.SplitTypealso supports a number of record-oriented binary data formats. Currently, the supported record formats are:-
RecordIO
-
TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the
BatchStrategyandMaxPayloadInMBparameters. When the value ofBatchStrategyisMultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to theMaxPayloadInMBlimit. If the value ofBatchStrategyisSingleRecord, Amazon SageMaker sends individual records in each request.Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of
BatchStrategyis set toSingleRecord. Padding is not removed if the value ofBatchStrategyis set toMultiRecord.For more information about
RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information aboutTFRecord, see Consuming TFRecord data in the TensorFlow documentation. -
- See Also:
-
-
splitTypeAsString
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
If the service returns an enum value that is not available in the current SDK version, splitType will return SplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from splitTypeAsString().
- Returns:
- The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats; currently, the supported record formats are RecordIO and TFRecord. When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request. Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord. For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- See Also:
- SplitType
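Note that BatchStrategy and MaxPayloadInMB are parameters of the transform job request rather than of TransformInput. A sketch of how the pieces fit together, assuming the input object from the earlier sketch plus hypothetical output and resources objects and placeholder job/model names:

    import software.amazon.awssdk.services.sagemaker.model.BatchStrategy;
    import software.amazon.awssdk.services.sagemaker.model.CreateTransformJobRequest;

    // SplitType lives on the TransformInput; BatchStrategy and MaxPayloadInMB
    // live on the CreateTransformJob request itself.
    CreateTransformJobRequest request = CreateTransformJobRequest.builder()
            .transformJobName("my-transform-job")
            .modelName("my-model")
            .transformInput(input)                      // splitType(SplitType.LINE) set above
            .transformOutput(output)                    // hypothetical TransformOutput
            .transformResources(resources)              // hypothetical TransformResources
            .batchStrategy(BatchStrategy.MULTI_RECORD)  // pack records up to the payload limit
            .maxPayloadInMB(6)                          // per-request payload cap
            .build();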
toBuilder
Description copied from interface:ToCopyableBuilderTake this object and create a builder that contains all of the current property values of this object.- Specified by:
toBuilderin interfaceToCopyableBuilder<TransformInput.Builder,TransformInput> - Returns:
- a builder for type T
-
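Because TransformInput is immutable, toBuilder is the way to derive a modified copy. A short sketch, reusing the input object from the earlier example:

    // Copy all current property values, changing only the split type.
    TransformInput lineSplit = input.toBuilder()
            .splitType(SplitType.LINE)
            .build();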
builder
serializableBuilderClass
hashCode
equals
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model. If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
- equalsBySdkFields in interface SdkPojo
- Parameters:
- obj - the object to be compared with
- Returns:
- true if the other object is equal to this object by SDK fields, false otherwise
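A small sketch of the distinction: two independently built objects with the same modeled fields compare equal by SDK fields.

    TransformInput a = TransformInput.builder().contentType("text/csv").build();
    TransformInput b = TransformInput.builder().contentType("text/csv").build();
    boolean same = a.equalsBySdkFields(b); // true: all SDK fields match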
toString
Returns a string representation of this object.
getValueForField
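getValueForField looks up a modeled field reflectively by name. A sketch, reusing the input object from earlier; the field name "ContentType" is assumed to be the model's member name, and the result is an empty Optional if the name is unknown or the value is unset:

    import java.util.Optional;

    // Reflective access to a modeled field by name.
    Optional<String> contentType = input.getValueForField("ContentType", String.class);
    contentType.ifPresent(ct -> System.out.println("ContentType = " + ct));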
sdkFields