Class TransformInput

java.lang.Object
software.amazon.awssdk.services.sagemaker.model.TransformInput
All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<TransformInput.Builder,TransformInput>

@Generated("software.amazon.awssdk:codegen") public final class TransformInput extends Object implements SdkPojo, Serializable, ToCopyableBuilder<TransformInput.Builder,TransformInput>

Describes the input source of a transform job and the way the transform job consumes it.
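A minimal sketch of assembling a TransformInput with its builder, assuming CSV input under a hypothetical S3 bucket and prefix:

    TransformInput input = TransformInput.builder()
            .dataSource(TransformDataSource.builder()
                    .s3DataSource(TransformS3DataSource.builder()
                            .s3DataType(S3DataType.S3_PREFIX)
                            .s3Uri("s3://amzn-s3-demo-bucket/batch-input/") // hypothetical bucket and prefix
                            .build())
                    .build())
            .contentType("text/csv")
            .compressionType(CompressionType.NONE)
            .splitType(SplitType.LINE) // one record per line
            .build();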

Method Details

    • dataSource

      public final TransformDataSource dataSource()

      Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

      Returns:
      Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
    • contentType

      public final String contentType()

      The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

      Returns:
      The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
    • compressionType

      public final CompressionType compressionType()

      If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

      If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().

      Returns:
      If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
      See Also:
        CompressionType
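      A short sketch of handling the UNKNOWN_TO_SDK_VERSION case described above, assuming input is an existing TransformInput:

          CompressionType type = input.compressionType();
          if (type == CompressionType.UNKNOWN_TO_SDK_VERSION) {
              // The service returned a value newer than this SDK version;
              // fall back to the raw string for logging or pass-through handling.
              String raw = input.compressionTypeAsString();
              System.out.println("Unrecognized CompressionType: " + raw);
          }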
    • compressionTypeAsString

      public final String compressionTypeAsString()

      If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

      If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().

      Returns:
      If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
      See Also:
        CompressionType
    • splitType

      public final SplitType splitType()

      The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

      • RecordIO

      • TFRecord

      When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

      Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

      For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

      If the service returns an enum value that is not available in the current SDK version, splitType will return SplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from splitTypeAsString().

      Returns:
      The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

      • RecordIO

      • TFRecord

      When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

      Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

      For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

      See Also:
        SplitType
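      To make the interaction with BatchStrategy and MaxPayloadInMB concrete, here is a sketch of a CreateTransformJobRequest; the job name is hypothetical, and the other required fields (model name, output, resources) are elided:

          CreateTransformJobRequest request = CreateTransformJobRequest.builder()
                  .transformJobName("example-transform-job")  // hypothetical name
                  .transformInput(input.toBuilder()
                          .splitType(SplitType.LINE)          // split on newline boundaries
                          .build())
                  .batchStrategy(BatchStrategy.MULTI_RECORD)  // pack records up to the payload limit
                  .maxPayloadInMB(6)
                  // modelName, transformOutput, and transformResources are also required
                  .build();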
    • splitTypeAsString

      public final String splitTypeAsString()

      The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

      • RecordIO

      • TFRecord

      When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

      Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

      For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

      If the service returns an enum value that is not available in the current SDK version, splitType will return SplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from splitTypeAsString().

      Returns:
      The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

      • RecordIO

      • TFRecord

      When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

      Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

      For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

      See Also:
        SplitType
    • toBuilder

      public TransformInput.Builder toBuilder()
      Description copied from interface: ToCopyableBuilder
      Take this object and create a builder that contains all of the current property values of this object.
      Specified by:
      toBuilder in interface ToCopyableBuilder<TransformInput.Builder,TransformInput>
      Returns:
      a builder for type T
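      For example, toBuilder() supports copy-and-modify of an immutable instance:

          TransformInput gzipped = input.toBuilder()
                  .compressionType(CompressionType.GZIP) // change one field, keep the rest
                  .build();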
    • builder

      public static TransformInput.Builder builder()
    • serializableBuilderClass

      public static Class<? extends TransformInput.Builder> serializableBuilderClass()
    • hashCode

      public final int hashCode()
      Overrides:
      hashCode in class Object
    • equals

      public final boolean equals(Object obj)
      Overrides:
      equals in class Object
    • equalsBySdkFields

      public final boolean equalsBySdkFields(Object obj)
      Description copied from interface: SdkPojo
      Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.

      If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.

      Specified by:
      equalsBySdkFields in interface SdkPojo
      Parameters:
      obj - the object to be compared with
      Returns:
      true if the other object is equal to this object by SDK fields; false otherwise.
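      A small sketch of the behavior, assuming two instances with identical modeled fields:

          TransformInput a = TransformInput.builder().contentType("text/csv").build();
          TransformInput b = a.toBuilder().build();
          boolean sameFields = a.equalsBySdkFields(b); // true: all modeled fields match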
    • toString

      public final String toString()
      Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
      Overrides:
      toString in class Object
    • getValueForField

      public final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
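      A hedged usage sketch, assuming the modeled field name is "ContentType" (field names follow the service model, not the Java getter names):

          Optional<String> contentType = input.getValueForField("ContentType", String.class);
          contentType.ifPresent(ct -> System.out.println("Content type: " + ct));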
    • sdkFields

      public final List<SdkField<?>> sdkFields()
      Specified by:
      sdkFields in interface SdkPojo
      Returns:
      List of SdkField in this POJO. May be an empty list but should never be null.