AWS SDK for C++ 1.9.103
Aws::SageMaker::Model::TransformInput Class Reference

#include <TransformInput.h>

Public Member Functions

 TransformInput ()
 
 TransformInput (Aws::Utils::Json::JsonView jsonValue)
 
TransformInput & operator= (Aws::Utils::Json::JsonView jsonValue)
 
Aws::Utils::Json::JsonValue Jsonize () const
 
const TransformDataSource & GetDataSource () const
 
bool DataSourceHasBeenSet () const
 
void SetDataSource (const TransformDataSource &value)
 
void SetDataSource (TransformDataSource &&value)
 
TransformInput & WithDataSource (const TransformDataSource &value)
 
TransformInput & WithDataSource (TransformDataSource &&value)
 
const Aws::String & GetContentType () const
 
bool ContentTypeHasBeenSet () const
 
void SetContentType (const Aws::String &value)
 
void SetContentType (Aws::String &&value)
 
void SetContentType (const char *value)
 
TransformInput & WithContentType (const Aws::String &value)
 
TransformInput & WithContentType (Aws::String &&value)
 
TransformInput & WithContentType (const char *value)
 
const CompressionType & GetCompressionType () const
 
bool CompressionTypeHasBeenSet () const
 
void SetCompressionType (const CompressionType &value)
 
void SetCompressionType (CompressionType &&value)
 
TransformInput & WithCompressionType (const CompressionType &value)
 
TransformInput & WithCompressionType (CompressionType &&value)
 
const SplitType & GetSplitType () const
 
bool SplitTypeHasBeenSet () const
 
void SetSplitType (const SplitType &value)
 
void SetSplitType (SplitType &&value)
 
TransformInput & WithSplitType (const SplitType &value)
 
TransformInput & WithSplitType (SplitType &&value)
 

Detailed Description

Describes the input source of a transform job and the way the transform job consumes it.

See Also:

AWS API Reference

Definition at line 35 of file TransformInput.h.
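
For orientation, the following is a minimal sketch of constructing a TransformInput with the fluent With* setters. The S3 URI, content type, and enum values are illustrative assumptions rather than values taken from this page; TransformS3DataSource and S3DataType are the companion classes from the same model namespace.

    #include <aws/sagemaker/model/TransformInput.h>
    #include <aws/sagemaker/model/TransformDataSource.h>
    #include <aws/sagemaker/model/TransformS3DataSource.h>

    using namespace Aws::SageMaker::Model;

    TransformInput BuildTransformInput()
    {
        // Point the channel at an S3 prefix (bucket and prefix are hypothetical).
        TransformS3DataSource s3Source;
        s3Source.WithS3DataType(S3DataType::S3Prefix)
                .WithS3Uri("s3://my-bucket/transform-input/");

        TransformDataSource dataSource;
        dataSource.WithS3DataSource(s3Source);

        // Each With* setter returns *this, so configuration chains fluently.
        return TransformInput()
            .WithDataSource(dataSource)
            .WithContentType("text/csv")
            .WithCompressionType(CompressionType::None)
            .WithSplitType(SplitType::Line);
    }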

Constructor & Destructor Documentation

◆ TransformInput() [1/2]

Aws::SageMaker::Model::TransformInput::TransformInput ( )

◆ TransformInput() [2/2]

Aws::SageMaker::Model::TransformInput::TransformInput ( Aws::Utils::Json::JsonView  jsonValue)

Member Function Documentation

◆ CompressionTypeHasBeenSet()

bool Aws::SageMaker::Model::TransformInput::CompressionTypeHasBeenSet ( ) const
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 150 of file TransformInput.h.
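
The *HasBeenSet() accessors report whether a field was explicitly assigned, which is how the SDK decides which fields to serialize. A short sketch of the pattern; CompressionType::Gzip is assumed from the service's documented None | Gzip values:

    #include <cassert>
    #include <aws/sagemaker/model/TransformInput.h>

    using namespace Aws::SageMaker::Model;

    void CheckHasBeenSet()
    {
        TransformInput input;
        // No field has been assigned yet, so the flag is false.
        assert(!input.CompressionTypeHasBeenSet());

        // Assigning a value flips the internal "has been set" flag,
        // which marks the field for serialization.
        input.SetCompressionType(CompressionType::Gzip);
        assert(input.CompressionTypeHasBeenSet());
    }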

◆ ContentTypeHasBeenSet()

bool Aws::SageMaker::Model::TransformInput::ContentTypeHasBeenSet ( ) const
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 93 of file TransformInput.h.

◆ DataSourceHasBeenSet()

bool Aws::SageMaker::Model::TransformInput::DataSourceHasBeenSet ( ) const
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 54 of file TransformInput.h.

◆ GetCompressionType()

const CompressionType& Aws::SageMaker::Model::TransformInput::GetCompressionType ( ) const
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 143 of file TransformInput.h.

◆ GetContentType()

const Aws::String& Aws::SageMaker::Model::TransformInput::GetContentType ( ) const
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 86 of file TransformInput.h.

◆ GetDataSource()

const TransformDataSource& Aws::SageMaker::Model::TransformInput::GetDataSource ( ) const
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 48 of file TransformInput.h.
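
As a sketch, the data source is typically populated through the companion TransformDataSource and TransformS3DataSource model classes; the bucket and prefix below are hypothetical.

    #include <aws/sagemaker/model/TransformInput.h>
    #include <aws/sagemaker/model/TransformDataSource.h>
    #include <aws/sagemaker/model/TransformS3DataSource.h>

    using namespace Aws::SageMaker::Model;

    void ConfigureDataSource(TransformInput& input)
    {
        input.SetDataSource(TransformDataSource().WithS3DataSource(
            TransformS3DataSource()
                .WithS3DataType(S3DataType::S3Prefix)
                .WithS3Uri("s3://my-bucket/batch-input/")));

        // Read the configured source back through the const accessor.
        const TransformDataSource& source = input.GetDataSource();
        (void)source;
    }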

◆ GetSplitType()

const SplitType& Aws::SageMaker::Model::TransformInput::GetSplitType ( ) const
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 211 of file TransformInput.h.
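
The BatchStrategy and MaxPayloadInMB parameters mentioned above live on the transform-job request rather than on TransformInput itself. A hedged sketch of how the pieces fit together, assuming the CreateTransformJobRequest class from the same model namespace; the job name, payload cap, and content type are illustrative:

    #include <aws/sagemaker/model/CreateTransformJobRequest.h>
    #include <aws/sagemaker/model/TransformInput.h>

    using namespace Aws::SageMaker::Model;

    CreateTransformJobRequest MakeRequest()
    {
        // Line-split CSV records, packed into mini-batches up to 6 MB each.
        // (A complete request also needs ModelName, TransformOutput, and
        // TransformResources.)
        return CreateTransformJobRequest()
            .WithTransformJobName("my-transform-job")        // hypothetical name
            .WithBatchStrategy(BatchStrategy::MultiRecord)   // many records per request
            .WithMaxPayloadInMB(6)                           // mini-batch size cap
            .WithTransformInput(TransformInput()
                                    .WithSplitType(SplitType::Line)
                                    .WithContentType("text/csv"));
    }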

◆ Jsonize()

Aws::Utils::Json::JsonValue Aws::SageMaker::Model::TransformInput::Jsonize ( ) const
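
A small sketch of serialization: Jsonize() emits only the fields whose HasBeenSet flags are true. The key name follows the service's wire format, and the exact output shown is an assumption:

    #include <iostream>
    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/sagemaker/model/TransformInput.h>

    using namespace Aws::SageMaker::Model;

    void PrintAsJson()
    {
        TransformInput input;
        input.SetContentType("text/csv");

        // Only explicitly set fields appear in the resulting document.
        Aws::Utils::Json::JsonValue json = input.Jsonize();
        std::cout << json.View().WriteReadable() << std::endl;
        // Expected (illustrative): { "ContentType": "text/csv" }
    }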

◆ operator=()

TransformInput& Aws::SageMaker::Model::TransformInput::operator= ( Aws::Utils::Json::JsonView  jsonValue)
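
Conversely, operator=(JsonView), like the matching constructor, repopulates the object from a JSON document; the payload below is illustrative:

    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/sagemaker/model/TransformInput.h>

    using namespace Aws::SageMaker::Model;

    TransformInput FromJson()
    {
        Aws::Utils::Json::JsonValue doc(Aws::String(
            R"({"ContentType":"text/csv","SplitType":"Line"})"));

        TransformInput input;
        input = doc.View();  // operator=(JsonView) sets the present fields
        return input;
    }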

◆ SetCompressionType() [1/2]

void Aws::SageMaker::Model::TransformInput::SetCompressionType ( CompressionType &&  value)
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 164 of file TransformInput.h.

◆ SetCompressionType() [2/2]

void Aws::SageMaker::Model::TransformInput::SetCompressionType ( const CompressionType &  value)
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 157 of file TransformInput.h.

◆ SetContentType() [1/3]

void Aws::SageMaker::Model::TransformInput::SetContentType ( Aws::String &&  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 107 of file TransformInput.h.

◆ SetContentType() [2/3]

void Aws::SageMaker::Model::TransformInput::SetContentType ( const Aws::String &  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 100 of file TransformInput.h.

◆ SetContentType() [3/3]

void Aws::SageMaker::Model::TransformInput::SetContentType ( const char *  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 114 of file TransformInput.h.

◆ SetDataSource() [1/2]

void Aws::SageMaker::Model::TransformInput::SetDataSource ( const TransformDataSource &  value)
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 60 of file TransformInput.h.

◆ SetDataSource() [2/2]

void Aws::SageMaker::Model::TransformInput::SetDataSource ( TransformDataSource &&  value)
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 66 of file TransformInput.h.

◆ SetSplitType() [1/2]

void Aws::SageMaker::Model::TransformInput::SetSplitType ( const SplitType &  value)
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 275 of file TransformInput.h.

◆ SetSplitType() [2/2]

void Aws::SageMaker::Model::TransformInput::SetSplitType ( SplitType &&  value)
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 307 of file TransformInput.h.

◆ SplitTypeHasBeenSet()

bool Aws::SageMaker::Model::TransformInput::SplitTypeHasBeenSet ( ) const
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 243 of file TransformInput.h.

◆ WithCompressionType() [1/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithCompressionType ( CompressionType &&  value)
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 178 of file TransformInput.h.

◆ WithCompressionType() [2/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithCompressionType ( const CompressionType &  value)
inline

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

Definition at line 171 of file TransformInput.h.

◆ WithContentType() [1/3]

TransformInput& Aws::SageMaker::Model::TransformInput::WithContentType ( Aws::String &&  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 128 of file TransformInput.h.

◆ WithContentType() [2/3]

TransformInput& Aws::SageMaker::Model::TransformInput::WithContentType ( const Aws::String &  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 121 of file TransformInput.h.

◆ WithContentType() [3/3]

TransformInput& Aws::SageMaker::Model::TransformInput::WithContentType ( const char *  value)
inline

The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

Definition at line 135 of file TransformInput.h.

◆ WithDataSource() [1/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithDataSource ( const TransformDataSource &  value)
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 72 of file TransformInput.h.

◆ WithDataSource() [2/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithDataSource ( TransformDataSource &&  value)
inline

Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

Definition at line 78 of file TransformInput.h.

◆ WithSplitType() [1/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithSplitType ( const SplitType &  value)
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 339 of file TransformInput.h.

◆ WithSplitType() [2/2]

TransformInput& Aws::SageMaker::Model::TransformInput::WithSplitType ( SplitType &&  value)
inline

The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

  • RecordIO

  • TFRecord

When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.

Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.

For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

Definition at line 371 of file TransformInput.h.


The documentation for this class was generated from the following file:

  • TransformInput.h