AWS SDK for C++ 1.9.123
Aws::SageMaker::Model::InputConfig Class Reference

#include <InputConfig.h>

Public Member Functions

 InputConfig ()
 
 InputConfig (Aws::Utils::Json::JsonView jsonValue)
 
InputConfig & operator= (Aws::Utils::Json::JsonView jsonValue)
 
Aws::Utils::Json::JsonValue Jsonize () const
 
const Aws::String & GetS3Uri () const
 
bool S3UriHasBeenSet () const
 
void SetS3Uri (const Aws::String &value)
 
void SetS3Uri (Aws::String &&value)
 
void SetS3Uri (const char *value)
 
InputConfig & WithS3Uri (const Aws::String &value)
 
InputConfig & WithS3Uri (Aws::String &&value)
 
InputConfig & WithS3Uri (const char *value)
 
const Aws::String & GetDataInputConfig () const
 
bool DataInputConfigHasBeenSet () const
 
void SetDataInputConfig (const Aws::String &value)
 
void SetDataInputConfig (Aws::String &&value)
 
void SetDataInputConfig (const char *value)
 
InputConfig & WithDataInputConfig (const Aws::String &value)
 
InputConfig & WithDataInputConfig (Aws::String &&value)
 
InputConfig & WithDataInputConfig (const char *value)
 
const Framework & GetFramework () const
 
bool FrameworkHasBeenSet () const
 
void SetFramework (const Framework &value)
 
void SetFramework (Framework &&value)
 
InputConfig & WithFramework (const Framework &value)
 
InputConfig & WithFramework (Framework &&value)
 
const Aws::String & GetFrameworkVersion () const
 
bool FrameworkVersionHasBeenSet () const
 
void SetFrameworkVersion (const Aws::String &value)
 
void SetFrameworkVersion (Aws::String &&value)
 
void SetFrameworkVersion (const char *value)
 
InputConfig & WithFrameworkVersion (const Aws::String &value)
 
InputConfig & WithFrameworkVersion (Aws::String &&value)
 
InputConfig & WithFrameworkVersion (const char *value)
 

Detailed Description

Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

See Also:

AWS API Reference

Definition at line 34 of file InputConfig.h.
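
As a quick orientation, here is a minimal sketch of how this model object is typically populated before being attached to a SageMaker compilation job; the bucket name, input shape, and helper function are illustrative assumptions, not values required by the SDK:

#include <aws/sagemaker/model/InputConfig.h>
#include <aws/sagemaker/model/Framework.h>

// Sketch only: builds an InputConfig for a TensorFlow model stored in S3.
// The URI and shape below are placeholders.
Aws::SageMaker::Model::InputConfig BuildExampleInputConfig()
{
    Aws::SageMaker::Model::InputConfig config;
    config.SetS3Uri("s3://my-bucket/output/model.tar.gz");     // must be a .tar.gz archive
    config.SetDataInputConfig("{\"input\":[1,1024,1024,3]}");  // NHWC shape for TensorFlow
    config.SetFramework(Aws::SageMaker::Model::Framework::TENSORFLOW);
    return config;
}

// The With* overloads return *this, so the same object can also be built fluently:
//   auto config = Aws::SageMaker::Model::InputConfig()
//                     .WithS3Uri("s3://my-bucket/output/model.tar.gz")
//                     .WithDataInputConfig("{\"input\":[1,1024,1024,3]}")
//                     .WithFramework(Aws::SageMaker::Model::Framework::TENSORFLOW);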

Constructor & Destructor Documentation

◆ InputConfig() [1/2]

Aws::SageMaker::Model::InputConfig::InputConfig ( )

◆ InputConfig() [2/2]

Aws::SageMaker::Model::InputConfig::InputConfig ( Aws::Utils::Json::JsonView  jsonValue)

Member Function Documentation

◆ DataInputConfigHasBeenSet()

bool Aws::SageMaker::Model::InputConfig::DataInputConfigHasBeenSet ( ) const
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 320 of file InputConfig.h.

◆ FrameworkHasBeenSet()

bool Aws::SageMaker::Model::InputConfig::FrameworkHasBeenSet ( ) const
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 999 of file InputConfig.h.

◆ FrameworkVersionHasBeenSet()

bool Aws::SageMaker::Model::InputConfig::FrameworkVersionHasBeenSet ( ) const
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1042 of file InputConfig.h.

◆ GetDataInputConfig()

const Aws::String& Aws::SageMaker::Model::InputConfig::GetDataInputConfig ( ) const
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific.

  • TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

    • Examples for one input:

      • If using the console, {"input":[1,1024,1024,3]}

      • If using the CLI, {\"input\":[1,1024,1024,3]}

    • Examples for two inputs:

      • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}

      • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}

  • KERAS: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.

    • Examples for one input:

      • If using the console, {"input_1":[1,3,224,224]}

      • If using the CLI, {\"input_1\":[1,3,224,224]}

    • Examples for two inputs:

      • If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}

      • If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}

  • MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

    • Examples for one input:

      • If using the console, {"data":[1,3,1024,1024]}

      • If using the CLI, {\"data\":[1,3,1024,1024]}

    • Examples for two inputs:

      • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}

      • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}

  • PyTorch: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

    • Examples for one input in dictionary format:

      • If using the console, {"input0":[1,3,224,224]}

      • If using the CLI, {\"input0\":[1,3,224,224]}

    • Example for one input in list format: [[1,3,224,224]]

    • Examples for two inputs in dictionary format:

      • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}

      • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

    • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]

  • XGBOOST: input data name and shape are not needed.

DataInputConfig supports the following parameters for CoreML OutputConfig$TargetDevice (ML Model format):

  • shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:

    • Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}

    • Enumerated shapes. Some models are trained to work only on a select set of input shapes. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}

  • default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}

  • type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image instead; the Image input type requires additional input parameters such as bias and scale.

  • bias: If the input type is an Image, you need to provide the bias vector.

  • scale: If the input type is an Image, you need to provide a scale factor.

CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

  • Tensor type input:

    • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}

  • Tensor type input without input name (PyTorch):

    • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]

  • Image type input:

    • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}

    • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

  • Image type input without input name (PyTorch):

    • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]

    • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

Depending on the model format, DataInputConfig requires the following parameters for ml_eia2 OutputConfig:TargetDevice.

  • For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig:CompilerOptions if the model does not use TensorFlow's default signature def key. For example:

    • "DataInputConfig": {"inputs": [1, 224, 224, 3]}

    • "CompilerOptions": {"signature_def_key": "serving_custom"}

  • For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig:CompilerOptions. For example:

    • "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}

    • "CompilerOptions": {"output_names": ["output_tensor:0"]}

Definition at line 209 of file InputConfig.h.
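
Because DataInputConfig is carried as a raw JSON string, a C++ raw string literal keeps the embedded quotes readable. A small sketch, with illustrative input names and shapes, that also validates the string with the SDK's generic JSON helper before use:

#include <aws/core/utils/json/JsonSerializer.h>
#include <aws/sagemaker/model/InputConfig.h>

// Sketch only: two-input MXNet-style shape map, written as a raw string literal
// so the inner quotes do not need escaping.
bool SetAndValidateDataInputConfig(Aws::SageMaker::Model::InputConfig& config)
{
    config.SetDataInputConfig(R"({"var1": [1,1,28,28], "var2": [1,1,28,28]})");

    // Optional sanity check: parse the stored string back to confirm it is valid JSON.
    Aws::Utils::Json::JsonValue parsed(config.GetDataInputConfig());
    return parsed.WasParseSuccessful();
}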

◆ GetFramework()

const Framework& Aws::SageMaker::Model::InputConfig::GetFramework ( ) const
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 993 of file InputConfig.h.

◆ GetFrameworkVersion()

const Aws::String& Aws::SageMaker::Model::InputConfig::GetFrameworkVersion ( ) const
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1033 of file InputConfig.h.

◆ GetS3Uri()

const Aws::String& Aws::SageMaker::Model::InputConfig::GetS3Uri ( ) const
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 48 of file InputConfig.h.

◆ Jsonize()

Aws::Utils::Json::JsonValue Aws::SageMaker::Model::InputConfig::Jsonize ( ) const
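
A brief sketch of the JSON round trip that the JsonView constructor, operator=, and Jsonize() support; the field names follow the SageMaker API shape, and the literal document below is an assumption used only for illustration:

#include <aws/core/utils/json/JsonSerializer.h>
#include <aws/sagemaker/model/InputConfig.h>

// Sketch only: deserialize an InputConfig from a JSON document and serialize it back.
Aws::String RoundTripInputConfig()
{
    Aws::Utils::Json::JsonValue doc(R"({
        "S3Uri": "s3://my-bucket/output/model.tar.gz",
        "Framework": "TENSORFLOW",
        "DataInputConfig": "{\"input\":[1,224,224,3]}"
    })");

    Aws::SageMaker::Model::InputConfig config(doc.View());   // construct from JsonView
    Aws::Utils::Json::JsonValue out = config.Jsonize();      // back to a JsonValue
    return out.View().WriteCompact();                        // compact JSON string
}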

◆ operator=()

InputConfig& Aws::SageMaker::Model::InputConfig::operator= ( Aws::Utils::Json::JsonView  jsonValue)

◆ S3UriHasBeenSet()

bool Aws::SageMaker::Model::InputConfig::S3UriHasBeenSet ( ) const
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 55 of file InputConfig.h.

◆ SetDataInputConfig() [1/3]

void Aws::SageMaker::Model::InputConfig::SetDataInputConfig ( Aws::String &&  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 542 of file InputConfig.h.

◆ SetDataInputConfig() [2/3]

void Aws::SageMaker::Model::InputConfig::SetDataInputConfig ( const Aws::String &  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 431 of file InputConfig.h.

◆ SetDataInputConfig() [3/3]

void Aws::SageMaker::Model::InputConfig::SetDataInputConfig ( const char *  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 653 of file InputConfig.h.

◆ SetFramework() [1/2]

void Aws::SageMaker::Model::InputConfig::SetFramework ( const Framework &  value)
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 1005 of file InputConfig.h.

◆ SetFramework() [2/2]

void Aws::SageMaker::Model::InputConfig::SetFramework ( Framework &&  value)
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 1011 of file InputConfig.h.

◆ SetFrameworkVersion() [1/3]

void Aws::SageMaker::Model::InputConfig::SetFrameworkVersion ( Aws::String &&  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1060 of file InputConfig.h.

◆ SetFrameworkVersion() [2/3]

void Aws::SageMaker::Model::InputConfig::SetFrameworkVersion ( const Aws::String &  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1051 of file InputConfig.h.

◆ SetFrameworkVersion() [3/3]

void Aws::SageMaker::Model::InputConfig::SetFrameworkVersion ( const char *  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1069 of file InputConfig.h.

◆ SetS3Uri() [1/3]

void Aws::SageMaker::Model::InputConfig::SetS3Uri ( Aws::String &&  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 69 of file InputConfig.h.

◆ SetS3Uri() [2/3]

void Aws::SageMaker::Model::InputConfig::SetS3Uri ( const Aws::String &  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 62 of file InputConfig.h.

◆ SetS3Uri() [3/3]

void Aws::SageMaker::Model::InputConfig::SetS3Uri ( const char *  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 76 of file InputConfig.h.

◆ WithDataInputConfig() [1/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithDataInputConfig ( Aws::String &&  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 875 of file InputConfig.h.

◆ WithDataInputConfig() [2/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithDataInputConfig ( const Aws::String &  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 764 of file InputConfig.h.

◆ WithDataInputConfig() [3/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithDataInputConfig ( const char *  value)
inline

Specifies the name and shape of the expected data inputs for your trained model in JSON dictionary form. The data inputs are InputConfig$Framework specific. See GetDataInputConfig() for the full description of the supported formats.

Definition at line 986 of file InputConfig.h.

◆ WithFramework() [1/2]

InputConfig& Aws::SageMaker::Model::InputConfig::WithFramework ( const Framework &  value)
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 1017 of file InputConfig.h.

◆ WithFramework() [2/2]

InputConfig& Aws::SageMaker::Model::InputConfig::WithFramework ( Framework &&  value)
inline

Identifies the framework in which the model was trained. For example: TENSORFLOW.

Definition at line 1023 of file InputConfig.h.

◆ WithFrameworkVersion() [1/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithFrameworkVersion ( Aws::String &&  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1087 of file InputConfig.h.

◆ WithFrameworkVersion() [2/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithFrameworkVersion ( const Aws::String &  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1078 of file InputConfig.h.

◆ WithFrameworkVersion() [3/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithFrameworkVersion ( const char *  value)
inline

Specifies the framework version to use.

This API field is only supported for PyTorch framework versions 1.4, 1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5, ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.

Definition at line 1096 of file InputConfig.h.

◆ WithS3Uri() [1/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithS3Uri ( Aws::String &&  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 90 of file InputConfig.h.

◆ WithS3Uri() [2/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithS3Uri ( const Aws::String &  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 83 of file InputConfig.h.

◆ WithS3Uri() [3/3]

InputConfig& Aws::SageMaker::Model::InputConfig::WithS3Uri ( const char *  value)
inline

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

Definition at line 97 of file InputConfig.h.


The documentation for this class was generated from the following file:

InputConfig.h