AWS SDK for C++  1.9.125
Aws::SageMaker::Model::OutputConfig Class Reference

#include <OutputConfig.h>

Public Member Functions

 OutputConfig ()
 
 OutputConfig (Aws::Utils::Json::JsonView jsonValue)
 
OutputConfig & operator= (Aws::Utils::Json::JsonView jsonValue)
 
Aws::Utils::Json::JsonValue Jsonize () const
 
const Aws::String & GetS3OutputLocation () const
 
bool S3OutputLocationHasBeenSet () const
 
void SetS3OutputLocation (const Aws::String &value)
 
void SetS3OutputLocation (Aws::String &&value)
 
void SetS3OutputLocation (const char *value)
 
OutputConfig & WithS3OutputLocation (const Aws::String &value)
 
OutputConfig & WithS3OutputLocation (Aws::String &&value)
 
OutputConfig & WithS3OutputLocation (const char *value)
 
const TargetDevice & GetTargetDevice () const
 
bool TargetDeviceHasBeenSet () const
 
void SetTargetDevice (const TargetDevice &value)
 
void SetTargetDevice (TargetDevice &&value)
 
OutputConfig & WithTargetDevice (const TargetDevice &value)
 
OutputConfig & WithTargetDevice (TargetDevice &&value)
 
const TargetPlatform & GetTargetPlatform () const
 
bool TargetPlatformHasBeenSet () const
 
void SetTargetPlatform (const TargetPlatform &value)
 
void SetTargetPlatform (TargetPlatform &&value)
 
OutputConfig & WithTargetPlatform (const TargetPlatform &value)
 
OutputConfig & WithTargetPlatform (TargetPlatform &&value)
 
const Aws::String & GetCompilerOptions () const
 
bool CompilerOptionsHasBeenSet () const
 
void SetCompilerOptions (const Aws::String &value)
 
void SetCompilerOptions (Aws::String &&value)
 
void SetCompilerOptions (const char *value)
 
OutputConfig & WithCompilerOptions (const Aws::String &value)
 
OutputConfig & WithCompilerOptions (Aws::String &&value)
 
OutputConfig & WithCompilerOptions (const char *value)
 
const Aws::String & GetKmsKeyId () const
 
bool KmsKeyIdHasBeenSet () const
 
void SetKmsKeyId (const Aws::String &value)
 
void SetKmsKeyId (Aws::String &&value)
 
void SetKmsKeyId (const char *value)
 
OutputConfig & WithKmsKeyId (const Aws::String &value)
 
OutputConfig & WithKmsKeyId (Aws::String &&value)
 
OutputConfig & WithKmsKeyId (const char *value)
 

Detailed Description

Contains information about the output location for the compiled model and the target device that the model runs on. TargetDevice and TargetPlatform are mutually exclusive; specify exactly one of the two to identify your target device or platform. If the device you want to use is not in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and use CompilerOptions if there are settings that are required or recommended for a particular TargetPlatform.

See Also:

AWS API Reference

Definition at line 40 of file OutputConfig.h.
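
The setters return void, while the With* overloads return *this, so an OutputConfig can be built fluently. The following is a minimal sketch, not taken from the SDK documentation: the bucket name is hypothetical, and TargetDevice::ml_c5 is assumed to be one of the generated enum values in TargetDevice.h.

    #include <aws/sagemaker/model/OutputConfig.h>
    #include <aws/sagemaker/model/TargetDevice.h>

    using namespace Aws::SageMaker::Model;

    // Build an output config for a compilation job targeting an ml_c5
    // instance. Each With* call returns a reference to the same object,
    // so the calls chain.
    OutputConfig MakeOutputConfig()
    {
        return OutputConfig()
            .WithS3OutputLocation("s3://my-bucket/compiled-models/") // hypothetical bucket
            .WithTargetDevice(TargetDevice::ml_c5)                   // assumed enum value
            .WithKmsKeyId("alias/ExampleAlias");                     // alias format from the docs
    }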

Constructor & Destructor Documentation

◆ OutputConfig() [1/2]

Aws::SageMaker::Model::OutputConfig::OutputConfig ( )

◆ OutputConfig() [2/2]

Aws::SageMaker::Model::OutputConfig::OutputConfig ( Aws::Utils::Json::JsonView  jsonValue)

Member Function Documentation

◆ CompilerOptionsHasBeenSet()

bool Aws::SageMaker::Model::OutputConfig::CompilerOptionsHasBeenSet ( ) const
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 406 of file OutputConfig.h.
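
Because CompilerOptions travels as a JSON document serialized into a single string, the options above are passed as text rather than as a structured object. A minimal sketch, assuming the NVIDIA option keys shown above:

    #include <aws/sagemaker/model/OutputConfig.h>

    using namespace Aws::SageMaker::Model;

    // Set NVIDIA compiler options as a serialized JSON string; the keys
    // mirror the gpu-code/trt-ver/cuda-ver options documented above.
    void SetNvidiaOptions(OutputConfig& config)
    {
        config.SetCompilerOptions(
            R"({"gpu-code": "sm_72", "trt-ver": "6.0.1", "cuda-ver": "10.1"})");
    }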

◆ GetCompilerOptions()

const Aws::String& Aws::SageMaker::Model::OutputConfig::GetCompilerOptions ( ) const
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 348 of file OutputConfig.h.

◆ GetKmsKeyId()

const Aws::String& Aws::SageMaker::Model::OutputConfig::GetKmsKeyId ( ) const
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 774 of file OutputConfig.h.
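
Any of the four formats is passed as a plain string; there is no dedicated key type. A small sketch using the key ARN format from the list above:

    #include <aws/sagemaker/model/OutputConfig.h>

    using namespace Aws::SageMaker::Model;

    // The KMS key is identified by a string in any of the documented
    // formats; here, the key ARN format.
    void UseKeyArn(OutputConfig& config)
    {
        config.SetKmsKeyId(
            "arn:aws:kms:us-west-2:111122223333:key/"
            "1234abcd-12ab-34cd-56ef-1234567890ab");
    }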

◆ GetS3OutputLocation()

const Aws::String& Aws::SageMaker::Model::OutputConfig::GetS3OutputLocation ( ) const
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 53 of file OutputConfig.h.

◆ GetTargetDevice()

const TargetDevice& Aws::SageMaker::Model::OutputConfig::GetTargetDevice ( ) const
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 104 of file OutputConfig.h.

◆ GetTargetPlatform()

const TargetPlatform& Aws::SageMaker::Model::OutputConfig::GetTargetPlatform ( ) const
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 169 of file OutputConfig.h.
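
Translating the Jetson TX2 row above into code: the TargetPlatform object carries the Os/Arch/Accelerator triple, while the matching compiler options remain a JSON string. A sketch that assumes the TargetPlatformOs, TargetPlatformArch, and TargetPlatformAccelerator enums follow the SDK's usual generated naming:

    #include <aws/sagemaker/model/OutputConfig.h>
    #include <aws/sagemaker/model/TargetPlatform.h>

    using namespace Aws::SageMaker::Model;

    // Jetson TX2 from the example list: Linux on ARM64 with an NVIDIA
    // accelerator, plus the matching gpu-code/trt-ver/cuda-ver options.
    OutputConfig MakeJetsonTx2Config()
    {
        TargetPlatform platform;
        platform.WithOs(TargetPlatformOs::LINUX)            // assumed enum values
                .WithArch(TargetPlatformArch::ARM64)
                .WithAccelerator(TargetPlatformAccelerator::NVIDIA);

        return OutputConfig()
            .WithS3OutputLocation("s3://my-bucket/compiled-models/") // hypothetical bucket
            .WithTargetPlatform(platform)
            .WithCompilerOptions(
                R"({"gpu-code": "sm_62", "trt-ver": "6.0.1", "cuda-ver": "10.0"})");
    }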

◆ Jsonize()

Aws::Utils::Json::JsonValue Aws::SageMaker::Model::OutputConfig::Jsonize ( ) const
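
Jsonize() and the JsonView constructor are inverses, which is convenient for logging or persisting a config. A minimal round-trip sketch using the SDK's JSON utilities:

    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/sagemaker/model/OutputConfig.h>

    using namespace Aws::SageMaker::Model;

    // Serialize a config to readable JSON text, then rebuild an
    // equivalent object from that text via the JsonView constructor.
    Aws::String RoundTrip(const OutputConfig& config)
    {
        Aws::Utils::Json::JsonValue json = config.Jsonize();
        Aws::String text = json.View().WriteReadable();

        Aws::Utils::Json::JsonValue parsed(text);
        OutputConfig rebuilt(parsed.View());
        (void)rebuilt; // rebuilt now holds the same field values

        return text;
    }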

◆ KmsKeyIdHasBeenSet()

bool Aws::SageMaker::Model::OutputConfig::KmsKeyIdHasBeenSet ( ) const
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 793 of file OutputConfig.h.

◆ operator=()

OutputConfig& Aws::SageMaker::Model::OutputConfig::operator= ( Aws::Utils::Json::JsonView  jsonValue)

◆ S3OutputLocationHasBeenSet()

bool Aws::SageMaker::Model::OutputConfig::S3OutputLocationHasBeenSet ( ) const
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 59 of file OutputConfig.h.

◆ SetCompilerOptions() [1/3]

void Aws::SageMaker::Model::OutputConfig::SetCompilerOptions ( Aws::String &&  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 522 of file OutputConfig.h.

◆ SetCompilerOptions() [2/3]

void Aws::SageMaker::Model::OutputConfig::SetCompilerOptions ( const Aws::String &  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 464 of file OutputConfig.h.

◆ SetCompilerOptions() [3/3]

void Aws::SageMaker::Model::OutputConfig::SetCompilerOptions ( const char *  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 580 of file OutputConfig.h.

◆ SetKmsKeyId() [1/3]

void Aws::SageMaker::Model::OutputConfig::SetKmsKeyId ( Aws::String &&  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 831 of file OutputConfig.h.

◆ SetKmsKeyId() [2/3]

void Aws::SageMaker::Model::OutputConfig::SetKmsKeyId ( const Aws::String &  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 812 of file OutputConfig.h.

◆ SetKmsKeyId() [3/3]

void Aws::SageMaker::Model::OutputConfig::SetKmsKeyId ( const char *  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 850 of file OutputConfig.h.

◆ SetS3OutputLocation() [1/3]

void Aws::SageMaker::Model::OutputConfig::SetS3OutputLocation ( Aws::String &&  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 71 of file OutputConfig.h.

◆ SetS3OutputLocation() [2/3]

void Aws::SageMaker::Model::OutputConfig::SetS3OutputLocation ( const Aws::String &  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 65 of file OutputConfig.h.

◆ SetS3OutputLocation() [3/3]

void Aws::SageMaker::Model::OutputConfig::SetS3OutputLocation ( const char *  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 77 of file OutputConfig.h.

◆ SetTargetDevice() [1/2]

void Aws::SageMaker::Model::OutputConfig::SetTargetDevice ( const TargetDevice &  value)
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 120 of file OutputConfig.h.

◆ SetTargetDevice() [2/2]

void Aws::SageMaker::Model::OutputConfig::SetTargetDevice ( TargetDevice &&  value)
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 128 of file OutputConfig.h.

◆ SetTargetPlatform() [1/2]

void Aws::SageMaker::Model::OutputConfig::SetTargetPlatform ( const TargetPlatform &  value)
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 217 of file OutputConfig.h.

◆ SetTargetPlatform() [2/2]

void Aws::SageMaker::Model::OutputConfig::SetTargetPlatform ( TargetPlatform &&  value)
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 241 of file OutputConfig.h.

◆ TargetDeviceHasBeenSet()

bool Aws::SageMaker::Model::OutputConfig::TargetDeviceHasBeenSet ( ) const
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 112 of file OutputConfig.h.

◆ TargetPlatformHasBeenSet()

bool Aws::SageMaker::Model::OutputConfig::TargetPlatformHasBeenSet ( ) const
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 193 of file OutputConfig.h.

◆ WithCompilerOptions() [1/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithCompilerOptions ( Aws::String &&  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 696 of file OutputConfig.h.

◆ WithCompilerOptions() [2/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithCompilerOptions ( const Aws::String &  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 638 of file OutputConfig.h.

◆ WithCompilerOptions() [3/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithCompilerOptions ( const char *  value)
inline

Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform-specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.

  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:

    • float32: Use either "float" or "float32".

    • int64: Use either "int64" or "long".

    For example, {"dtype" : "float32"}.

  • CPU: Compilation for CPU supports the following compiler options.

    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

  • ARM: Details of ARM CPU compilations.

    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.

      For example, add {'mattr': ['+neon']} to the compiler options if compiling for ARM 32-bit platform with the NEON support.

  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.

    • gpu-code: Specifies the targeted architecture.

    • trt-ver: Specifies the TensorRT version in x.y.z format.

    • cuda-ver: Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

  • ANDROID: Compilation for the Android OS supports the following compiler options:

    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.

    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"".

    For information about supported compiler options, see Neuron Compiler CLI.

  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

    • class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.

  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:

    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".

    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one API field: either signature_def_key or output_names.

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

Definition at line 754 of file OutputConfig.h.

◆ WithKmsKeyId() [1/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithKmsKeyId ( Aws::String &&  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 888 of file OutputConfig.h.

◆ WithKmsKeyId() [2/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithKmsKeyId ( const Aws::String &  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 869 of file OutputConfig.h.

◆ WithKmsKeyId() [3/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithKmsKeyId ( const char *  value)
inline

The Amazon Web Services Key Management Service key (Amazon Web Services KMS) that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job completes. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

The KmsKeyId can be any of the following formats:

  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

  • Alias name: alias/ExampleAlias

  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

Definition at line 907 of file OutputConfig.h.

◆ WithS3OutputLocation() [1/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithS3OutputLocation ( Aws::String &&  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 89 of file OutputConfig.h.

◆ WithS3OutputLocation() [2/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithS3OutputLocation ( const Aws::String &  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 83 of file OutputConfig.h.

◆ WithS3OutputLocation() [3/3]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithS3OutputLocation ( const char *  value)
inline

Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

Definition at line 95 of file OutputConfig.h.

◆ WithTargetDevice() [1/2]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithTargetDevice ( const TargetDevice &  value)
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 136 of file OutputConfig.h.

◆ WithTargetDevice() [2/2]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithTargetDevice ( TargetDevice &&  value)
inline

Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead of TargetDevice.

Definition at line 144 of file OutputConfig.h.

◆ WithTargetPlatform() [1/2]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithTargetPlatform ( const TargetPlatform &  value)
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 265 of file OutputConfig.h.

◆ WithTargetPlatform() [2/2]

OutputConfig& Aws::SageMaker::Model::OutputConfig::WithTargetPlatform ( TargetPlatform &&  value)
inline

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

  • Raspberry Pi 3 Model B+

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},

    "CompilerOptions": {'mattr': ['+neon']}

  • Jetson TX2

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

  • EC2 m5.2xlarge instance OS

    "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},

    "CompilerOptions": {'mcpu': 'skylake-avx512'}

  • RK3399

    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

  • ARMv7 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},

    "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

  • ARMv8 phone (CPU)

    "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},

    "CompilerOptions": {'ANDROID_PLATFORM': 29}

Definition at line 289 of file OutputConfig.h.


The documentation for this class was generated from the following file:
OutputConfig.h