AWS SDK for C++
1.8.129
#include <CreateModelRequest.h>
Additional Inherited Members

virtual void DumpBodyToUrl (Aws::Http::URI &uri) const
Definition at line 25 of file CreateModelRequest.h.

Aws::SageMaker::Model::CreateModelRequest::CreateModelRequest ()

AddContainers (inline; two overloads)
Specifies the containers in the inference pipeline.
Definitions at lines 158 and 163 of file CreateModelRequest.h.

AddTags (inline; two overloads)
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
Definitions at lines 324 and 333 of file CreateModelRequest.h.

ContainersHasBeenSet (inline)
Specifies the containers in the inference pipeline.
Definition at line 133 of file CreateModelRequest.h.

EnableNetworkIsolationHasBeenSet (inline)
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
Definition at line 419 of file CreateModelRequest.h.

ExecutionRoleArnHasBeenSet (inline)
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
Definition at line 188 of file CreateModelRequest.h.

GetContainers (inline)
Specifies the containers in the inference pipeline.
Definition at line 128 of file CreateModelRequest.h.

GetEnableNetworkIsolation (inline)
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
Definition at line 413 of file CreateModelRequest.h.

GetExecutionRoleArn (inline)
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
Definition at line 176 of file CreateModelRequest.h.
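
As noted above, the caller must hold the iam:PassRole permission for the role passed as the execution role ARN. A minimal identity-policy statement granting it might look like the following sketch; the account ID and role name are placeholders, and the iam:PassedToService condition restricts the grant to SageMaker:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/MySageMakerExecutionRole",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "sagemaker.amazonaws.com" }
      }
    }
  ]
}
```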

GetModelName (inline)
The name of the new model.
Definition at line 44 of file CreateModelRequest.h.

GetPrimaryContainer (inline)
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
Definition at line 87 of file CreateModelRequest.h.

GetRequestSpecificHeaders (override, virtual)
Reimplemented from Aws::SageMaker::SageMakerRequest.

GetServiceRequestName (inline, override, virtual)
Implements Aws::AmazonWebServiceRequest.
Definition at line 34 of file CreateModelRequest.h.

GetTags (inline)
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
Definition at line 270 of file CreateModelRequest.h.

GetVpcConfig (inline)
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
Definition at line 346 of file CreateModelRequest.h.

ModelNameHasBeenSet (inline)
The name of the new model.
Definition at line 49 of file CreateModelRequest.h.

PrimaryContainerHasBeenSet (inline)
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
Definition at line 94 of file CreateModelRequest.h.

SerializePayload (override, virtual)
Converts the request payload into a String.
Implements Aws::AmazonSerializableWebServiceRequest.

SetContainers (inline; two overloads)
Specifies the containers in the inference pipeline.
Definitions at lines 138 and 143 of file CreateModelRequest.h.
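
A sketch of building an inference-pipeline request with SetContainers; the image URIs, bucket, and model name below are placeholders, and this assumes the usual AWS SDK for C++ headers are available:

```cpp
#include <aws/sagemaker/model/CreateModelRequest.h>
#include <aws/sagemaker/model/ContainerDefinition.h>

using namespace Aws::SageMaker::Model;

CreateModelRequest BuildPipelineRequest()
{
    // Two-stage pipeline: a preprocessing container followed by the predictor.
    ContainerDefinition preprocess;
    preprocess.SetImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest");

    ContainerDefinition predict;
    predict.SetImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest");
    predict.SetModelDataUrl("s3://my-bucket/model.tar.gz");

    CreateModelRequest request;
    request.SetModelName("my-pipeline-model");
    // SetContainers replaces any existing list; AddContainers appends instead.
    request.SetContainers({preprocess, predict});
    return request;
}
```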

SetEnableNetworkIsolation (inline)
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
Definition at line 425 of file CreateModelRequest.h.

SetExecutionRoleArn (inline; three overloads)
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
Definitions at lines 200, 212 and 224 of file CreateModelRequest.h.

SetModelName (inline; three overloads)
The name of the new model.
Definitions at lines 54, 59 and 64 of file CreateModelRequest.h.

SetPrimaryContainer (inline; two overloads)
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
Definitions at lines 101 and 108 of file CreateModelRequest.h.

SetTags (inline; two overloads)
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
Definitions at lines 288 and 297 of file CreateModelRequest.h.

SetVpcConfig (inline; two overloads)
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
Definitions at lines 370 and 382 of file CreateModelRequest.h.

TagsHasBeenSet (inline)
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
Definition at line 279 of file CreateModelRequest.h.

VpcConfigHasBeenSet (inline)
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
Definition at line 358 of file CreateModelRequest.h.

WithContainers (inline; two overloads)
Specifies the containers in the inference pipeline.
Definitions at lines 148 and 153 of file CreateModelRequest.h.

WithEnableNetworkIsolation (inline)
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
Definition at line 431 of file CreateModelRequest.h.

WithExecutionRoleArn (inline; three overloads)
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
Definitions at lines 236, 248 and 260 of file CreateModelRequest.h.

WithModelName (inline; three overloads)
The name of the new model.
Definitions at lines 69, 74 and 79 of file CreateModelRequest.h.
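
Because each With* setter returns the request by reference, a single-container model can be described fluently. A sketch, assuming the usual SDK headers; the model name, role ARN, and image URI are placeholders:

```cpp
#include <aws/sagemaker/model/CreateModelRequest.h>
#include <aws/sagemaker/model/ContainerDefinition.h>

using namespace Aws::SageMaker::Model;

// Chained With* calls build the whole request in one expression.
CreateModelRequest request = CreateModelRequest()
    .WithModelName("my-model")
    .WithExecutionRoleArn("arn:aws:iam::123456789012:role/MySageMakerExecutionRole")
    .WithPrimaryContainer(ContainerDefinition()
        .WithImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest")
        .WithModelDataUrl("s3://my-bucket/model.tar.gz"));
```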

WithPrimaryContainer (inline; two overloads)
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
Definitions at lines 115 and 122 of file CreateModelRequest.h.

WithTags (inline; two overloads)
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
Definitions at lines 306 and 315 of file CreateModelRequest.h.
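
Tags can be appended one at a time with AddTags or replaced wholesale with SetTags/WithTags. A sketch with placeholder key-value pairs:

```cpp
#include <aws/sagemaker/model/CreateModelRequest.h>
#include <aws/sagemaker/model/Tag.h>

using namespace Aws::SageMaker::Model;

CreateModelRequest request;
request.SetModelName("my-model");
// Categorize the model by owner and environment.
request.AddTags(Tag().WithKey("owner").WithValue("data-science"));
request.AddTags(Tag().WithKey("environment").WithValue("staging"));
```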

WithVpcConfig (inline; two overloads)
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
Definitions at lines 394 and 406 of file CreateModelRequest.h.
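
Putting the pieces together, a complete program might submit the request through SageMakerClient::CreateModel. This is a sketch under the usual SDK conventions (InitAPI/ShutdownAPI, default credential chain); the security group, subnet, ARN, and image IDs are placeholders:

```cpp
#include <aws/core/Aws.h>
#include <aws/sagemaker/SageMakerClient.h>
#include <aws/sagemaker/model/CreateModelRequest.h>
#include <aws/sagemaker/model/ContainerDefinition.h>
#include <aws/sagemaker/model/VpcConfig.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        using namespace Aws::SageMaker;
        using namespace Aws::SageMaker::Model;

        // Restrict the model to a specific VPC.
        VpcConfig vpc;
        vpc.AddSecurityGroupIds("sg-0123456789abcdef0");
        vpc.AddSubnets("subnet-0123456789abcdef0");

        CreateModelRequest request;
        request.SetModelName("my-isolated-model");
        request.SetExecutionRoleArn("arn:aws:iam::123456789012:role/MySageMakerExecutionRole");
        request.SetPrimaryContainer(ContainerDefinition()
            .WithImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest"));
        request.SetVpcConfig(vpc);
        // No inbound or outbound network calls from the model container.
        request.SetEnableNetworkIsolation(true);

        SageMakerClient client;
        auto outcome = client.CreateModel(request);
        if (outcome.IsSuccess())
            std::cout << "Created: " << outcome.GetResult().GetModelArn() << "\n";
        else
            std::cerr << outcome.GetError().GetMessage() << "\n";
    }
    Aws::ShutdownAPI(options);
    return 0;
}
```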