AWS SDK for C++
1.8.156
#include <CreateServiceRequest.h>
Additional Inherited Members
virtual void DumpBodyToUrl (Aws::Http::URI &uri) const
Definition at line 34 of file CreateServiceRequest.h.
Aws::ECS::Model::CreateServiceRequest::CreateServiceRequest ()
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 1058 of file CreateServiceRequest.h.
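For illustration, a minimal sketch of building a strategy with the SDK's generated model classes (method names such as AddCapacityProviderStrategy and the CapacityProviderStrategyItem helpers are assumed from the SDK's usual codegen pattern; the provider names, base, and weight are example values):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/CapacityProviderStrategyItem.h>

// Example: favor FARGATE_SPOT 3:1 over on-demand FARGATE, keeping one
// on-demand task as the base.
Aws::ECS::Model::CreateServiceRequest MakeFargateSpotRequest()
{
    Aws::ECS::Model::CreateServiceRequest request;
    request.AddCapacityProviderStrategy(
        Aws::ECS::Model::CapacityProviderStrategyItem()
            .WithCapacityProvider("FARGATE")
            .WithBase(1)
            .WithWeight(1));
    request.AddCapacityProviderStrategy(
        Aws::ECS::Model::CapacityProviderStrategyItem()
            .WithCapacityProvider("FARGATE_SPOT")
            .WithWeight(3));
    // launchType is left unset because a capacityProviderStrategy is supplied.
    return request;
}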
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 1034 of file CreateServiceRequest.h.
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance, because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 587 of file CreateServiceRequest.h.
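As a sketch (the AddLoadBalancers adder and the LoadBalancer model's With* helpers are assumed from the SDK's standard codegen; the target group ARN, container name, and port are placeholders), attaching a single target group:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/LoadBalancer.h>

// Register one Application/Network Load Balancer target group with the service.
void AttachTargetGroup(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.AddLoadBalancers(
        Aws::ECS::Model::LoadBalancer()
            .WithTargetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/0123456789abcdef")
            .WithContainerName("web")
            .WithContainerPort(80));
    // For a Classic Load Balancer, set the load balancer name instead of the
    // target group ARN.
}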
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 635 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1443 of file CreateServiceRequest.h.
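A hedged sketch of adding a constraint (the AddPlacementConstraints adder is assumed from the SDK's codegen pattern, and the enum spelling distinctInstance is assumed to mirror the API's value name):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/PlacementConstraint.h>
#include <aws/ecs/model/PlacementConstraintType.h>

// Keep each task of the service on a distinct container instance.
void ConstrainPlacement(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.AddPlacementConstraints(
        Aws::ECS::Model::PlacementConstraint()
            .WithType(Aws::ECS::Model::PlacementConstraintType::distinctInstance));
}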
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1450 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1493 of file CreateServiceRequest.h.
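A sketch under the same assumptions (AddPlacementStrategy adder; the enum values spread and binpack are assumed to mirror the API's strategy type strings):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/PlacementStrategy.h>
#include <aws/ecs/model/PlacementStrategyType.h>

// Spread tasks across Availability Zones, then bin-pack by memory within a zone.
void SetSpreadStrategy(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.AddPlacementStrategy(
        Aws::ECS::Model::PlacementStrategy()
            .WithType(Aws::ECS::Model::PlacementStrategyType::spread)
            .WithField("attribute:ecs.availability-zone"));
    request.AddPlacementStrategy(
        Aws::ECS::Model::PlacementStrategy()
            .WithType(Aws::ECS::Model::PlacementStrategyType::binpack)
            .WithField("memory"));
}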
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1499 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 713 of file CreateServiceRequest.h.
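For illustration, registering the service with a service discovery (AWS Cloud Map) service; the AddServiceRegistries adder and ServiceRegistry helpers are assumed from the SDK's codegen, and the registry ARN is a placeholder:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/ServiceRegistry.h>

// Associate the service with a Cloud Map service registry.
void AddServiceDiscovery(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.AddServiceRegistries(
        Aws::ECS::Model::ServiceRegistry()
            .WithRegistryArn("arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example"));
}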
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 724 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / .
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1945 of file CreateServiceRequest.h.
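A minimal sketch of tagging the service (AddTags and the Tag model's WithKey/WithValue helpers are assumed from the SDK's codegen; the keys and values are examples that stay within the limits above):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/Tag.h>

// Apply two example tags to the service at creation time.
void TagService(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.AddTags(Aws::ECS::Model::Tag().WithKey("team").WithValue("payments"));
    request.AddTags(Aws::ECS::Model::Tag().WithKey("environment").WithValue("staging"));
}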
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / .
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1966 of file CreateServiceRequest.h.
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 914 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 774 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 62 of file CreateServiceRequest.h.
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1369 of file CreateServiceRequest.h.
|
inline |
The deployment controller to use for the service.
Definition at line 1777 of file CreateServiceRequest.h.
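A sketch combining the two deployment-related members above (SetDeploymentConfiguration, SetDeploymentController, and the With* helpers are assumed from the SDK's codegen; the percentages are example values):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/DeploymentConfiguration.h>
#include <aws/ecs/model/DeploymentController.h>
#include <aws/ecs/model/DeploymentControllerType.h>

// Rolling-update (ECS) deployments that keep at least half of the desired
// tasks healthy and allow up to double the desired count during a deployment.
void ConfigureDeployments(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetDeploymentConfiguration(
        Aws::ECS::Model::DeploymentConfiguration()
            .WithMinimumHealthyPercent(50)
            .WithMaximumPercent(200));
    request.SetDeploymentController(
        Aws::ECS::Model::DeploymentController()
            .WithType(Aws::ECS::Model::DeploymentControllerType::ECS));
}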
|
inline |
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy is REPLICA or is not specified. If schedulingStrategy is DAEMON, then this is not required.
Definition at line 743 of file CreateServiceRequest.h.
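For example (SetDesiredCount is assumed from the SDK's standard codegen):

#include <aws/ecs/model/CreateServiceRequest.h>

// Keep three copies of the task running for a REPLICA service; omit the call
// entirely when using the DAEMON scheduling strategy.
void SetReplicaCount(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetDesiredCount(3);
}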
|
inline |
Specifies whether to enable Amazon ECS managed tags for the tasks within the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Definition at line 1985 of file CreateServiceRequest.h.
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 890 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 768 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 55 of file CreateServiceRequest.h.
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1363 of file CreateServiceRequest.h.
|
inline |
The deployment controller to use for the service.
Definition at line 1772 of file CreateServiceRequest.h.
|
inline |
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy
is REPLICA
or is not specified. If schedulingStrategy
is DAEMON
then this is not required.
Definition at line 734 of file CreateServiceRequest.h.
|
inline |
Specifies whether to enable Amazon ECS managed tags for the tasks within the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Definition at line 1976 of file CreateServiceRequest.h.
|
inline |
The period of time, in seconds, that the Amazon ECS service scheduler should ignore unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0 is used.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
Definition at line 1582 of file CreateServiceRequest.h.
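For example (SetHealthCheckGracePeriodSeconds is assumed from the SDK's codegen; 120 seconds is an arbitrary example value):

#include <aws/ecs/model/CreateServiceRequest.h>

// Give slow-starting tasks two minutes before failing load balancer health
// checks can cause the scheduler to stop them.
void SetStartupGracePeriod(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetHealthCheckGracePeriodSeconds(120);
}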
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
Definition at line 820 of file CreateServiceRequest.h.
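A sketch of selecting Fargate (SetLaunchType, SetPlatformVersion, and the LaunchType enum are assumed from the SDK's usual codegen; the platform version string is an example):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/LaunchType.h>

// Run the service on Fargate and pin a platform version rather than relying
// on LATEST. Leave the launch type unset if a capacity provider strategy is used.
void UseFargate(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetLaunchType(Aws::ECS::Model::LaunchType::FARGATE);
    request.SetPlatformVersion("1.4.0");
}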
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 299 of file CreateServiceRequest.h.
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1511 of file CreateServiceRequest.h.
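A sketch of awsvpc networking (the NetworkConfiguration and AwsVpcConfiguration helpers are assumed from the SDK's codegen; the subnet and security group IDs are placeholders):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/NetworkConfiguration.h>
#include <aws/ecs/model/AwsVpcConfiguration.h>
#include <aws/ecs/model/AssignPublicIp.h>

// Place tasks in one subnet and security group without a public IP address.
void ConfigureAwsvpcNetworking(Aws::ECS::Model::CreateServiceRequest& request)
{
    Aws::ECS::Model::AwsVpcConfiguration vpcConfig;
    vpcConfig.AddSubnets("subnet-0123456789abcdef0");
    vpcConfig.AddSecurityGroups("sg-0123456789abcdef0");
    vpcConfig.SetAssignPublicIp(Aws::ECS::Model::AssignPublicIp::DISABLED);

    request.SetNetworkConfiguration(
        Aws::ECS::Model::NetworkConfiguration().WithAwsvpcConfiguration(vpcConfig));
}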
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1401 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1457 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1070 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2013 of file CreateServiceRequest.h.
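For example (SetEnableECSManagedTags, SetPropagateTags, and the PropagateTags enum are assumed from the SDK's usual codegen):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/PropagateTags.h>

// Enable ECS managed tags and copy the service's own tags onto its tasks.
void PropagateServiceTags(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetEnableECSManagedTags(true);
    request.SetPropagateTags(Aws::ECS::Model::PropagateTags::SERVICE);
}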
|
overridevirtual |
Reimplemented from Aws::ECS::ECSRequest.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1174 of file CreateServiceRequest.h.
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don't support the DAEMON scheduling strategy.
Definition at line 1651 of file CreateServiceRequest.h.
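For example (SetSchedulingStrategy and the SchedulingStrategy enum are assumed from the SDK's usual codegen):

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/SchedulingStrategy.h>

// A DAEMON service runs one task per active container instance, so no
// desired count is specified.
void UseDaemonStrategy(Aws::ECS::Model::CreateServiceRequest& request)
{
    request.SetSchedulingStrategy(Aws::ECS::Model::SchedulingStrategy::DAEMON);
}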
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 113 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 647 of file CreateServiceRequest.h.
|
inlineoverridevirtual |
Implements Aws::AmazonWebServiceRequest.
Definition at line 43 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / .
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1819 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 180 of file CreateServiceRequest.h.
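Tying the request together, a minimal sketch of calling CreateService with the ECS client (the setter names and ECSClient::CreateService follow the SDK's usual patterns; the cluster, service, and task definition names are placeholders):

#include <aws/core/Aws.h>
#include <aws/ecs/ECSClient.h>
#include <aws/ecs/model/CreateServiceRequest.h>

// Create a small replica service from an existing task definition revision.
// Assumes Aws::InitAPI has been called elsewhere.
bool CreateWebService(const Aws::ECS::ECSClient& client)
{
    Aws::ECS::Model::CreateServiceRequest request;
    request.SetCluster("default");
    request.SetServiceName("web");
    request.SetTaskDefinition("web:1"); // family:revision or a full ARN
    request.SetDesiredCount(2);

    auto outcome = client.CreateService(request);
    return outcome.IsSuccess();
}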
|
inline |
The period of time, in seconds, that the Amazon ECS service scheduler should ignore unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0
is used.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
Definition at line 1597 of file CreateServiceRequest.h.
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType
is specified, the capacityProviderStrategy
parameter must be omitted.
Definition at line 829 of file CreateServiceRequest.h.
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 347 of file CreateServiceRequest.h.
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1522 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1408 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1463 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1081 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2022 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc
network mode. If you specify the role
parameter, you must also specify a load balancer object with the loadBalancers
parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc
network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar
has a path of /foo/
then you would specify /foo/bar
as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1200 of file CreateServiceRequest.h.
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY
or EXTERNAL
deployment controller types.
DAEMON
-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY
or EXTERNAL
deployment controller types don't support the DAEMON
scheduling strategy.
Definition at line 1674 of file CreateServiceRequest.h.
|
overridevirtual |
Convert payload into String.
Implements Aws::AmazonSerializableWebServiceRequest.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 121 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 658 of file CreateServiceRequest.h.
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 962 of file CreateServiceRequest.h.
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 938 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 786 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 780 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 792 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 76 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 69 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 83 of file CreateServiceRequest.h.
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1375 of file CreateServiceRequest.h.
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1381 of file CreateServiceRequest.h.
|
inline |
The deployment controller to use for the service.
Definition at line 1782 of file CreateServiceRequest.h.
|
inline |
The deployment controller to use for the service.
Definition at line 1787 of file CreateServiceRequest.h.
|
inline |
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy
is REPLICA
or is not specified. If schedulingStrategy
is DAEMON
then this is not required.
Definition at line 752 of file CreateServiceRequest.h.
|
inline |
Specifies whether to enable Amazon ECS managed tags for the tasks within the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Definition at line 1994 of file CreateServiceRequest.h.
|
inline |
The period of time, in seconds, that the Amazon ECS service scheduler should ignore unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0
is used.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
Definition at line 1612 of file CreateServiceRequest.h.
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType
is specified, the capacityProviderStrategy
parameter must be omitted.
Definition at line 838 of file CreateServiceRequest.h.
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType
is specified, the capacityProviderStrategy
parameter must be omitted.
Definition at line 847 of file CreateServiceRequest.h.
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 443 of file CreateServiceRequest.h.
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 395 of file CreateServiceRequest.h.
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1533 of file CreateServiceRequest.h.
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1544 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1422 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1415 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1475 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1469 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1103 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1092 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1114 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2031 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2040 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc
network mode. If you specify the role
parameter, you must also specify a load balancer object with the loadBalancers
parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc
network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar
has a path of /foo/
then you would specify /foo/bar
as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1252 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc
network mode. If you specify the role
parameter, you must also specify a load balancer object with the loadBalancers
parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc
network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar
has a path of /foo/
then you would specify /foo/bar
as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1226 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1278 of file CreateServiceRequest.h.
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don't support the DAEMON scheduling strategy.
Definition at line 1697 of file CreateServiceRequest.h.
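A hedged sketch of the REPLICA strategy, assuming the generated SchedulingStrategy enum and SetSchedulingStrategy/SetDesiredCount accessors; the desired count of 3 is a placeholder:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/SchedulingStrategy.h>

void useReplicaStrategy(Aws::ECS::Model::CreateServiceRequest& request)
{
    // REPLICA maintains the desired count across the cluster; DAEMON places one
    // task per active container instance and needs no desired count.
    request.SetSchedulingStrategy(Aws::ECS::Model::SchedulingStrategy::REPLICA);
    request.SetDesiredCount(3);
}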
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don't support the DAEMON scheduling strategy.
Definition at line 1720 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 137 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 129 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 145 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 680 of file CreateServiceRequest.h.
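A sketch of registering the service with a service discovery (Cloud Map) registry; the ServiceRegistry accessor names and the registry ARN are assumptions for illustration:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/ServiceRegistry.h>

void addServiceDiscovery(Aws::ECS::Model::CreateServiceRequest& request)
{
    // The registry ARN below is a placeholder for a Cloud Map service created beforehand.
    Aws::ECS::Model::ServiceRegistry registry;
    registry.SetRegistryArn("arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example");
    request.AddServiceRegistries(registry);
}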
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 669 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : /.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1882 of file CreateServiceRequest.h.
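A sketch of tagging the service at creation, assuming the generated Tag model and AddTags accessor; the key and value are placeholders:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/Tag.h>

void tagService(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Keys up to 128 characters, values up to 256; avoid the reserved aws: prefix.
    Aws::ECS::Model::Tag team;
    team.SetKey("team");
    team.SetValue("platform");
    request.AddTags(team);
}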
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : /.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1861 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 210 of file CreateServiceRequest.h.
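A minimal end-to-end sketch that combines the task definition with the service name, cluster, and desired count and sends the request; it assumes the ECSClient::CreateService call and uses placeholder cluster, service, and task definition names:

#include <aws/core/Aws.h>
#include <aws/ecs/ECSClient.h>
#include <aws/ecs/model/CreateServiceRequest.h>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::ECS::ECSClient ecs;
        Aws::ECS::Model::CreateServiceRequest request;
        request.SetCluster("my-cluster");        // placeholder; default cluster if omitted
        request.SetServiceName("web");           // placeholder service name
        request.SetTaskDefinition("web-app:3");  // family:revision, or a full ARN
        request.SetDesiredCount(2);
        auto outcome = ecs.CreateService(request);
        if (!outcome.IsSuccess())
        {
            // Inspect outcome.GetError() for diagnostics.
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}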
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 200 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 220 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : /.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1840 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 190 of file CreateServiceRequest.h.
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 1010 of file CreateServiceRequest.h.
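A sketch of a mixed Fargate/Fargate Spot strategy, assuming the generated CapacityProviderStrategyItem model and AddCapacityProviderStrategy accessor; the base and weight values are illustrative:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/CapacityProviderStrategyItem.h>

void useFargateSpotStrategy(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Keep a baseline of 1 task on FARGATE, then weight additional tasks 4:1 toward FARGATE_SPOT.
    Aws::ECS::Model::CapacityProviderStrategyItem onDemand;
    onDemand.SetCapacityProvider("FARGATE");
    onDemand.SetBase(1);
    onDemand.SetWeight(1);

    Aws::ECS::Model::CapacityProviderStrategyItem spot;
    spot.SetCapacityProvider("FARGATE_SPOT");
    spot.SetWeight(4);

    request.AddCapacityProviderStrategy(onDemand);
    request.AddCapacityProviderStrategy(spot);
    // Do not also set launchType; launchType and capacityProviderStrategy are mutually exclusive.
}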
|
inline |
The capacity provider strategy to use for the service.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
Definition at line 986 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 804 of file CreateServiceRequest.h.
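A short sketch of supplying an idempotency token with SetClientToken; the token string is a placeholder (any unique value of up to 32 ASCII characters, such as a UUID you generate yourself, works):

#include <aws/ecs/model/CreateServiceRequest.h>

void setIdempotencyToken(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Retries that reuse the same token are treated as the same request.
    request.SetClientToken("create-web-service-0001");
}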
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 798 of file CreateServiceRequest.h.
|
inline |
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 32 ASCII characters are allowed.
Definition at line 810 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 97 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 90 of file CreateServiceRequest.h.
|
inline |
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If you do not specify a cluster, the default cluster is assumed.
Definition at line 104 of file CreateServiceRequest.h.
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1387 of file CreateServiceRequest.h.
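A sketch of the deployment parameters mentioned above, assuming the generated DeploymentConfiguration model with SetMaximumPercent and SetMinimumHealthyPercent; the percentages are illustrative:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/DeploymentConfiguration.h>

void setRollingUpdateLimits(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Allow up to double the desired count during a deployment, and never drop
    // below half of it while old tasks are being replaced.
    Aws::ECS::Model::DeploymentConfiguration config;
    config.SetMaximumPercent(200);
    config.SetMinimumHealthyPercent(50);
    request.SetDeploymentConfiguration(config);
}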
|
inline |
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Definition at line 1393 of file CreateServiceRequest.h.
|
inline |
The deployment controller to use for the service.
Definition at line 1792 of file CreateServiceRequest.h.
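A sketch of selecting the rolling-update deployment controller, assuming the generated DeploymentController model and DeploymentControllerType enum:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/DeploymentController.h>
#include <aws/ecs/model/DeploymentControllerType.h>

void useRollingUpdateController(Aws::ECS::Model::CreateServiceRequest& request)
{
    // ECS is the rolling-update controller; CODE_DEPLOY and EXTERNAL are the alternatives.
    Aws::ECS::Model::DeploymentController controller;
    controller.SetType(Aws::ECS::Model::DeploymentControllerType::ECS);
    request.SetDeploymentController(controller);
}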
|
inline |
The deployment controller to use for the service.
Definition at line 1797 of file CreateServiceRequest.h.
|
inline |
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy is REPLICA or is not specified. If schedulingStrategy is DAEMON then this is not required.
Definition at line 761 of file CreateServiceRequest.h.
|
inline |
Specifies whether to enable Amazon ECS managed tags for the tasks within the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Definition at line 2003 of file CreateServiceRequest.h.
|
inline |
The period of time, in seconds, that the Amazon ECS service scheduler should ignore unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0 is used.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds. During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
Definition at line 1627 of file CreateServiceRequest.h.
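A short sketch combining the two preceding parameters (Amazon ECS managed tags and the health check grace period); the 120-second grace period is a placeholder:

#include <aws/ecs/model/CreateServiceRequest.h>

void configureHealthAndTagging(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Let ECS attach its managed tags to tasks, and give slow-starting tasks
    // 120 seconds before load balancer health checks can mark them unhealthy.
    request.SetEnableECSManagedTags(true);
    request.SetHealthCheckGracePeriodSeconds(120);
}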
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
Definition at line 856 of file CreateServiceRequest.h.
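A sketch of running the service on Fargate, assuming the generated LaunchType enum; the platform version is optional and the value shown is illustrative:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/LaunchType.h>

void runOnFargate(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Selecting a launch type means the capacityProviderStrategy parameter must be omitted.
    request.SetLaunchType(Aws::ECS::Model::LaunchType::FARGATE);
    request.SetPlatformVersion("1.4.0"); // optional; LATEST is used when unspecified
}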
|
inline |
The launch type on which to run your service. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
Definition at line 865 of file CreateServiceRequest.h.
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance, because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 539 of file CreateServiceRequest.h.
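A sketch of attaching an Application Load Balancer target group, assuming the generated LoadBalancer model and AddLoadBalancers accessor; the target group ARN, container name, and port are placeholders:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/LoadBalancer.h>

void attachTargetGroup(Aws::ECS::Model::CreateServiceRequest& request)
{
    // For an Application or Network Load Balancer: target group ARN, container name,
    // and container port; the load balancer name is omitted.
    Aws::ECS::Model::LoadBalancer lb;
    lb.SetTargetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef");
    lb.SetContainerName("web");
    lb.SetContainerPort(80);
    request.AddLoadBalancers(lb);
}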
|
inline |
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that make use of multiple target groups. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service is using the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance, because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Definition at line 491 of file CreateServiceRequest.h.
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1555 of file CreateServiceRequest.h.
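A sketch of the awsvpc network configuration, assuming the generated NetworkConfiguration and AwsVpcConfiguration models; the subnet and security group IDs are placeholders:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/NetworkConfiguration.h>
#include <aws/ecs/model/AwsVpcConfiguration.h>
#include <aws/ecs/model/AssignPublicIp.h>

void configureAwsvpcNetworking(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Required for awsvpc task definitions, which get their own elastic network interface.
    Aws::ECS::Model::AwsVpcConfiguration vpc;
    vpc.AddSubnets("subnet-0123456789abcdef0");
    vpc.AddSecurityGroups("sg-0123456789abcdef0");
    vpc.SetAssignPublicIp(Aws::ECS::Model::AssignPublicIp::DISABLED);

    Aws::ECS::Model::NetworkConfiguration net;
    net.SetAwsvpcConfiguration(vpc);
    request.SetNetworkConfiguration(net);
}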
|
inline |
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
Definition at line 1566 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1436 of file CreateServiceRequest.h.
|
inline |
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Definition at line 1429 of file CreateServiceRequest.h.
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1487 of file CreateServiceRequest.h.
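A sketch covering both the placement constraints and the placement strategy described above; the PlacementConstraint/PlacementStrategy accessor names, the attribute expression, and the field value are assumptions for illustration:

#include <aws/ecs/model/CreateServiceRequest.h>
#include <aws/ecs/model/PlacementConstraint.h>
#include <aws/ecs/model/PlacementConstraintType.h>
#include <aws/ecs/model/PlacementStrategy.h>
#include <aws/ecs/model/PlacementStrategyType.h>

void configurePlacement(Aws::ECS::Model::CreateServiceRequest& request)
{
    // Spread tasks across Availability Zones (up to five strategy rules per service).
    Aws::ECS::Model::PlacementStrategy spread;
    spread.SetType(Aws::ECS::Model::PlacementStrategyType::SPREAD);
    spread.SetField("attribute:ecs.availability-zone");
    request.AddPlacementStrategy(spread);

    // Constrain placement with an attribute expression (up to ten constraints per task).
    Aws::ECS::Model::PlacementConstraint constraint;
    constraint.SetType(Aws::ECS::Model::PlacementConstraintType::MEMBER_OF);
    constraint.SetExpression("attribute:ecs.instance-type =~ t3.*");
    request.AddPlacementConstraints(constraint);
}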
|
inline |
The placement strategy objects to use for tasks in your service. You can specify a maximum of five strategy rules per service.
Definition at line 1481 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1136 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1125 of file CreateServiceRequest.h.
|
inline |
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used by default. For more information, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
Definition at line 1147 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2049 of file CreateServiceRequest.h.
|
inline |
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation, use the TagResource API action.
Definition at line 2058 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1330 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1304 of file CreateServiceRequest.h.
|
inline |
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.
Definition at line 1356 of file CreateServiceRequest.h.
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don't support the DAEMON scheduling strategy.
Definition at line 1743 of file CreateServiceRequest.h.
|
inline |
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don't support the DAEMON scheduling strategy.
Definition at line 1766 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 161 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 153 of file CreateServiceRequest.h.
|
inline |
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
Definition at line 169 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 702 of file CreateServiceRequest.h.
|
inline |
The details of the service discovery registries to assign to this service. For more information, see Service Discovery.
Service discovery is supported for Fargate tasks if you are using platform version v1.1.0 or later. For more information, see AWS Fargate Platform Versions.
Definition at line 691 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : /.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1924 of file CreateServiceRequest.h.
|
inline |
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : /.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Definition at line 1903 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 240 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 230 of file CreateServiceRequest.h.
|
inline |
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.
A task definition must be specified if the service is using either the ECS or CODE_DEPLOY deployment controllers.
Definition at line 250 of file CreateServiceRequest.h.