Class StartJobRunRequest
- All Implemented Interfaces:
SdkPojo, ToCopyableBuilder<StartJobRunRequest.Builder, StartJobRunRequest>
-
Nested Class Summary
static interface StartJobRunRequest.Builder
-
Method Summary
Modifier and Type / Method / Description
final Integer allocatedCapacity()
Deprecated. This property is deprecated; use MaxCapacity instead.
final Map<String,String> arguments()
The job arguments associated with this run.
static StartJobRunRequest.Builder builder()
final boolean equals(Object obj)
final boolean equalsBySdkFields(Object obj)
Indicates whether some other object is "equal to" this one by SDK fields.
final ExecutionClass executionClass()
Indicates whether the job is run with a standard or flexible execution class.
final String executionClassAsString()
Indicates whether the job is run with a standard or flexible execution class.
final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
Used to retrieve the value of a field from any class that extends SdkRequest.
final boolean hasArguments()
For responses, this returns true if the service returned a value for the Arguments property.
final int hashCode()
final String jobName()
The name of the job definition to use.
final String jobRunId()
The ID of a previous JobRun to retry.
final Double maxCapacity()
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs.
final NotificationProperty notificationProperty()
Specifies configuration properties of a job run notification.
final Integer numberOfWorkers()
The number of workers of a defined workerType that are allocated when a job runs.
final List<SdkField<?>> sdkFields()
final String securityConfiguration()
The name of the SecurityConfiguration structure to be used with this job run.
static Class<? extends StartJobRunRequest.Builder> serializableBuilderClass()
final Integer timeout()
The JobRun timeout in minutes.
StartJobRunRequest.Builder toBuilder()
Take this object and create a builder that contains all of the current property values of this object.
final String toString()
Returns a string representation of this object.
final WorkerType workerType()
The type of predefined worker that is allocated when a job runs.
final String workerTypeAsString()
The type of predefined worker that is allocated when a job runs.
Methods inherited from class software.amazon.awssdk.awscore.AwsRequest
overrideConfiguration
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
-
Method Details
-
jobName
The name of the job definition to use.
- Returns:
- The name of the job definition to use.
-
jobRunId
The ID of a previous JobRun to retry.
- Returns:
- The ID of a previous JobRun to retry.
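Because jobRunId is what turns a start request into a retry, a minimal sketch may help; the job name and run ID below are hypothetical placeholders, and the client call is the standard GlueClient.startJobRun:

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class RetryJobRun {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            // Supplying jobRunId retries that earlier run of the named job definition.
            StartJobRunRequest retry = StartJobRunRequest.builder()
                    .jobName("nightly-etl")          // hypothetical job name
                    .jobRunId("jr_0123456789abcdef") // hypothetical ID of the run to retry
                    .build();
            System.out.println("Started retry: " + glue.startJobRun(retry).jobRunId());
        }
    }
}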
-
hasArguments
public final boolean hasArguments()
For responses, this returns true if the service returned a value for the Arguments property. This DOES NOT check that the value is non-empty (for which, you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
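A small sketch of that distinction on the request side (the job name is a hypothetical placeholder):

import java.util.Map;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class HasArgumentsDemo {
    public static void main(String[] args) {
        StartJobRunRequest unset = StartJobRunRequest.builder()
                .jobName("demo-job") // hypothetical job name
                .build();
        StartJobRunRequest empty = unset.toBuilder()
                .arguments(Map.of()) // explicitly empty map
                .build();
        // arguments() never returns null, so both print an empty map ...
        System.out.println(unset.arguments() + " " + empty.arguments());
        // ... but hasArguments() tells the two cases apart: false, then true.
        System.out.println(unset.hasArguments() + " " + empty.hasArguments());
    }
}
-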
arguments
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
Attempts to modify the collection returned by this method will result in an UnsupportedOperationException.
This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasArguments() method.
- Returns:
- The job arguments associated with this run. For this job run, they replace the default arguments set in
the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
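As a sketch of how these arguments are supplied when building the request (the job name and argument keys are hypothetical placeholders, not a definitive list):

import java.util.Map;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class ArgumentsDemo {
    public static void main(String[] args) {
        // Per the warning above, never put plaintext secrets in this map; it may
        // be logged. Pass a reference (e.g. a Secrets Manager secret name) instead.
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("nightly-etl") // hypothetical job name
                .arguments(Map.of(
                        "--source_path", "s3://my-bucket/input/", // consumed by your script
                        "--enable-metrics", "true"))              // consumed by Glue itself
                .build();
        System.out.println(request.arguments());
    }
}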
-
allocatedCapacity
Deprecated. This field is deprecated; use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
- Returns:
- This field is deprecated; use MaxCapacity instead. The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
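A brief sketch of the suggested migration for a job that previously allocated 10 DPUs (the job name is a placeholder):

import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class CapacityMigration {
    public static void main(String[] args) {
        // Deprecated style: .allocatedCapacity(10)
        // Preferred style: the same 10 DPUs expressed through maxCapacity.
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("legacy-etl") // hypothetical job name
                .maxCapacity(10.0)     // a Double, replacing the Integer allocatedCapacity
                .build();
        System.out.println(request.maxCapacity());
    }
}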
-
timeout
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
- Returns:
- The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job. Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
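For example, a sketch capping a run at 90 minutes (an arbitrary value; the job name is a placeholder):

import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class TimeoutDemo {
    public static void main(String[] args) {
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("nightly-etl") // hypothetical job name
                .timeout(90)            // minutes; the run enters TIMEOUT status after this
                .build();
        System.out.println(request.timeout());
    }
}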
-
maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
- Returns:
- For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
-
securityConfiguration
The name of the SecurityConfiguration structure to be used with this job run.
- Returns:
- The name of the SecurityConfiguration structure to be used with this job run.
-
notificationProperty
Specifies configuration properties of a job run notification.
- Returns:
- Specifies configuration properties of a job run notification.
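A sketch using NotificationProperty's notifyDelayAfter setting, which controls how many minutes pass before a job run delay notification is sent (the 10-minute value and job name are placeholders):

import software.amazon.awssdk.services.glue.model.NotificationProperty;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class NotificationDemo {
    public static void main(String[] args) {
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("nightly-etl") // hypothetical job name
                .notificationProperty(NotificationProperty.builder()
                        .notifyDelayAfter(10) // minutes before a delay notification
                        .build())
                .build();
        System.out.println(request.notificationProperty());
    }
}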
-
workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
If the service returns an enum value that is not available in the current SDK version, workerType will return WorkerType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from workerTypeAsString().
- Returns:
- The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
- See Also:
- WorkerType
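A sketch pairing this property with numberOfWorkers, plus the documented fallback for enum values newer than the SDK (the job name is a placeholder):

import software.amazon.awssdk.services.glue.model.StartJobRunRequest;
import software.amazon.awssdk.services.glue.model.WorkerType;

public class WorkerTypeDemo {
    public static void main(String[] args) {
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("spark-etl-task")   // hypothetical job name
                .workerType(WorkerType.G_2X) // G.2X: 2 DPU per worker
                .numberOfWorkers(10)         // pairs with workerType; leave maxCapacity unset
                .build();
        // For values the service knows but this SDK version does not,
        // workerType() returns UNKNOWN_TO_SDK_VERSION; fall back to the raw string.
        if (request.workerType() == WorkerType.UNKNOWN_TO_SDK_VERSION) {
            System.out.println("Unknown worker type: " + request.workerTypeAsString());
        } else {
            System.out.println(request.workerType());
        }
    }
}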
-
workerTypeAsString
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
If the service returns an enum value that is not available in the current SDK version, workerType will return WorkerType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from workerTypeAsString().
- Returns:
- The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
- See Also:
- WorkerType
-
numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
- Returns:
- The number of workers of a defined workerType that are allocated when a job runs.
-
executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
If the service returns an enum value that is not available in the current SDK version, executionClass will return ExecutionClass.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from executionClassAsString().
- Returns:
- Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary. Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
- See Also:
- ExecutionClass
-
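A sketch of requesting the flexible execution class; per the text above, the target must be a Glue 3.0+ job with command type glueetl (the job name is a placeholder):

import software.amazon.awssdk.services.glue.model.ExecutionClass;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class FlexDemo {
    public static void main(String[] args) {
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("flexible-spark-etl") // hypothetical job name
                .executionClass(ExecutionClass.FLEX)
                .build();
        System.out.println(request.executionClassAsString()); // prints FLEX
    }
}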
executionClassAsString
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
If the service returns an enum value that is not available in the current SDK version, executionClass will return ExecutionClass.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from executionClassAsString().
- Returns:
- Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary. Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
- See Also:
- ExecutionClass
-
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
- Specified by:
- toBuilder in interface ToCopyableBuilder<StartJobRunRequest.Builder, StartJobRunRequest>
- Specified by:
- toBuilder in class GlueRequest
- Returns:
- a builder for type T
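A sketch of the copy-and-modify pattern this enables (values are placeholders):

import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class ToBuilderDemo {
    public static void main(String[] args) {
        StartJobRunRequest base = StartJobRunRequest.builder()
                .jobName("nightly-etl") // hypothetical job name
                .timeout(60)
                .build();
        // toBuilder() copies every property; override only what should change.
        StartJobRunRequest longer = base.toBuilder()
                .timeout(120)
                .build();
        System.out.println(base.timeout() + " -> " + longer.timeout());
    }
}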
-
builder
public static StartJobRunRequest.Builder builder()
-
serializableBuilderClass
public static Class<? extends StartJobRunRequest.Builder> serializableBuilderClass()
-
hashCode
public final int hashCode()
- Overrides:
- hashCode in class AwsRequest
-
equals
public final boolean equals(Object obj)
- Overrides:
- equals in class AwsRequest
-
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
- equalsBySdkFields in interface SdkPojo
- Parameters:
- obj - the object to be compared with
- Returns:
- true if the other object is equal to this object by SDK fields, false otherwise.
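A sketch of the field-wise comparison (the job name is a placeholder). Because only modeled, non-inherited fields participate, inherited state such as overrideConfiguration is ignored:

import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class SdkFieldsEquality {
    public static void main(String[] args) {
        StartJobRunRequest a = StartJobRunRequest.builder().jobName("demo-job").build();
        StartJobRunRequest b = StartJobRunRequest.builder().jobName("demo-job").build();
        System.out.println(a.equalsBySdkFields(b)); // true: identical modeled fields
    }
}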
-
toString
Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
-
getValueForField
Description copied from class: SdkRequest
Used to retrieve the value of a field from any class that extends SdkRequest. The field name specified should match the member name from the corresponding service-2.json model specified in the codegen-resources folder for a given service. The class specifies what class to cast the returned value to. If the returned value is also a modeled class, the SdkRequest.getValueForField(String, Class) method will again be available.
- Overrides:
- getValueForField in class SdkRequest
- Parameters:
- fieldName - The name of the member to be retrieved.
- clazz - The class to cast the returned object to.
- Returns:
- Optional containing the cast return value
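A sketch of this reflective accessor; "JobName" is the member name as modeled in the Glue service-2.json file:

import java.util.Optional;
import software.amazon.awssdk.services.glue.model.StartJobRunRequest;

public class GetValueForFieldDemo {
    public static void main(String[] args) {
        StartJobRunRequest request = StartJobRunRequest.builder()
                .jobName("demo-job") // hypothetical job name
                .build();
        Optional<String> name = request.getValueForField("JobName", String.class);
        name.ifPresent(System.out::println); // prints demo-job
    }
}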
-
sdkFields
public final List<SdkField<?>> sdkFields()
- Specified by:
- sdkFields in interface SdkPojo
-