Interface GetModelInvocationJobResponse.Builder
- All Superinterfaces:
AwsResponse.Builder, BedrockResponse.Builder, Buildable, CopyableBuilder&lt;GetModelInvocationJobResponse.Builder,GetModelInvocationJobResponse&gt;, SdkBuilder&lt;GetModelInvocationJobResponse.Builder,GetModelInvocationJobResponse&gt;, SdkPojo, SdkResponse.Builder
- Enclosing class:
GetModelInvocationJobResponse
-
Method Summary
- clientRequestToken(String clientRequestToken): A unique, case-sensitive identifier to ensure that the API request completes no more than one time.
- endTime(Instant endTime): The time at which the batch inference job ended.
- inputDataConfig(Consumer&lt;ModelInvocationJobInputDataConfig.Builder&gt; inputDataConfig): Details about the location of the input to the batch inference job.
- inputDataConfig(ModelInvocationJobInputDataConfig inputDataConfig): Details about the location of the input to the batch inference job.
- jobArn(String jobArn): The Amazon Resource Name (ARN) of the batch inference job.
- jobExpirationTime(Instant jobExpirationTime): The time at which the batch inference job times out (or timed out).
- jobName(String jobName): The name of the batch inference job.
- lastModifiedTime(Instant lastModifiedTime): The time at which the batch inference job was last modified.
- message(String message): If the batch inference job failed, this field contains a message describing why the job failed.
- modelId(String modelId): The unique identifier of the foundation model used for model inference.
- outputDataConfig(Consumer&lt;ModelInvocationJobOutputDataConfig.Builder&gt; outputDataConfig): Details about the location of the output of the batch inference job.
- outputDataConfig(ModelInvocationJobOutputDataConfig outputDataConfig): Details about the location of the output of the batch inference job.
- roleArn(String roleArn): The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference.
- status(String status): The status of the batch inference job.
- status(ModelInvocationJobStatus status): The status of the batch inference job.
- submitTime(Instant submitTime): The time at which the batch inference job was submitted.
- timeoutDurationInHours(Integer timeoutDurationInHours): The number of hours after which the batch inference job was set to time out.
- vpcConfig(Consumer&lt;VpcConfig.Builder&gt; vpcConfig): The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job.
- vpcConfig(VpcConfig vpcConfig): The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job.
Methods inherited from interface software.amazon.awssdk.services.bedrock.model.BedrockResponse.Builder
build, responseMetadata, responseMetadata
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFieldNameToField, sdkFields
Methods inherited from interface software.amazon.awssdk.core.SdkResponse.Builder
sdkHttpResponse, sdkHttpResponse
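Instances of GetModelInvocationJobResponse are normally created by the SDK when a service call returns, so the most common reason to invoke this builder directly is to construct stub responses for unit tests. A minimal sketch (the ARN, job name, and timestamp are placeholder values):

```java
import java.time.Instant;

import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobStatus;

public class StubResponseExample {
    public static void main(String[] args) {
        // Build a stub response, e.g. for a mocked BedrockClient in a test.
        GetModelInvocationJobResponse response = GetModelInvocationJobResponse.builder()
                .jobArn("arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123")
                .jobName("nightly-batch")
                .status(ModelInvocationJobStatus.IN_PROGRESS)
                .submitTime(Instant.parse("2024-01-01T00:00:00Z"))
                .build();

        // Getters on the built object mirror the builder's setters.
        System.out.println(response.jobName() + " is " + response.statusAsString());
    }
}
```

Each setter returns the builder itself, so calls chain fluently until the final build().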
-
Method Details
-
jobArn
The Amazon Resource Name (ARN) of the batch inference job.
- Parameters:
jobArn
- The Amazon Resource Name (ARN) of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
jobName
The name of the batch inference job.
- Parameters:
jobName
- The name of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
modelId
The unique identifier of the foundation model used for model inference.
- Parameters:
modelId
- The unique identifier of the foundation model used for model inference.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
clientRequestToken
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- Parameters:
clientRequestToken
- A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
roleArn
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
- Parameters:
roleArn
- The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
status
GetModelInvocationJobResponse.Builder status(String status) The status of the batch inference job.
The following statuses are possible:
-
Submitted – This job has been submitted to a queue for validation.
-
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
-
Your IAM service role has access to the Amazon S3 buckets containing your files.
-
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
-
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
-
-
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
-
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
-
InProgress – This job has begun. You can start viewing the results in the output S3 location.
-
Completed – This job has successfully completed. View the output files in the output S3 location.
-
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
-
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
-
Stopped – This job was stopped by a user.
-
Stopping – This job is being stopped by a user.
- Parameters:
status
- The status of the batch inference job. The following statuses are possible:
-
Submitted – This job has been submitted to a queue for validation.
-
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
-
Your IAM service role has access to the Amazon S3 buckets containing your files.
-
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
-
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
-
-
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
-
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
-
InProgress – This job has begun. You can start viewing the results in the output S3 location.
-
Completed – This job has successfully completed. View the output files in the output S3 location.
-
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
-
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
-
Stopped – This job was stopped by a user.
-
Stopping – This job is being stopped by a user.
-
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
-
status
GetModelInvocationJobResponse.Builder status(ModelInvocationJobStatus status) The status of the batch inference job.
The following statuses are possible:
-
Submitted – This job has been submitted to a queue for validation.
-
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
-
Your IAM service role has access to the Amazon S3 buckets containing your files.
-
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
-
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
-
-
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
-
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
-
InProgress – This job has begun. You can start viewing the results in the output S3 location.
-
Completed – This job has successfully completed. View the output files in the output S3 location.
-
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
-
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
-
Stopped – This job was stopped by a user.
-
Stopping – This job is being stopped by a user.
- Parameters:
status
- The status of the batch inference job. The following statuses are possible:
-
Submitted – This job has been submitted to a queue for validation.
-
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
-
Your IAM service role has access to the Amazon S3 buckets containing your files.
-
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
-
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
-
-
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
-
Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
-
InProgress – This job has begun. You can start viewing the results in the output S3 location.
-
Completed – This job has successfully completed. View the output files in the output S3 location.
-
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
-
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
-
Stopped – This job was stopped by a user.
-
Stopping – This job is being stopped by a user.
-
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
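The status values above are modeled by the ModelInvocationJobStatus enum, so callers can branch on the parsed status instead of comparing raw strings. A sketch of a terminal-state check, assuming the enum constants follow the SDK's usual SCREAMING_SNAKE_CASE mapping of the status names:

```java
import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobStatus;

public class StatusHandling {
    // Returns true when the job has reached a state that will not change further.
    static boolean isTerminal(GetModelInvocationJobResponse response) {
        switch (response.status()) {
            case COMPLETED:
            case PARTIALLY_COMPLETED:
            case FAILED:
            case STOPPED:
            case EXPIRED:
                return true;
            default:
                // Submitted, Validating, Scheduled, InProgress, Stopping,
                // or UNKNOWN_TO_SDK_VERSION for values newer than this SDK build.
                return false;
        }
    }
}
```

Handling the default case explicitly matters because the SDK maps unrecognized wire values to UNKNOWN_TO_SDK_VERSION rather than throwing.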
-
message
If the batch inference job failed, this field contains a message describing why the job failed.
- Parameters:
message
- If the batch inference job failed, this field contains a message describing why the job failed.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
submitTime
The time at which the batch inference job was submitted.
- Parameters:
submitTime
- The time at which the batch inference job was submitted.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
lastModifiedTime
The time at which the batch inference job was last modified.
- Parameters:
lastModifiedTime
- The time at which the batch inference job was last modified.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
endTime
The time at which the batch inference job ended.
- Parameters:
endTime
- The time at which the batch inference job ended.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
inputDataConfig
GetModelInvocationJobResponse.Builder inputDataConfig(ModelInvocationJobInputDataConfig inputDataConfig) Details about the location of the input to the batch inference job.
- Parameters:
inputDataConfig
- Details about the location of the input to the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
inputDataConfig
default GetModelInvocationJobResponse.Builder inputDataConfig(Consumer<ModelInvocationJobInputDataConfig.Builder> inputDataConfig) Details about the location of the input to the batch inference job.
This is a convenience method that creates an instance of the ModelInvocationJobInputDataConfig.Builder, avoiding the need to create one manually via ModelInvocationJobInputDataConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to inputDataConfig(ModelInvocationJobInputDataConfig).
- Parameters:
inputDataConfig
- a consumer that will call methods on ModelInvocationJobInputDataConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
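The two inputDataConfig overloads are equivalent; the Consumer variant simply saves the explicit builder() and build() calls on the nested object. A sketch of both styles (the S3 URI is a placeholder, and the nested s3InputDataConfig member with its s3Uri field is assumed from the Bedrock model classes):

```java
import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobInputDataConfig;

public class InputDataConfigStyles {
    public static void main(String[] args) {
        // Manual style: build the nested config object yourself.
        ModelInvocationJobInputDataConfig manual = ModelInvocationJobInputDataConfig.builder()
                .s3InputDataConfig(s3 -> s3.s3Uri("s3://amzn-s3-demo-bucket/input/"))
                .build();
        GetModelInvocationJobResponse a = GetModelInvocationJobResponse.builder()
                .inputDataConfig(manual)
                .build();

        // Consumer style: the SDK creates and builds the nested builder for you.
        GetModelInvocationJobResponse b = GetModelInvocationJobResponse.builder()
                .inputDataConfig(c -> c.s3InputDataConfig(
                        s3 -> s3.s3Uri("s3://amzn-s3-demo-bucket/input/")))
                .build();

        // SDK model objects implement value equality, so both styles produce equal configs.
        System.out.println(a.inputDataConfig().equals(b.inputDataConfig()));
    }
}
```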
-
outputDataConfig
GetModelInvocationJobResponse.Builder outputDataConfig(ModelInvocationJobOutputDataConfig outputDataConfig) Details about the location of the output of the batch inference job.
- Parameters:
outputDataConfig
- Details about the location of the output of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
outputDataConfig
default GetModelInvocationJobResponse.Builder outputDataConfig(Consumer<ModelInvocationJobOutputDataConfig.Builder> outputDataConfig) Details about the location of the output of the batch inference job.
This is a convenience method that creates an instance of the ModelInvocationJobOutputDataConfig.Builder, avoiding the need to create one manually via ModelInvocationJobOutputDataConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to outputDataConfig(ModelInvocationJobOutputDataConfig).
- Parameters:
outputDataConfig
- a consumer that will call methods on ModelInvocationJobOutputDataConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
vpcConfig
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
- Parameters:
vpcConfig
- The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
vpcConfig
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
This is a convenience method that creates an instance of the VpcConfig.Builder, avoiding the need to create one manually via VpcConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to vpcConfig(VpcConfig).
- Parameters:
vpcConfig
- a consumer that will call methods on VpcConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
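With the Consumer variant, a VPC configuration can be set inline. A sketch, assuming VpcConfig exposes subnetIds and securityGroupIds members (the subnet and security-group IDs are placeholders):

```java
import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;

public class VpcConfigExample {
    public static void main(String[] args) {
        GetModelInvocationJobResponse response = GetModelInvocationJobResponse.builder()
                // Varargs overloads accept the IDs directly, without building a List.
                .vpcConfig(v -> v.subnetIds("subnet-0abc1234")
                                 .securityGroupIds("sg-0def5678"))
                .build();
        System.out.println(response.vpcConfig().subnetIds());
    }
}
```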
-
timeoutDurationInHours
The number of hours after which the batch inference job was set to time out.
- Parameters:
timeoutDurationInHours
- The number of hours after which the batch inference job was set to time out.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
jobExpirationTime
The time at which the batch inference job times out (or timed out).
- Parameters:
jobExpirationTime
- The time at which the batch inference job times out (or timed out).
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
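In practice this response is obtained from BedrockClient.getModelInvocationJob rather than built by hand. A polling sketch under stated assumptions: it requires AWS credentials and network access, the job ARN is a placeholder, and the terminal-status set mirrors the status list documented above.

```java
import java.util.EnumSet;
import java.util.Set;

import software.amazon.awssdk.services.bedrock.BedrockClient;
import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobStatus;

public class PollJob {
    // Statuses after which the job will not change further.
    static final Set<ModelInvocationJobStatus> TERMINAL = EnumSet.of(
            ModelInvocationJobStatus.COMPLETED,
            ModelInvocationJobStatus.PARTIALLY_COMPLETED,
            ModelInvocationJobStatus.FAILED,
            ModelInvocationJobStatus.STOPPED,
            ModelInvocationJobStatus.EXPIRED);

    public static void main(String[] args) throws InterruptedException {
        String jobArn = "arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123";
        try (BedrockClient bedrock = BedrockClient.create()) {
            GetModelInvocationJobResponse job;
            while (true) {
                // The consumer-builder overload of getModelInvocationJob builds the request inline.
                job = bedrock.getModelInvocationJob(r -> r.jobIdentifier(jobArn));
                System.out.println(job.jobName() + ": " + job.statusAsString());
                if (TERMINAL.contains(job.status())) {
                    break;
                }
                Thread.sleep(30_000); // poll every 30 seconds
            }
            if (job.status() == ModelInvocationJobStatus.FAILED) {
                System.err.println("Job failed: " + job.message());
            }
        }
    }
}
```

For production use, the fixed sleep could be replaced with backoff, and the loop bounded by jobExpirationTime so the poller gives up when the job's own timeout passes.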