Interface InferenceExecutionSummary.Builder
- All Superinterfaces:
- Buildable, CopyableBuilder<InferenceExecutionSummary.Builder, InferenceExecutionSummary>, SdkBuilder<InferenceExecutionSummary.Builder, InferenceExecutionSummary>, SdkPojo
- Enclosing class:
- InferenceExecutionSummary
- 
Method Summary
- default InferenceExecutionSummary.Builder customerResultObject(Consumer<S3Object.Builder> customerResultObject): The S3 object that the inference execution results were uploaded to.
- InferenceExecutionSummary.Builder customerResultObject(S3Object customerResultObject): The S3 object that the inference execution results were uploaded to.
- InferenceExecutionSummary.Builder dataEndTime(Instant dataEndTime): Indicates the time reference in the dataset at which the inference execution stopped.
- default InferenceExecutionSummary.Builder dataInputConfiguration(Consumer<InferenceInputConfiguration.Builder> dataInputConfiguration): Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
- InferenceExecutionSummary.Builder dataInputConfiguration(InferenceInputConfiguration dataInputConfiguration): Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
- default InferenceExecutionSummary.Builder dataOutputConfiguration(Consumer<InferenceOutputConfiguration.Builder> dataOutputConfiguration): Specifies configuration information for the output results for the inference execution, including the output Amazon S3 location.
- InferenceExecutionSummary.Builder dataOutputConfiguration(InferenceOutputConfiguration dataOutputConfiguration): Specifies configuration information for the output results for the inference execution, including the output Amazon S3 location.
- InferenceExecutionSummary.Builder dataStartTime(Instant dataStartTime): Indicates the time reference in the dataset at which the inference execution began.
- InferenceExecutionSummary.Builder failedReason(String failedReason): Specifies the reason for failure when an inference execution has failed.
- InferenceExecutionSummary.Builder inferenceSchedulerArn(String inferenceSchedulerArn): The Amazon Resource Name (ARN) of the inference scheduler being used for the inference execution.
- InferenceExecutionSummary.Builder inferenceSchedulerName(String inferenceSchedulerName): The name of the inference scheduler being used for the inference execution.
- InferenceExecutionSummary.Builder modelArn(String modelArn): The Amazon Resource Name (ARN) of the machine learning model used for the inference execution.
- InferenceExecutionSummary.Builder modelName(String modelName): The name of the machine learning model being used for the inference execution.
- InferenceExecutionSummary.Builder modelVersion(Long modelVersion): The model version used for the inference execution.
- InferenceExecutionSummary.Builder modelVersionArn(String modelVersionArn): The Amazon Resource Name (ARN) of the model version used for the inference execution.
- InferenceExecutionSummary.Builder scheduledStartTime(Instant scheduledStartTime): Indicates the start time at which the inference scheduler began the specific inference execution.
- InferenceExecutionSummary.Builder status(String status): Indicates the status of the inference execution.
- InferenceExecutionSummary.Builder status(InferenceExecutionStatus status): Indicates the status of the inference execution.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder: copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder: applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo: equalsBySdkFields, sdkFieldNameToField, sdkFields
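Every setter returns the builder itself, so calls can be chained. The sketch below shows one way to assemble a summary by hand, assuming the package name software.amazon.awssdk.services.lookoutequipment.model, the SUCCESS enum constant, and all literal values purely for illustration; in practice these summaries are usually returned by the service rather than constructed by callers.

```java
import java.time.Instant;

// Package assumed: the Lookout for Equipment model classes of the AWS SDK for Java v2.
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionStatus;
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionSummary;

public class BuildSummaryExample {
    public static void main(String[] args) {
        // All literal values below are hypothetical.
        InferenceExecutionSummary summary = InferenceExecutionSummary.builder()
                .modelName("pump-failure-model")
                .inferenceSchedulerName("hourly-scheduler")
                .scheduledStartTime(Instant.parse("2023-05-01T10:00:00Z"))
                .dataStartTime(Instant.parse("2023-05-01T09:00:00Z"))
                .dataEndTime(Instant.parse("2023-05-01T10:00:00Z"))
                .status(InferenceExecutionStatus.SUCCESS) // enum constant assumed
                .build();

        System.out.println(summary);
    }
}
```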
- 
Method Details
- 
modelName
InferenceExecutionSummary.Builder modelName(String modelName)
The name of the machine learning model being used for the inference execution.
- Parameters:
- modelName - The name of the machine learning model being used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
modelArn
InferenceExecutionSummary.Builder modelArn(String modelArn)
The Amazon Resource Name (ARN) of the machine learning model used for the inference execution.
- Parameters:
- modelArn - The Amazon Resource Name (ARN) of the machine learning model used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
inferenceSchedulerName
InferenceExecutionSummary.Builder inferenceSchedulerName(String inferenceSchedulerName)
The name of the inference scheduler being used for the inference execution.
- Parameters:
- inferenceSchedulerName - The name of the inference scheduler being used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
inferenceSchedulerArn
InferenceExecutionSummary.Builder inferenceSchedulerArn(String inferenceSchedulerArn)
The Amazon Resource Name (ARN) of the inference scheduler being used for the inference execution.
- Parameters:
- inferenceSchedulerArn - The Amazon Resource Name (ARN) of the inference scheduler being used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
scheduledStartTime
InferenceExecutionSummary.Builder scheduledStartTime(Instant scheduledStartTime)
Indicates the start time at which the inference scheduler began the specific inference execution.
- Parameters:
- scheduledStartTime - Indicates the start time at which the inference scheduler began the specific inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
dataStartTime
InferenceExecutionSummary.Builder dataStartTime(Instant dataStartTime)
Indicates the time reference in the dataset at which the inference execution began.
- Parameters:
- dataStartTime - Indicates the time reference in the dataset at which the inference execution began.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
dataEndTime
InferenceExecutionSummary.Builder dataEndTime(Instant dataEndTime)
Indicates the time reference in the dataset at which the inference execution stopped.
- Parameters:
- dataEndTime - Indicates the time reference in the dataset at which the inference execution stopped.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
dataInputConfiguration
InferenceExecutionSummary.Builder dataInputConfiguration(InferenceInputConfiguration dataInputConfiguration)
Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
- Parameters:
- dataInputConfiguration - Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
dataInputConfiguration
default InferenceExecutionSummary.Builder dataInputConfiguration(Consumer<InferenceInputConfiguration.Builder> dataInputConfiguration)
Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.
This is a convenience method that creates an instance of the InferenceInputConfiguration.Builder, avoiding the need to create one manually via InferenceInputConfiguration.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataInputConfiguration(InferenceInputConfiguration).
- Parameters:
- dataInputConfiguration - a consumer that will call methods on InferenceInputConfiguration.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: dataInputConfiguration(InferenceInputConfiguration)
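As a sketch of the convenience overload described above, the two calls below set the same nested configuration; the inputTimeZoneOffset member on InferenceInputConfiguration.Builder is an assumption used only for illustration.

```java
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionSummary;
import software.amazon.awssdk.services.lookoutequipment.model.InferenceInputConfiguration;

public class DataInputConfigurationExample {
    public static void main(String[] args) {
        // Consumer overload: the SDK creates and builds the nested builder for you.
        InferenceExecutionSummary viaConsumer = InferenceExecutionSummary.builder()
                .dataInputConfiguration(c -> c.inputTimeZoneOffset("+00:00")) // member name assumed
                .build();

        // Equivalent explicit form using InferenceInputConfiguration.builder().
        InferenceExecutionSummary viaBuilder = InferenceExecutionSummary.builder()
                .dataInputConfiguration(InferenceInputConfiguration.builder()
                        .inputTimeZoneOffset("+00:00") // member name assumed
                        .build())
                .build();

        System.out.println(viaConsumer.equals(viaBuilder)); // true: both produce the same configuration
    }
}
```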
 
- 
dataOutputConfiguration
InferenceExecutionSummary.Builder dataOutputConfiguration(InferenceOutputConfiguration dataOutputConfiguration)
Specifies configuration information for the output results for the inference execution, including the output Amazon S3 location.
- Parameters:
- dataOutputConfiguration - Specifies configuration information for the output results for the inference execution, including the output Amazon S3 location.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
dataOutputConfiguration
default InferenceExecutionSummary.Builder dataOutputConfiguration(Consumer<InferenceOutputConfiguration.Builder> dataOutputConfiguration)
Specifies configuration information for the output results for the inference execution, including the output Amazon S3 location.
This is a convenience method that creates an instance of the InferenceOutputConfiguration.Builder, avoiding the need to create one manually via InferenceOutputConfiguration.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataOutputConfiguration(InferenceOutputConfiguration).
- Parameters:
- dataOutputConfiguration - a consumer that will call methods on InferenceOutputConfiguration.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: dataOutputConfiguration(InferenceOutputConfiguration)
 
- 
customerResultObject
InferenceExecutionSummary.Builder customerResultObject(S3Object customerResultObject)
The S3 object that the inference execution results were uploaded to.
- Parameters:
- customerResultObject - The S3 object that the inference execution results were uploaded to.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
customerResultObject
default InferenceExecutionSummary.Builder customerResultObject(Consumer<S3Object.Builder> customerResultObject)
The S3 object that the inference execution results were uploaded to.
This is a convenience method that creates an instance of the S3Object.Builder, avoiding the need to create one manually via S3Object.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to customerResultObject(S3Object).
- Parameters:
- customerResultObject - a consumer that will call methods on S3Object.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: customerResultObject(S3Object)
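A short sketch of the Consumer overload above; the bucket and key members on S3Object.Builder and the literal values are assumptions for illustration.

```java
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionSummary;
import software.amazon.awssdk.services.lookoutequipment.model.S3Object;

public class CustomerResultObjectExample {
    public static void main(String[] args) {
        // Consumer overload: S3Object.builder() is created and built behind the scenes.
        InferenceExecutionSummary summary = InferenceExecutionSummary.builder()
                .customerResultObject(o -> o.bucket("my-results-bucket")     // member names assumed
                                            .key("inference/results.jsonl"))
                .build();

        System.out.println(summary.customerResultObject());
    }
}
```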
 
- 
status
InferenceExecutionSummary.Builder status(String status)
Indicates the status of the inference execution.
- Parameters:
- status - Indicates the status of the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: InferenceExecutionStatus
 
- 
status
InferenceExecutionSummary.Builder status(InferenceExecutionStatus status)
Indicates the status of the inference execution.
- Parameters:
- status - Indicates the status of the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also: InferenceExecutionStatus
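The two status overloads above are interchangeable, as sketched below. The SUCCESS constant, the status() accessor on the built object, and the UNKNOWN_TO_SDK_VERSION fallback follow common SDK codegen conventions and are assumptions here.

```java
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionStatus;
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionSummary;

public class StatusOverloadsExample {
    public static void main(String[] args) {
        // Enum overload: preferred when the value is known at compile time.
        InferenceExecutionSummary fromEnum = InferenceExecutionSummary.builder()
                .status(InferenceExecutionStatus.SUCCESS) // constant assumed
                .build();

        // String overload: useful when the value arrives as raw text; values the
        // enum does not know about typically map to UNKNOWN_TO_SDK_VERSION.
        InferenceExecutionSummary fromString = InferenceExecutionSummary.builder()
                .status("SUCCESS")
                .build();

        System.out.println(fromEnum.status() == fromString.status()); // true (accessor assumed)
    }
}
```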
 
- 
failedReason
InferenceExecutionSummary.Builder failedReason(String failedReason)
Specifies the reason for failure when an inference execution has failed.
- Parameters:
- failedReason - Specifies the reason for failure when an inference execution has failed.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
modelVersion
InferenceExecutionSummary.Builder modelVersion(Long modelVersion)
The model version used for the inference execution.
- Parameters:
- modelVersion - The model version used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
- 
modelVersionArn
InferenceExecutionSummary.Builder modelVersionArn(String modelVersionArn)
The Amazon Resource Name (ARN) of the model version used for the inference execution.
- Parameters:
- modelVersionArn - The Amazon Resource Name (ARN) of the model version used for the inference execution.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
 
 
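Because the builder also implements CopyableBuilder (see the superinterface list above), a built summary can be copied and selectively modified. The toBuilder() method on the built object and the IN_PROGRESS/SUCCESS constants below are assumptions based on the SDK's ToCopyableBuilder convention; the sketch only illustrates the copy-and-modify pattern.

```java
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionStatus;
import software.amazon.awssdk.services.lookoutequipment.model.InferenceExecutionSummary;

public class CopySummaryExample {
    public static void main(String[] args) {
        InferenceExecutionSummary inProgress = InferenceExecutionSummary.builder()
                .modelName("pump-failure-model")              // hypothetical value
                .status(InferenceExecutionStatus.IN_PROGRESS) // constant assumed
                .build();

        // Copy the existing summary and change only the status; all other
        // fields carry over unchanged.
        InferenceExecutionSummary finished = inProgress.toBuilder() // toBuilder() assumed via ToCopyableBuilder
                .status(InferenceExecutionStatus.SUCCESS)
                .build();

        System.out.println(finished.modelName()); // "pump-failure-model"
    }
}
```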