BatchTransformInput

Input object for the batch transform job.

Types

class Builder
object Companion

Properties

dataCapturedDestinationS3Uri
The Amazon S3 location being used to capture the data.

datasetFormat
The dataset format for your batch transform job.

endTimeOffset
If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

excludeFeaturesAttribute
The attributes of the input data to exclude from the analysis.

featuresAttribute
The attributes of the input data that are the input features.

inferenceAttribute
The attribute of the input data that represents the ground truth label.

localPath
Path to the filesystem where the batch transform data is available to the container.

probabilityAttribute
In a classification problem, the attribute that represents the class probability.

probabilityThresholdAttribute
The threshold for the class probability to be evaluated as a positive result.

s3DataDistributionType
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

s3InputMode
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

startTimeOffset
If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
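Below is a minimal sketch of constructing a BatchTransformInput through the SDK's DSL builder (the Companion invoke pattern used across the AWS SDK for Kotlin). The S3 URI and local path are placeholder values, and the MonitoringDatasetFormat, MonitoringCsvDatasetFormat, ProcessingS3InputMode, and ProcessingS3DataDistributionType names are assumptions drawn from the wider SageMaker model package rather than from this page; check the generated model package for the exact members.

import aws.sdk.kotlin.services.sagemaker.model.BatchTransformInput
import aws.sdk.kotlin.services.sagemaker.model.MonitoringCsvDatasetFormat
import aws.sdk.kotlin.services.sagemaker.model.MonitoringDatasetFormat
import aws.sdk.kotlin.services.sagemaker.model.ProcessingS3DataDistributionType
import aws.sdk.kotlin.services.sagemaker.model.ProcessingS3InputMode

// Example values only; the S3 URI and container path are placeholders.
val batchTransformInput = BatchTransformInput {
    // Where the captured batch transform data lives in S3.
    dataCapturedDestinationS3Uri = "s3://example-bucket/data-capture"
    // Path inside the container where the data is made available.
    localPath = "/opt/ml/processing/input"
    // Assumed dataset format types: CSV with a header row.
    datasetFormat = MonitoringDatasetFormat {
        csv = MonitoringCsvDatasetFormat { header = true }
    }
    // Optional; these are the documented defaults when omitted.
    s3DataDistributionType = ProcessingS3DataDistributionType.FullyReplicated
    s3InputMode = ProcessingS3InputMode.File
}

The Builder listed under Types backs this DSL: the lambda configures a Builder instance, and the Companion's invoke operator produces the immutable BatchTransformInput value.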

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String