Package-level declarations
Types
A structure describing the source of an action.
Lists the properties of an action. An action represents an action or activity. Some examples are a workflow step and a model deployment. Generally, an action involves at least one input artifact or output artifact.
A structure describing an additional inference specification, which specifies details about inference jobs that can be run with models based on this model package.
Data sources that are available to your model in addition to the one that you specify for ModelDataSource when you use the CreateModel action.
A data source used for training or inference that is in addition to the input dataset or model data.
Edge Manager agent version.
The details of the alarm to monitor during the AMI update.
Specifies the training algorithm to use in a CreateTrainingJob request.
Specifies the validation and image scan statuses of the algorithm.
Represents the overall status of an algorithm.
Provides summary information about an algorithm.
Defines a training job and a batch transform job that SageMaker runs to validate your algorithm.
Specifies configurations for one or more training jobs that SageMaker runs to test the algorithm.
A collection of settings that configure the Amazon Q experience within the domain.
Configures how labels are consolidated across human workers and processes output data.
Details about an Amazon SageMaker AI app.
The configuration for running a SageMaker AI image as a KernelGateway app.
Settings that are used to configure and manage the lifecycle of Amazon SageMaker Studio applications.
Configuration to run a processing job in a specified container image.
A structure describing the source of an artifact.
The ID and ID type of an artifact source.
Lists a summary of the properties of an artifact. An artifact represents a URI addressable object or data. Some examples are a dataset and a model.
Lists a summary of the properties of an association. An association is an entity that links other lineage or experiment entities. An example would be an association between a training job and a model.
Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.
Specifies configuration for how an endpoint performs asynchronous inference.
Specifies the configuration for notifications of inference results for asynchronous inference.
Specifies the configuration for asynchronous inference invocation outputs.
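The asynchronous inference settings above can be sketched as a request fragment. This is a minimal illustration of the shape only: field names follow the SageMaker CreateEndpointConfig API, while the bucket path and SNS topic ARNs are placeholders.

```python
# Illustrative AsyncInferenceConfig for CreateEndpointConfig.
# Field names follow the SageMaker API; values are placeholders.
async_inference_config = {
    "ClientConfig": {
        "MaxConcurrentInvocationsPerInstance": 4,  # throttle per instance
    },
    "OutputConfig": {
        "S3OutputPath": "s3://example-bucket/async-results/",  # where results land
        "NotificationConfig": {
            "SuccessTopic": "arn:aws:sns:us-east-1:123456789012:success-topic",
            "ErrorTopic": "arn:aws:sns:us-east-1:123456789012:error-topic",
        },
    },
}
```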
Configuration for Athena Dataset Definition input.
The compression used for Athena query results.
The data storage format for Athena query results.
Contains a presigned URL and its associated local file path for downloading hub content artifacts.
The selection of algorithms trained on your dataset to generate the model candidates for an Autopilot job.
Information about a candidate produced by an AutoML training job, including its status, steps, and other properties.
Stores the configuration information for how a candidate is generated (optional).
Information about the steps for a candidate and what step it is working on.
A channel is a named input source that training algorithms can consume. The validation dataset size is limited to less than 2 GB. The training dataset size must be less than 100 GB. For more information, see Channel.
This data type is intended for use exclusively by SageMaker Canvas and cannot be used in other contexts at the moment.
A list of container definitions that describe the different containers that make up an AutoML candidate. For more information, see ContainerDefinition.
The data source for the Autopilot job.
This structure specifies how to split the data into train and validation datasets.
The artifacts that are generated during an AutoML job.
A channel is a named input source that training algorithms can consume. This channel is used for AutoML jobs V2 (jobs created by calling CreateAutoMLJobV2).
How long a job is allowed to run, or how many candidates a job is allowed to generate.
A collection of settings used for an AutoML job.
Specifies a metric to minimize or maximize as the objective of an AutoML job.
Metadata for an AutoML job step.
Provides a summary about an AutoML job.
The output data configuration.
The reason for a partial failure of an AutoML job.
A collection of settings specific to the problem type used to configure an AutoML job V2. There must be one and only one config of the following type.
Stores resolved attributes specific to the problem type of an AutoML job V2.
The resolved attributes used to configure an AutoML job V2.
Describes the Amazon S3 data source.
Security options.
The name and an example value of the hyperparameter that you want to use in Autotune. If Automatic model tuning (AMT) determines that your hyperparameter is eligible for Autotune, an optimal hyperparameter range is selected for you.
Automatic rollback configuration for handling endpoint deployment failures and recovery.
Configuration to control how SageMaker captures inference data for batch transform jobs.
Represents an error encountered when deleting a node from a SageMaker HyperPod cluster.
The error code and error description associated with the resource.
Provides summary information about the model package.
Input object for the batch transform job.
A structure that keeps track of which training jobs launched by your hyperparameter tuning job are not improving model performance as evaluated against an objective function.
Update policy for a blue/green deployment. If this update policy is specified, SageMaker creates a new fleet during the deployment while maintaining the old fleet. SageMaker flips traffic to the new fleet according to the specified traffic routing configuration. Only one update policy should be used in the deployment configuration. If no update policy is specified, SageMaker uses a blue/green deployment strategy with all at once traffic shifting by default.
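A blue/green update policy like the one described above can be sketched as a plain request fragment. This is only an illustration of the shape: field names follow the SageMaker UpdateEndpoint deployment configuration, and the alarm name and timing values are hypothetical.

```python
# Illustrative DeploymentConfig with a blue/green update policy.
# Field names follow the SageMaker API; values are examples only.
deployment_config = {
    "BlueGreenUpdatePolicy": {
        "TrafficRoutingConfiguration": {
            "Type": "CANARY",  # ALL_AT_ONCE | CANARY | LINEAR
            "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
            "WaitIntervalInSeconds": 300,  # bake time before shifting the rest
        },
        "TerminationWaitInSeconds": 600,  # keep the old (blue) fleet this long
        "MaximumExecutionTimeoutInSeconds": 1800,
    },
    "AutoRollbackConfiguration": {
        # Hypothetical CloudWatch alarm that triggers a rollback when it fires.
        "Alarms": [{"AlarmName": "my-endpoint-5xx-alarm"}],
    },
}
```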
Details on the cache hit of a pipeline execution step.
Metadata about a callback step.
The location of artifacts for an AutoML candidate job.
Stores the configuration information for how model candidates are generated using an AutoML job V2.
The properties of an AutoML candidate job.
The SageMaker Canvas application settings.
Specifies the type and size of the endpoint capacity to activate for a blue/green deployment, a rolling deployment, or a rollback strategy. You can specify your batches as either an instance count or an overall percentage of your fleet.
The configuration of the size measurements of the AMI update. Using this configuration, you can specify whether SageMaker should update your instance group by an amount or percentage of instances.
Configuration specifying how to treat different headers. If no headers are specified, Amazon SageMaker AI will base64 encode the captured data by default.
Specifies data Model Monitor will capture.
Environment parameters you want to benchmark your load test against.
A list of categorical hyperparameters to tune.
Defines the possible values for a categorical hyperparameter.
The CloudFormation template provider configuration for creating infrastructure resources.
A key-value pair that represents a parameter for the CloudFormation stack.
Details about the CloudFormation stack.
A key-value pair representing a parameter used in the CloudFormation stack.
A key-value pair representing a parameter used in the CloudFormation stack.
Details about a CloudFormation template provider configuration and associated provisioning information.
Contains configuration details for updating an existing CloudFormation template provider in the project.
Defines a named input source, called a channel, to be used by an algorithm.
Contains information about the output location for managed spot training checkpoint data.
The container for the metadata for the ClarifyCheck step. For more information, see the topic on ClarifyCheck step in the Amazon SageMaker Developer Guide.
The configuration parameters for the SageMaker Clarify explainer.
The inference configuration parameter for the model container.
The configuration for the SHAP baseline (also called the background or reference dataset) of the Kernel SHAP algorithm.
The configuration for SHAP analysis using SageMaker Clarify Explainer.
A parameter used to configure the SageMaker Clarify explainer to treat text features as text so that explanations are provided for individual units of text. Required only for natural language processing (NLP) explainability.
Defines the configuration for attaching an additional Amazon Elastic Block Store (EBS) volume to each instance of the SageMaker HyperPod cluster instance group. To learn more, see SageMaker HyperPod release notes: June 20, 2024.
Details of an instance group in a SageMaker HyperPod cluster.
The specifications of an instance group that you need to define.
Specifies the placement details for the node in the SageMaker HyperPod cluster, including the Availability Zone and the unique identifier (ID) of the Availability Zone.
Details of an instance in a SageMaker HyperPod cluster.
Defines the configuration for attaching additional storage to the instances in the SageMaker HyperPod cluster instance group. To learn more, see SageMaker HyperPod release notes: June 20, 2024.
The lifecycle configuration for a SageMaker HyperPod cluster.
Details of an instance (also called a node) in a SageMaker HyperPod cluster.
Lists a summary of the properties of an instance (also called a node) of a SageMaker HyperPod cluster.
The type of orchestrator used for the SageMaker HyperPod cluster.
The configuration settings for the Amazon EKS cluster used as the orchestrator for the SageMaker HyperPod cluster.
The instance group details of the restricted instance group (RIG).
The specifications of a restricted instance group that you need to define.
Summary of the cluster policy.
Lists a summary of the properties of a SageMaker HyperPod cluster.
The configuration for the file system and kernels in a SageMaker image running as a Code Editor app. The FileSystemConfig object is not supported.
The Code Editor application settings.
A Git repository that SageMaker AI automatically displays to users for cloning in the JupyterServer application.
Specifies summary information about a Git repository.
Use this parameter to configure your Amazon Cognito workforce. A single Cognito workforce is created using and corresponds to a single Amazon Cognito user pool.
Identifies an Amazon Cognito user group. A user group can be used in one or more work teams.
Configuration for your collection.
Configuration information for the Amazon SageMaker Debugger output tensor collections.
A summary of a model compilation job.
Configuration of the compute allocation definition for an entity. This includes the resource sharing option and the setting to preempt low priority tasks.
Configuration of the resources used for the compute allocation definition.
Summary of the compute allocation definition.
The target entity to allocate compute resources to.
Metadata for a Condition step.
There was a conflict when you attempted to modify a SageMaker entity such as an Experiment or Artifact.
The configuration used to run the application image container.
Describes the container, as part of model definition.
A structure describing the source of a context.
Lists a summary of the properties of a context. A context provides a logical grouping of other entities.
A list of continuous hyperparameters to tune.
Defines the possible values for a continuous hyperparameter.
A flag indicating that automatic model tuning (AMT) has detected model convergence, defined as a lack of significant improvement (1% or less) against an objective metric.
Contains configuration details for a template provider. Only one type of template provider can be specified.
A file system, created by you, that you assign to a user profile or space for an Amazon SageMaker AI Domain. Permitted users can access this file system in Amazon SageMaker AI Studio.
The settings for assigning a custom file system to a user profile or space for an Amazon SageMaker AI Domain. Permitted users can access this file system in Amazon SageMaker AI Studio.
A custom SageMaker AI image. For more information, see Bring your own SageMaker AI image.
A customized metric.
Details about the POSIX identity that is used for file system operations.
Configuration to control how SageMaker AI captures inference data.
The currently active data capture configuration used by your Endpoint.
The metadata of the Glue table that serves as the data catalog for the OfflineStore.
The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
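The input and output filters described above can be sketched as a request fragment. Field names follow the SageMaker CreateTransformJob DataProcessing structure; the JSONPath values are examples only and assume the first column of each record is an ID that should be excluded from inference but retained in the output.

```python
# Illustrative DataProcessing block for a batch transform job:
# filter the input for inference, then join predictions back onto the records.
data_processing = {
    "InputFilter": "$[1:]",    # drop the first column (e.g. an ID) before inference
    "JoinSource": "Input",     # attach each prediction to its source record
    "OutputFilter": "$[0,-1]", # keep the ID column and the appended prediction
}
```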
Information about the container that a data quality monitoring job runs.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
The input for the data quality monitoring job. Currently endpoints are supported for input.
Configuration for Dataset Definition inputs. The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.
Describes the location of the channel data.
Configuration information for the Amazon SageMaker Debugger hook parameters, metric and tensor collections, and storage paths. To learn more about how to configure the DebugHookConfig parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.
Configuration information for SageMaker Debugger rules for debugging. To learn more about how to configure the DebugRuleConfiguration parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.
Information about the status of the rule evaluation.
A collection of default EBS storage settings that apply to spaces created within a domain or user profile.
The default settings for shared spaces that users create in the domain.
The default storage settings for a space.
Gets the Amazon EC2 Container Registry path of the docker image of the model that is hosted in this ProductionVariant.
The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.
The configuration to use when updating the AMI versions.
A set of recommended deployment configurations for the model. To get more advanced recommendations, see CreateInferenceRecommendationsJob to create an inference recommendation job.
Contains information about a stage in an edge deployment plan.
Contains information summarizing the deployment stage results.
Information that SageMaker Neo automatically derived about the model.
Specifies weight and capacity values for a production variant.
Contains information summarizing device details and deployment status.
Summary of the device fleet.
Contains information about the configurations of selected devices.
Status of devices.
Summary of the device.
The model deployment settings for the SageMaker Canvas application.
A collection of settings that configure the domain's Docker interaction.
The domain's details.
A collection of settings that apply to the SageMaker Domain. These settings are specified through the CreateDomain API call.
A collection of Domain configuration settings to update.
Represents the drift check baselines that can be used when the model monitor is set using the model package.
Represents the drift check bias baselines that can be used when the model monitor is set using the model package.
Represents the drift check explainability baselines that can be used when the model monitor is set using the model package.
Represents the drift check data quality baselines that can be used when the model monitor is set using the model package.
Represents the drift check model quality baselines that can be used when the model monitor is set using the model package.
An object with the recommended values for you to specify when creating an autoscaling policy.
A collection of EBS storage settings that apply to both private and shared spaces.
The EC2 capacity reservations that are shared to an ML capacity reservation.
Contains information about the configuration of a deployment.
Contains information about the configuration of a model in a deployment.
Contains information summarizing an edge deployment plan.
Contains information summarizing the deployment stage results.
Status of edge devices with this model.
Summary of model on edge device.
The output configuration.
Summary of edge packaging job.
The output of a SageMaker Edge Manager deployable resource.
A file system, created by you in Amazon EFS, that you assign to a user profile or space for an Amazon SageMaker AI Domain. Permitted users can access this file system in Amazon SageMaker AI Studio.
The settings for assigning a custom Amazon EFS file system to a user profile or space for an Amazon SageMaker AI Domain.
This data type is intended for use exclusively by SageMaker Canvas and cannot be used in other contexts at the moment.
The settings for running Amazon EMR Serverless jobs in SageMaker Canvas.
The configuration parameters that specify the IAM roles assumed by the execution role of SageMaker (assumable roles) and the cluster instances or job execution environments (execution roles or runtime roles) to manage and access resources required for running Amazon EMR clusters or Amazon EMR Serverless applications.
The configurations and outcomes of an Amazon EMR step execution.
Metadata for an endpoint configuration step.
Provides summary information for an endpoint configuration.
Details about a customer endpoint that was compared in an Inference Recommender job.
Input object for the endpoint.
The endpoint configuration for the load test.
The metadata of the endpoint.
The endpoint configuration made by Inference Recommender during a recommendation job.
The performance results from running an Inference Recommender job on an existing endpoint.
Metadata for an endpoint step.
Provides summary information for an endpoint.
The configuration for the restricted instance groups (RIG) environment.
The configuration details for the restricted instance groups (RIG) environment.
A list of environment parameters suggested by the Amazon SageMaker Inference Recommender.
Specifies the range of environment parameters.
The properties of an experiment as returned by the Search API. For information about experiments, see the CreateExperiment API.
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:
The source of the experiment.
A summary of the properties of an experiment. To get the complete set of properties, call the DescribeExperiment API and provide the ExperimentName.
Contains explainability metrics for a model.
A parameter to activate explainers.
The container for the metadata for Fail step.
A list of features. You must include FeatureName and FeatureType. Valid feature FeatureTypes are Integral, Fractional, and String.
Amazon SageMaker Feature Store stores features in a collection called Feature Group. A Feature Group can be visualized as a table which has rows, with a unique identifier for each row where each column in the table is a feature. In principle, a Feature Group is composed of features and values per features.
The name, ARN, CreationTime, FeatureGroup values, LastUpdatedTime, and EnableOnlineStorage status of a FeatureGroup.
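The feature schema described above can be sketched as a plain list of definitions. Field names follow the SageMaker CreateFeatureGroup API; the feature names themselves are hypothetical.

```python
# Sketch of the FeatureDefinitions list used when creating a feature group.
# Each feature pairs a FeatureName with a FeatureType of Integral,
# Fractional, or String.
feature_definitions = [
    {"FeatureName": "customer_id", "FeatureType": "String"},       # row identifier
    {"FeatureName": "purchase_count", "FeatureType": "Integral"},
    {"FeatureName": "avg_basket_value", "FeatureType": "Fractional"},
]
```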
The metadata for a feature. It can either be metadata that you specify, or metadata that is updated automatically.
A key-value pair that you specify to describe the feature.
Contains details regarding the file source.
The Amazon Elastic File System storage configuration for a SageMaker AI image.
Specifies a file system data source for a channel.
The best candidate result from an AutoML training job.
Shows the latest objective metric emitted by a training job that was launched by a hyperparameter tuning job. You define the objective metric in the HyperParameterTuningJobObjective parameter of HyperParameterTuningJobConfig.
Contains information about where human output will be stored.
Contains summary information about the flow definition.
Configuration settings for an Amazon FSx for Lustre file system to be used with the cluster.
A custom file system in Amazon FSx for Lustre.
The settings for assigning a custom Amazon FSx for Lustre file system to a user profile or space for an Amazon SageMaker Domain.
The generative AI settings for the SageMaker Canvas application.
Specifies configuration details for a Git repository when the repository is updated.
The SageMaker images that are hidden from the Studio user interface. You must specify the SageMaker image name and version aliases.
Stores the holiday featurization attributes applicable to each item of time-series datasets during the training of a forecasting model. This allows the model to identify patterns associated with specific holidays.
The configuration for a private hub model reference that points to a public SageMaker JumpStart model.
Any dependencies related to hub content, such as scripts, model artifacts, datasets, or notebooks.
Information about hub content.
The Amazon S3 storage configuration of a hub.
Defines under what conditions SageMaker creates a human loop. Used within CreateFlowDefinition. See HumanLoopActivationConditionsConfig for the required format of activation conditions.
Provides information about how and under what conditions SageMaker creates a human loop. If HumanLoopActivationConfig is not given, then all requests go to humans.
Describes the work to be performed by human workers.
Container for configuring the source of human task requests.
Information required for human workers to complete a labeling task.
Container for human task user interface information.
The configuration for Hyperband, a multi-fidelity based hyperparameter tuning strategy. Hyperband uses the final and intermediate results of a training job to dynamically allocate resources to utilized hyperparameter configurations while automatically stopping under-performing configurations. This parameter should be provided only if Hyperband is selected as the StrategyConfig under the HyperParameterTuningJobConfig API.
Specifies which training algorithm to use for training jobs that a hyperparameter tuning job launches and the metrics to monitor.
Defines a hyperparameter to be used by an algorithm.
Defines the training jobs launched by a hyperparameter tuning job.
The container for the summary information about a training job.
The configuration for hyperparameter tuning resources for use in training jobs launched by the tuning job. These resources include compute instances and storage volumes. Specify one or more compute instance configurations and allocation strategies to select resources (optional).
A structure that contains runtime information about both current and completed hyperparameter tuning jobs.
Configures a hyperparameter tuning job.
The total resources consumed by your hyperparameter tuning job.
Defines the objective metric for a hyperparameter tuning job. Hyperparameter tuning uses the value of this metric to evaluate the training jobs it launches, and returns the training job that results in either the highest or lowest value for this metric, depending on the value you specify for the Type parameter. If you want to define a custom objective metric, see Define metrics and environment variables.
An entity returned by the SearchRecord API containing the properties of a hyperparameter tuning job.
The configuration for a training job launched by a hyperparameter tuning job. Choose Bayesian for Bayesian optimization, and Random for random search optimization. For more advanced use cases, use Hyperband, which evaluates objective metrics for training jobs after every epoch. For more information about strategies, see How Hyperparameter Tuning Works.
The strategy hyperparameter tuning uses to find the best combination of hyperparameters for your model.
Provides summary information about a hyperparameter tuning job.
Specifies the configuration for a hyperparameter tuning job that uses one or more previous hyperparameter tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.
The configuration of resources, including compute instances and storage volumes for use in training jobs launched by hyperparameter tuning jobs. HyperParameterTuningResourceConfig is similar to ResourceConfig, but has the additional InstanceConfigs and AllocationStrategy fields to allow for flexible instance management. Specify one or more instance types, count, and the allocation strategy for instance selection.
The IAM Identity details associated with the user. These details are associated with model package groups, model packages and project entities only.
Use this parameter to specify a supported global condition key that is added to the IAM policy.
The Amazon SageMaker Canvas application setting where you configure OAuth for connecting to an external data source, such as Snowflake.
Settings related to idle shutdown of Studio applications.
The collection of settings used by an AutoML job V2 for the image classification problem type.
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
A version of a SageMaker AI Image. A version represents an existing container image.
Specifies the type and size of the endpoint capacity to activate for a rolling deployment or a rollback strategy. You can specify your batches as either of the following:
Defines the compute resources to allocate to run a model, plus any adapter models, that you assign to an inference component. These resources include CPU cores, accelerators, and memory.
Defines a container that provides the runtime environment for a model that you deploy with an inference component.
Details about the resources that are deployed with this inference component.
The deployment configuration for an endpoint that hosts inference components. The configuration includes the desired deployment strategy and rollback settings.
Specifies a rolling deployment strategy for updating a SageMaker AI inference component.
Runtime settings for a model that is deployed with an inference component.
Details about the runtime settings for the model that is deployed with the inference component.
Details about the resources to deploy with this inference component, including the model, container, and compute resources.
Details about the resources that are deployed with this inference component.
Settings that take effect while the model container starts up.
A summary of the properties of an inference component.
Specifies details about how containers in a multi-container endpoint are run.
The Amazon S3 location and configuration for storing inference request and response data.
The start and end times of an inference experiment.
Lists a summary of properties of an inference experiment.
Configuration information specifying which hub contents have accessible deployment options.
The metrics for an existing endpoint compared in an Inference Recommender job.
A list of recommendations made by Amazon SageMaker Inference Recommender.
A structure that contains a list of recommendation jobs.
A returned array object for the Steps response field in the ListInferenceRecommendationsJobSteps API command.
Defines how to perform inference generation after a training job is run.
Configuration information for the infrastructure health check of a training job. A SageMaker-provided health check tests the health of instance hardware and cluster network connectivity.
Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.
Defines an instance group for heterogeneous cluster training. When requesting a training job using the CreateTrainingJob API, you can configure multiple instance groups .
Information on the IMDS configuration of the notebook instance.
For a hyperparameter of the integer type, specifies the range that a hyperparameter tuning job searches.
Defines the possible values for an integer hyperparameter.
The configuration for the file system and kernels in a SageMaker AI image running as a JupyterLab app. The FileSystemConfig object is not supported.
The settings for the JupyterLab application.
The JupyterServer app settings.
The Amazon SageMaker Canvas application setting where you configure document querying.
The KernelGateway app settings.
The configuration for the file system and kernels in a SageMaker AI image running as a KernelGateway app.
The specification of a Jupyter kernel.
Provides a breakdown of the number of objects labeled.
Provides counts for human-labeled tasks in the labeling job.
Provides configuration information for auto-labeling of your data objects. A LabelingJobAlgorithmsConfig object must be supplied in order to use auto-labeling.
Attributes of the data specified by the customer. Use these to describe the data to be labeled.
Provides information about the location of input data.
Provides summary information for a work team.
Input configuration information for a labeling job.
Specifies the location of the output produced by the labeling job.
Output configuration information for a labeling job.
Configure encryption on the storage volume attached to the ML compute instance used to run automated data labeling model training and inference.
The Amazon S3 location of the input data objects.
An Amazon SNS data source used for streaming labeling jobs.
A set of conditions for stopping a labeling job. If any of the conditions are met, the job is automatically stopped. You can use these conditions to control the cost of data labeling.
Provides summary information about a labeling job.
Metadata for a Lambda step.
A value that indicates whether the update was successful.
Lists a summary of the properties of a lineage group. A lineage group provides a group of shareable lineage entity resources.
Defines an Amazon Cognito or your own OIDC IdP user group that is part of a work team.
Metadata properties of the tracking entity, trial, or trial component.
The name, value, and date and time of a metric that was emitted to Amazon CloudWatch.
Information about the metric for a candidate produced by an AutoML job.
Specifies a metric that the training algorithm writes to stderr or stdout. You can view these logs to understand how your training job performs and check for any errors encountered during training. SageMaker hyperparameter tuning captures all defined metrics. Specify one of the defined metrics to use as an objective metric using the TuningObjective parameter in the HyperParameterTrainingJobDefinition API to evaluate job performance during hyperparameter tuning.
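A metric definition of this kind pairs a name with a regular expression that SageMaker applies to the algorithm's log output. The sketch below shows the shape and how such a regex extracts a value; the log line format and metric name are hypothetical.

```python
import re

# Sketch of MetricDefinitions for a training job: each entry pairs a metric
# name with a regex applied to the algorithm's stdout/stderr logs.
metric_definitions = [
    {"Name": "validation:loss", "Regex": r"val_loss=([0-9\.]+)"},
]

# Simulate extracting the metric from a (hypothetical) log line.
sample_log_line = "epoch=3 val_loss=0.245"
match = re.search(metric_definitions[0]["Regex"], sample_log_line)
value = float(match.group(1))  # 0.245
```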
An object containing information about a metric.
Details about the metrics source.
The access configuration file to control access to the ML model. You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig.
Provides information about the location that is configured for storing model artifacts.
Docker container image configuration object for the model bias job.
The configuration for a baseline model bias job.
Inputs for the model bias job.
The artifacts of the model card export job.
Attribute by which to sort returned export jobs.
The summary of the Amazon SageMaker Model Card export job.
Configure the export output details for an Amazon SageMaker Model Card.
Configure the security settings to protect model card data.
A summary of the model card.
A summary of a specific version of the model card.
Configures the timeout and maximum number of retries for processing a transform job invocation.
Settings for the model compilation technique that's applied by a model optimization job.
Defines the model configuration. Includes the specification name and environment parameters.
An endpoint that hosts a model displayed in the Amazon SageMaker Model Dashboard.
An alert action taken to light up an icon on the Amazon SageMaker Model Dashboard when an alert goes into InAlert status.
A model displayed in the Amazon SageMaker Model Dashboard.
The model card for a model displayed in the Amazon SageMaker Model Dashboard.
A monitoring schedule for a model displayed in the Amazon SageMaker Model Dashboard.
Data quality constraints and statistics for a model.
Specifies the location of ML model data to deploy. If specified, you must specify one and only one of the available data sources.
Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.
Provides information about the endpoint of the model deployment.
Provides information to verify the integrity of stored model artifacts.
Docker container image configuration object for the model explainability job.
The configuration for a baseline model explainability job.
Inputs for the model explainability job.
The configuration for the infrastructure that the model will be deployed to.
Input object for the model.
The model latency threshold.
A structure describing the current state of the model in its life cycle.
Part of the search expression. You can specify the name and value (domain, task, framework, framework version, and model).
One or more filters that search for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results.
A summary of the model metadata.
Contains metrics captured from a model.
A container for your trained model that can be deployed for SageMaker inference. This can include inference code, artifacts, and metadata. The model package type can be one of the following.
Describes the Docker container for the model package.
A group of versioned models in the Model Registry.
Summary information about a model group.
The model card associated with the model package. Since ModelPackageModelCard is tied to a model package, it is a specific usage of a model card and its schema is simplified compared to the schema of ModelCard. The ModelPackageModelCard schema does not include model_package_details, and model_overview is composed of the model_creator and model_artifact properties. For more information about the model package model card schema, see Model package model card schema. For more information about the model card associated with the model package, see View the Details of a Model Version.
An optional Key Management Service key to encrypt, decrypt, and re-encrypt model package information for regulated workloads with highly sensitive data.
Specifies the validation and image scan statuses of the model package.
Represents the overall status of a model package.
Provides summary information about a model package.
Contains data, such as the inputs and targeted instance types that are used in the process of validating the model package.
Specifies batch transform jobs that SageMaker runs to validate your model package.
Model quality statistics and constraints.
Container image configuration object for the monitoring job.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
The input for the model quality monitoring job. Currently endpoints are supported for input for model quality monitoring jobs.
Settings for the model quantization technique that's applied by a model optimization job.
The model registry settings for the SageMaker Canvas application.
Settings for the model sharding technique that's applied by a model optimization job.
Metadata for Model steps.
Provides summary information about a model.
Contains information about the deployment options of a model.
Summary of the deployment configuration of a model.
A list of alert actions taken in response to an alert going into InAlert status.
Provides summary information of an alert's history.
Provides summary information about a monitor alert.
Container image configuration object for the monitoring job.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
Configuration for the cluster used to run model monitoring jobs.
The constraints resource for a monitoring job.
Represents the CSV dataset format used when running a monitoring job.
Represents the dataset format used when running a monitoring job.
Summary of information about the last monitoring job to run.
The ground truth labels for the dataset used for the monitoring job.
The inputs for a monitoring job.
Defines the monitoring job.
Summary information about a monitoring job.
Represents the JSON dataset format used when running a monitoring job.
The networking configuration for the monitoring job.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Represents the Parquet dataset format used when running a monitoring job.
Identifies the resources to deploy for a monitoring job.
Information about where and how you want to store the results of a monitoring job.
A schedule for a model monitoring job. For information about model monitor, see Amazon SageMaker Model Monitor.
Configures the monitoring schedule and defines the monitoring job.
Summarizes the monitoring schedule.
The statistics resource for a monitoring job.
A time limit for how long the monitoring job is allowed to run before stopping.
Specifies additional configuration for hosting multi-model endpoints.
The VpcConfig configuration object that specifies the VPC that you want the compilation jobs to connect to. For more information on controlling access to your Amazon S3 buckets used for the compilation job, see Give Amazon SageMaker AI Compilation Jobs Access to Resources in Your Amazon VPC.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Provides a summary of a notebook instance lifecycle configuration.
Contains the notebook instance lifecycle configuration script.
Provides summary information for a SageMaker AI notebook instance.
Configures Amazon SNS notifications of available or expiring work items for work teams.
Specifies the number of training jobs that this hyperparameter tuning job launched, categorized by the status of their objective metric. The objective metric status shows whether the final objective metric for the training job has been evaluated by the tuning job and used in the hyperparameter tuning process.
The configuration of an OfflineStore.
The status of OfflineStore.
Use this parameter to configure your OIDC Identity Provider (IdP).
Your OIDC IdP workforce configuration.
A list of user groups that exist in your OIDC Identity Provider (IdP). One to ten groups can be used to create a single private work team. When you add a user group to the list of Groups, you can add that user group to one or more private work teams. If you add a user group to a private work team, all workers in that user group are added to the work team.
Use this to specify the Amazon Web Services Key Management Service (KMS) Key ID, or KMSKeyId, for at-rest data encryption. You can turn OnlineStore on or off by specifying the EnableOnlineStore flag when you create the feature group.
Updates the feature group online store configuration.
The security configuration for OnlineStore.
Settings for an optimization technique that you apply with a model optimization job.
The location of the source model to optimize with an optimization job.
The Amazon S3 location of a source model to optimize with an optimization job.
Details for where to store the optimized model that you create with the optimization job.
Summarizes an optimization job by providing some of its key properties.
The access configuration settings for the source ML model for an optimization job, where you can accept the model end-user license agreement (EULA).
Output values produced by an optimization job.
A VPC in Amazon VPC that's accessible to an optimized model that you create with an optimization job. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
Contains information about the output location for the compiled model and the target device that the model runs on. TargetDevice and TargetPlatform are mutually exclusive, so you must choose one of the two to specify your target device or platform. If you cannot find the device you want to use in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and CompilerOptions if there are specific settings that are required or recommended for the particular TargetPlatform.
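As a hedged sketch of the mutual exclusivity described above, the fragment below models the output configuration as a plain dictionary and rejects invalid combinations; the field names and values are assumptions based on the public SageMaker compilation API:

```python
# Hypothetical sketch of an output configuration (assumed field names).
# TargetDevice and TargetPlatform are mutually exclusive.
output_config = {
    "S3OutputLocation": "s3://my-bucket/compiled/",
    "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
    "CompilerOptions": '{"gpu-code": "sm_72"}',  # platform-specific settings
}

def validate(config):
    """Reject configs that set both TargetDevice and TargetPlatform."""
    if "TargetDevice" in config and "TargetPlatform" in config:
        raise ValueError("TargetDevice and TargetPlatform are mutually exclusive")
    return config

validate(output_config)  # passes: only TargetPlatform is set
```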
Provides information about how to store model training results (model artifacts).
An output parameter of a pipeline step.
The collection of ownership settings for a space.
Specifies summary information about the ownership settings.
Configuration that controls the parallelism of the pipeline. By default, the parallelism configuration specified applies to all executions of the pipeline unless overridden.
Defines the possible values for categorical, continuous, and integer hyperparameters to be used by an algorithm.
Specifies ranges of integer, continuous, and categorical hyperparameters that a hyperparameter tuning job searches. The hyperparameter tuning job launches training jobs with hyperparameter values within these ranges to find the combination of values that result in the training job with the best performance as measured by the objective metric of the hyperparameter tuning job.
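To make the three range kinds concrete, here is a minimal sketch of declared ranges and a naive random-search draw within them; the field names follow my understanding of the boto3-style SageMaker API and should be verified against the current reference:

```python
import random

# Hypothetical sketch: hyperparameter ranges (assumed boto3-style names)
# and one random draw from them, as a tuning job might do.
parameter_ranges = {
    "IntegerParameterRanges": [
        {"Name": "num_layers", "MinValue": "2", "MaxValue": "8"},
    ],
    "ContinuousParameterRanges": [
        {"Name": "learning_rate", "MinValue": "0.0001", "MaxValue": "0.1"},
    ],
    "CategoricalParameterRanges": [
        {"Name": "optimizer", "Values": ["sgd", "adam"]},
    ],
}

def sample(ranges, rng):
    """Draw one hyperparameter combination from the declared ranges."""
    combo = {}
    for r in ranges["IntegerParameterRanges"]:
        combo[r["Name"]] = rng.randint(int(r["MinValue"]), int(r["MaxValue"]))
    for r in ranges["ContinuousParameterRanges"]:
        combo[r["Name"]] = rng.uniform(float(r["MinValue"]), float(r["MaxValue"]))
    for r in ranges["CategoricalParameterRanges"]:
        combo[r["Name"]] = rng.choice(r["Values"])
    return combo

combo = sample(parameter_ranges, random.Random(0))
print(combo)
```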
A previously completed or stopped hyperparameter tuning job to be used as a starting point for a new hyperparameter tuning job.
Configuration settings for the SageMaker Partner AI App.
Maintenance configuration settings for the SageMaker Partner AI App.
A subset of information related to a SageMaker Partner AI App. This information is used as part of the ListPartnerApps API response.
The summary of an in-progress deployment when an endpoint is creating or updating with a new endpoint configuration.
The production variant summary for a deployment when an endpoint is creating or updating with the CreateEndpoint or UpdateEndpoint operations. Describes the VariantStatus, weight, and capacity for a production variant associated with an endpoint.
The location of the pipeline definition stored in Amazon S3.
An execution of a pipeline.
An execution of a step in a pipeline.
Metadata for a step execution.
A pipeline execution summary.
Specifies the names of the experiment and trial created by a pipeline.
A summary of a pipeline.
The version of the pipeline.
The summary of the pipeline version.
A specification for a predefined metric.
Configuration for accessing hub content through presigned URLs, including license agreement acceptance and URL validation settings.
Priority class configuration. When included in PriorityClasses, these class configurations define how tasks are queued.
Configuration for the cluster used to run a processing job.
Configuration for processing job outputs in Amazon SageMaker Feature Store.
The inputs for a processing job. The processing input must specify exactly one of either S3Input or DatasetDefinition types.
An Amazon SageMaker processing job that is used to analyze data and evaluate models. For more information, see Process Data and Evaluate Models.
Metadata for a processing job step.
Summary of information about a processing job.
Describes the results of a processing job. The processing output must specify exactly one of either S3Output or FeatureStoreOutput types.
Configuration for uploading output from the processing container.
Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
Configuration for downloading input data from Amazon S3 into the processing container.
Configuration for uploading output data to Amazon S3 from the processing container.
Configures conditions under which the processing job should be stopped, such as how long the processing job has been running. After the condition is met, the processing job is stopped.
Identifies a model that you want to host and the resources chosen to deploy for hosting it. If you are deploying multiple models, tell SageMaker how to distribute traffic among the models by specifying variant weights. For more information on production variants, see Production variants.
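To illustrate how variant weights distribute traffic, the sketch below computes each variant's traffic share as its weight over the sum of weights; the field names are assumptions based on the public SageMaker API:

```python
# Hypothetical sketch: two production variants; traffic is split in
# proportion to each variant's weight (assumed boto3-style field names).
variants = [
    {"VariantName": "model-a", "InitialInstanceCount": 1,
     "InstanceType": "ml.m5.large", "InitialVariantWeight": 3.0},
    {"VariantName": "model-b", "InitialInstanceCount": 1,
     "InstanceType": "ml.m5.large", "InitialVariantWeight": 1.0},
]

total = sum(v["InitialVariantWeight"] for v in variants)
shares = {v["VariantName"]: v["InitialVariantWeight"] / total for v in variants}
print(shares)  # {'model-a': 0.75, 'model-b': 0.25}
```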
Settings for the capacity reservation for the compute instances that SageMaker AI reserves for an endpoint.
Details about an ML capacity reservation.
Specifies configuration for a core dump from the model container when the process crashes.
Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
Specifies the serverless configuration for an endpoint variant.
Specifies the serverless update concurrency configuration for an endpoint variant.
Describes the status of the production variant.
Describes weight and capacities for a production variant associated with an endpoint. If you sent a request to the UpdateEndpointWeightsAndCapacities API and the endpoint status is Updating, you get different desired and current values.
Configuration information for Amazon SageMaker Debugger system monitoring, framework profiling, and storage paths.
Configuration information for updating the Amazon SageMaker Debugger profile parameters, system and framework metrics configurations, and storage paths.
Configuration information for profiling rules.
Information about the status of the rule evaluation.
Information about a project.
Part of the SuggestionQuery type. Specifies a hint for retrieving property names that begin with the specified text.
A property name returned from a GetSearchSuggestions call that specifies a value in the PropertyNameQuery field.
A key value pair used when you provision a project as a service catalog product. For information, see What is Amazon Web Services Service Catalog.
Defines the amount of money paid to an Amazon Mechanical Turk worker for each task performed.
Container for the metadata for a Quality check step. For more information, see the topic on QualityCheck step in the Amazon SageMaker Developer Guide.
A set of filters to narrow the set of lineage entities connected to the StartArn(s) returned by the QueryLineage API action.
The infrastructure configuration for deploying the model to a real-time inference endpoint.
The recommended configuration to use for Real-Time Inference.
Provides information about the output configuration for the compiled model.
Specifies mandatory fields for running an Inference Recommender job directly in the CreateInferenceRecommendationsJob API. The fields specified in ContainerConfig override the corresponding fields in the model package. Use ContainerConfig if you want to specify these fields for the recommendation job but don't want to edit them in your model package.
The details for a specific benchmark from an Inference Recommender job.
The input configuration of the recommendation job.
Provides information about the output configuration for the compiled model.
The configuration for the payload for a recommendation job.
Specifies the maximum number of jobs that can run in parallel and the maximum number of jobs that can run.
Specifies conditions for stopping a job. When a job reaches a stopping condition limit, SageMaker ends the job.
Inference Recommender provisions SageMaker endpoints with access to VPC in the inference recommendation job.
The metrics of recommendations.
Configuration for Redshift Dataset Definition input.
The compression used for Redshift query results.
The data storage format for Redshift query results.
Metadata for a register model job step.
Configuration for remote debugging for the CreateTrainingJob API. To learn more about the remote debugging functionality of SageMaker, see Access a training container through Amazon Web Services Systems Manager (SSM) for remote debugging.
Configuration for remote debugging for the UpdateTrainingJob API. To learn more about the remote debugging functionality of SageMaker, see Access a training container through Amazon Web Services Systems Manager (SSM) for remote debugging.
Contains input values for a task.
A description of an error that occurred while rendering the template.
Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel and the private Docker registry where the model image is hosted requires authentication.
Details about a reserved capacity offering for a training plan offering.
Details of a reserved capacity for the training plan.
The resolved attributes.
A resource catalog containing all of the resources of a specific resource type within a resource owner account. For an example on sharing the Amazon SageMaker Feature Store DefaultFeatureGroupCatalog, see Share Amazon SageMaker Catalog resource type in the Amazon SageMaker Developer Guide.
Describes the resources, including machine learning (ML) compute instances and ML storage volumes, to use for model training.
The ResourceConfig to update KeepAlivePeriodInSeconds. Other fields in the ResourceConfig cannot be updated.
Resource being accessed is in use.
You have exceeded a SageMaker resource limit. For example, you might have too many training jobs created.
Specifies the maximum number of training jobs and parallel training jobs that a hyperparameter tuning job can launch.
Resource being accessed is not found.
Resource sharing configuration.
Specifies the ARN's of a SageMaker AI image and SageMaker AI image version, and the instance type that the version runs on.
The retention policy for data stored on an Amazon Elastic File System volume.
The retry strategy to use when a training job fails due to an InternalServerError. RetryStrategy is specified as part of the CreateTrainingJob and CreateHyperParameterTuningJob requests. You can add the StoppingCondition parameter to the request to limit the training time for the complete job.
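As a hedged sketch of how a bounded retry strategy behaves, the fragment below retries an attempt on an internal error up to a maximum count; the MaximumRetryAttempts field name is an assumption based on the public SageMaker API:

```python
# Hypothetical sketch: retry a training attempt on internal errors,
# bounded by MaximumRetryAttempts (assumed field name).
retry_strategy = {"MaximumRetryAttempts": 3}

def run_with_retries(attempt_fn, strategy):
    """Run attempt_fn, retrying on failure up to the configured limit."""
    last_error = None
    for _ in range(1 + strategy["MaximumRetryAttempts"]):
        try:
            return attempt_fn()
        except RuntimeError as err:  # stands in for InternalServerError
            last_error = err
    raise last_error

calls = {"n": 0}
def flaky():
    """Fail twice, then succeed, to exercise the retry loop."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("InternalServerError")
    return "Completed"

result = run_with_retries(flaky, retry_strategy)
print(result)  # Completed (on the third attempt)
```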
The configurations that SageMaker uses when updating the AMI versions.
Specifies a rolling deployment strategy for updating a SageMaker endpoint.
A collection of settings that apply to an RSessionGateway app.
A collection of settings that configure user interaction with the RStudioServerPro app.
A collection of settings that configure the RStudioServerPro Domain-level app.
A collection of settings that update the current configuration for the RStudioServerPro Domain-level app.
Describes the S3 data source.
Specifies the S3 location of ML model data to deploy.
The Amazon Simple Storage Service (Amazon S3) location and security configuration for OfflineStore.
Base class for all service-related exceptions thrown by the SageMaker client.
An object containing a recommended scaling policy.
The metric for a scaling policy.
An object where you specify the anticipated traffic pattern for an endpoint.
Configuration details about the monitoring schedule.
The configuration object of the schedule that SageMaker follows when updating the AMI.
Cluster policy configuration. This policy is used for task prioritization and fair-share allocation. This helps prioritize critical workloads and distributes idle compute across entities.
A multi-expression that searches for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results. You must specify at least one subexpression, filter, or nested filter. A SearchExpression can contain up to twenty elements.
A single resource returned as part of the Search API response.
An array element of SecondaryStatusTransitions for DescribeTrainingJob. It provides additional details about a status that the training job has transitioned through. A training job can be in one of several states, for example, starting, downloading, training, or uploading. Within each state, there are a number of intermediate states. For example, within the starting state, SageMaker could be starting the training job or launching the ML instances. These transitional states are referred to as the job's secondary status.
A step selected to run in selective execution mode.
The selective execution configuration applied to the pipeline run.
The ARN from an execution of the current pipeline.
Details of a provisioned service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.
Details that you specify to provision a service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.
Details that you specify to provision a service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.
Contains information about attribute-based access control (ABAC) for a training job. The session chaining configuration uses Amazon Security Token Service (STS) for your training job to request temporary, limited-privilege credentials to tenants. For more information, see Attribute-based access control (ABAC) for multi-tenancy training.
The configuration of the ShadowMode inference experiment type, which specifies a production variant that takes all the inference requests, and a shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also specifies the percentage of requests that Amazon SageMaker replicates.
The name and sampling percentage of a shadow variant.
Specifies options for sharing Amazon SageMaker AI Studio notebooks. These settings are specified as part of DefaultUserSettings when the CreateDomain API is called, and as part of UserSettings when the CreateUserProfile API is called. When SharingSettings is not specified, notebook sharing isn't allowed.
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, the results of the S3 key prefix matches are shuffled. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
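To illustrate why the Seed value matters, the sketch below shows that a seeded shuffle is deterministic: the same seed always produces the same input order. The Seed field name is an assumption based on the public SageMaker API:

```python
import random

# Hypothetical sketch: with a fixed Seed, shuffling the same input keys
# yields the same order on every run (assumed boto3-style field name).
shuffle_config = {"Seed": 42}

def shuffled(keys, seed):
    """Return a deterministically shuffled copy of keys for a given seed."""
    out = list(keys)
    random.Random(seed).shuffle(out)
    return out

keys = ["s3://bucket/part-0", "s3://bucket/part-1", "s3://bucket/part-2"]
print(shuffled(keys, shuffle_config["Seed"]) == shuffled(keys, shuffle_config["Seed"]))  # True
```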
Specifies an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your SageMaker account or an algorithm in Amazon Web Services Marketplace that you are subscribed to.
A list of algorithms that were used to create a model package.
A list of IP address ranges (CIDRs). Used to create an allow list of IP addresses for a private workforce. Workers will only be able to log in to their worker portal from an IP address within this range. By default, a workforce isn't restricted to specific IP addresses.
Settings that are used to configure and manage the lifecycle of Amazon SageMaker Studio applications in a space.
The application settings for a Code Editor space.
The space's details.
Settings related to idle shutdown of Studio applications in a space.
The settings for the JupyterLab application within a space.
A collection of space settings.
Specifies summary information about the space settings.
A collection of space sharing settings.
Specifies summary information about the space sharing settings.
The storage settings for a space.
Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
Details of the Amazon SageMaker AI Studio Lifecycle Configuration.
Studio settings. If these settings are applied on a user level, they take priority over the settings applied on a domain level.
Describes a work team of a vendor that does the labeling job.
Specified in the GetSearchSuggestions request. Limits the property names that are included in the response.
The collection of settings used by an AutoML job V2 for the tabular problem type.
The resolved attributes specific to the tabular problem type.
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.
A target tracking scaling policy. Includes support for predefined or customized metrics.
Details about a template provider configuration and associated provisioning information.
The TensorBoard app settings.
Configuration of storage locations for the Amazon SageMaker Debugger TensorBoard output data.
The collection of settings used by an AutoML job V2 for the text classification problem type.
The collection of settings used by an AutoML job V2 for the text generation problem type.
The resolved attributes specific to the text generation problem type.
Used to set feature group throughput configuration. There are two modes: ON_DEMAND and PROVISIONED. With on-demand mode, you are charged for data reads and writes that your application performs on your feature group. You do not need to specify read and write throughput because Feature Store accommodates your workloads as they ramp up and down. You can switch a feature group to on-demand only once in a 24 hour period. With provisioned throughput mode, you specify the read and write capacity per second that you expect your application to require, and you are billed based on those limits. Exceeding provisioned throughput will result in your requests being throttled.
Active throughput configuration of the feature group. There are two modes: ON_DEMAND and PROVISIONED. With on-demand mode, you are charged for data reads and writes that your application performs on your feature group. You do not need to specify read and write throughput because Feature Store accommodates your workloads as they ramp up and down. You can switch a feature group to on-demand only once in a 24 hour period. With provisioned throughput mode, you specify the read and write capacity per second that you expect your application to require, and you are billed based on those limits. Exceeding provisioned throughput will result in your requests being throttled.
The new throughput configuration for the feature group. You can switch between on-demand and provisioned modes or update the read / write capacity of provisioned feature groups. You can switch a feature group to on-demand only once in a 24 hour period.
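As a hedged sketch of the two throughput modes described above, the fragment below models throttling behavior under provisioned limits; the field names are assumptions based on the public Feature Store API:

```python
# Hypothetical sketch: the two throughput modes (assumed field names).
on_demand = {"ThroughputMode": "OnDemand"}
provisioned = {
    "ThroughputMode": "Provisioned",
    "ProvisionedReadCapacityUnits": 100,
    "ProvisionedWriteCapacityUnits": 50,
}

def is_throttled(reads_per_s, writes_per_s, config):
    """With provisioned throughput, exceeding either limit throttles requests."""
    if config["ThroughputMode"] == "OnDemand":
        return False  # capacity scales with the workload
    return (reads_per_s > config["ProvisionedReadCapacityUnits"]
            or writes_per_s > config["ProvisionedWriteCapacityUnits"])

print(is_throttled(120, 10, provisioned))  # True: reads exceed 100/s
print(is_throttled(120, 10, on_demand))    # False
```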
The collection of components that defines the time-series.
The collection of settings used by an AutoML job V2 for the time-series forecasting problem type.
Time series forecast settings for the SageMaker Canvas application.
Transformations allowed on the dataset. Supported transformations are Filling and Aggregation. Filling specifies how to add values to missing values in the dataset. Aggregation defines how to aggregate data that does not align with forecast frequency.
The summary of the tracking server to list.
Defines the traffic pattern of the load test.
Defines the traffic routing strategy during an endpoint deployment to shift traffic from the old fleet to the new fleet.
The configuration to use an image from a private Docker registry for a training job.
The training input mode that the algorithm supports. For more information about input modes, see Algorithms.
Contains information about a training job.
Defines the input needed to run a training job using the algorithm.
The numbers of training jobs launched by a hyperparameter tuning job, categorized by status.
Metadata for a training job step.
Provides summary information about a training job.
A filter to apply when listing or searching for training plans.
Details about a training plan offering.
Details of the training plan.
An object containing authentication information for a private Docker registry.
Defines how the algorithm is used for a training job.
Describes the location of the channel data.
Describes the input source of a transform job and the way the transform job consumes it.
A batch transform job. For information about SageMaker batch transform, see Use Batch Transform.
Defines the input needed to run a transform job using the inference specification specified in the algorithm.
Metadata for a transform job step.
Provides a summary of a transform job. Multiple TransformJobSummary objects are returned as a list in response to a ListTransformJobs call.
Describes the results of a transform job.
Describes the resources, including ML instance types and ML instance count, to use for a transform job.
Describes the S3 data source.
The properties of a trial component as returned by the Search API.
Represents an input or output artifact of a trial component. You specify TrialComponentArtifact as part of the InputArtifacts and OutputArtifacts parameters in the CreateTrialComponent request.
A summary of the metrics of a trial component.
The value of a hyperparameter. Only one of NumberValue or StringValue can be specified.
A short summary of a trial component.
The Amazon Resource Name (ARN) and job type of the source of a trial component.
Detailed information about the source of a trial component. Either ProcessingJob or TrainingJob is returned.
The status of the trial component.
A summary of the properties of a trial component. To get all the properties, call the DescribeTrialComponent API and provide the TrialComponentName.
The source of the trial.
A summary of the properties of a trial. To get the complete set of properties, call the DescribeTrial API and provide the TrialName.
Time to live duration, where the record is hard deleted after the expiration time is reached; ExpiresAt = EventTime + TtlDuration. For information on HardDelete, see the DeleteRecord API in the Amazon SageMaker API Reference guide.
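The stated relationship ExpiresAt = EventTime + TtlDuration can be sketched directly; the Unit/Value shape of TtlDuration is an assumption based on the public Feature Store API:

```python
from datetime import datetime, timedelta

# Sketch of the stated relationship: ExpiresAt = EventTime + TtlDuration
# (the Unit/Value field names of TtlDuration are assumed).
ttl_duration = {"Unit": "Days", "Value": 7}

event_time = datetime(2024, 1, 1, 12, 0, 0)
expires_at = event_time + timedelta(days=ttl_duration["Value"])
print(expires_at.isoformat())  # 2024-01-08T12:00:00
```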
The job completion criteria.
Metadata for a tuning step.
The Liquid template for the worker user interface.
Container for user interface template information.
The settings that apply to an Amazon SageMaker AI domain when you use it in Amazon SageMaker Unified Studio.
The configuration that describes specifications of the instance groups to update.
Contains configuration details for updating an existing template provider in the project.
Information about the user who created or modified a SageMaker resource.
The user profile details.
A collection of settings that apply to users in a domain. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings when the CreateDomain API is called.
Specifies a production variant property type for an Endpoint.
Configuration for your vector collection type.
The list of key-value pairs used to filter your search results. If a search result contains a key from your list, it is included in the final search response if the value associated with the key in the result matches the value you specified. If the value doesn't match, the result is excluded from the search response. Any resources that don't have a key from the list that you've provided will also be included in the search response.
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
Status and billing information about the warm pool.
Use this optional parameter to constrain access to an Amazon S3 resource based on the IP address using supported IAM global condition keys. The Amazon S3 resource is accessed in the worker portal using an Amazon S3 presigned URL.
A single private workforce, which is automatically created when you create your first private work team. You can create one private workforce in each Amazon Web Services Region. By default, any workforce-related API operation used in a specific Region will apply to the workforce created in that Region. To learn how to create a private workforce, see Create a Private Workforce.
The VPC object you use to create or update a workforce.
A VpcConfig object that specifies the VPC that you want your workforce to connect to.
The workspace settings for the SageMaker Canvas application.