Package software.amazon.awssdk.services.sagemaker.model
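All classes in this package are immutable and follow the SDK's builder pattern. As a minimal sketch (the bucket path is a hypothetical example value, not from this page), a training input channel might be assembled like this:

```java
import software.amazon.awssdk.services.sagemaker.model.Channel;
import software.amazon.awssdk.services.sagemaker.model.DataSource;
import software.amazon.awssdk.services.sagemaker.model.S3DataSource;
import software.amazon.awssdk.services.sagemaker.model.S3DataType;

public class ChannelExample {
    public static void main(String[] args) {
        // Model classes are constructed via builder() and are immutable once built.
        Channel trainingChannel = Channel.builder()
                .channelName("train")
                .dataSource(DataSource.builder()
                        .s3DataSource(S3DataSource.builder()
                                .s3DataType(S3DataType.S3_PREFIX)
                                .s3Uri("s3://example-bucket/train/") // hypothetical location
                                .build())
                        .build())
                .build();

        System.out.println(trainingChannel.channelName()); // prints "train"
    }
}
```

The same pattern applies throughout: nested structures (here DataSource and S3DataSource) are themselves built with their own builders and passed to the parent builder.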
ClassDescriptionA structure describing the source of an action.Lists the properties of an action.A structure of additional Inference Specification.A data source used for training or inference that is in addition to the input dataset or model data.Edge Manager agent version.An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.Specifies the training algorithm to use in a CreateTrainingJob request.Specifies the validation and image scan statuses of the algorithm.Represents the overall status of an algorithm.Provides summary information about an algorithm.Defines a training job and a batch transform job that SageMaker runs to validate your algorithm.Specifies configurations for one or more training jobs that SageMaker runs to test the algorithm.Configures how labels are consolidated across human workers and processes output data.Details about an Amazon SageMaker app.The configuration for running a SageMaker image as a KernelGateway app.Configuration to run a processing job in a specified container image.A structure describing the source of an artifact.The ID and ID type of an artifact source.Lists a summary of the properties of an artifact.Lists a summary of the properties of an association.Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.Specifies configuration for how an endpoint performs asynchronous inference.Specifies the configuration for notifications of inference results for asynchronous inference.Specifies the configuration for asynchronous inference invocation outputs.Configuration for Athena Dataset Definition input.The compression used for Athena query results.The data storage format for Athena query results.The collection of algorithms run on a dataset for training the model candidates of an Autopilot job.Information about a candidate produced by an AutoML training job, including its status, steps, and other properties.Stores the configuration information for how a 
candidate is generated (optional).Information about the steps for a candidate and what step it is working on.A channel is a named input source that training algorithms can consume.A list of container definitions that describe the different containers that make up an AutoML candidate.The data source for the Autopilot job.This structure specifies how to split the data into train and validation datasets.The artifacts that are generated during an AutoML job.A channel is a named input source that training algorithms can consume.How long a job is allowed to run, or how many candidates a job is allowed to generate.A collection of settings used for an AutoML job.Specifies a metric to minimize or maximize as the objective of an AutoML job.Metadata for an AutoML job step.Provides a summary about an AutoML job.The output data configuration.The reason for a partial failure of an AutoML job.A collection of settings specific to the problem type used to configure an AutoML job V2.Stores resolved attributes specific to the problem type of an AutoML job V2.The resolved attributes used to configure an AutoML job V2.Describes the Amazon S3 data source.Security options.The name and an example value of the hyperparameter that you want to use in Autotune.Automatic rollback configuration for handling endpoint deployment failures and recovery.A flag to indicate if you want to use Autotune to automatically find optimal values for the following fields:Configuration to control how SageMaker captures inference data for batch transform jobs.The error code and error description associated with the resource.Provides summary information about the model package.Input object for the batch transform job.A structure that keeps track of which training jobs launched by your hyperparameter tuning job are not improving model performance as evaluated against an objective function.Contains bias metrics for a model.Update policy for a blue/green deployment.Details on the cache hit of a pipeline execution 
step.Metadata about a callback step.The location of artifacts for an AutoML candidate job.Stores the configuration information for how model candidates are generated using an AutoML job V2.The properties of an AutoML candidate job.The SageMaker Canvas application settings.Specifies the type and size of the endpoint capacity to activate for a blue/green deployment, a rolling deployment, or a rollback strategy.Configuration specifying how to treat different headers.Specifies data Model Monitor will capture.Environment parameters you want to benchmark your load test against.A list of categorical hyperparameters to tune.Defines the possible values for a categorical hyperparameter.A channel is a named input source that training algorithms can consume.Defines a named input source, called a channel, to be used by an algorithm.Contains information about the output location for managed spot training checkpoint data.The container for the metadata for the ClarifyCheck step.The configuration parameters for the SageMaker Clarify explainer.The inference configuration parameter for the model container.The configuration for the SHAP baseline (also called the background or reference dataset) of the Kernal SHAP algorithm.The configuration for SHAP analysis using SageMaker Clarify Explainer.A parameter used to configure the SageMaker Clarify explainer to treat text features as text so that explanations are provided for individual units of text.Details of an instance group in a SageMaker HyperPod cluster.The specifications of an instance group that you need to define.Details of an instance in a SageMaker HyperPod cluster.The LifeCycle configuration for a SageMaker HyperPod cluster.Details of an instance (also called a node interchangeably) in a SageMaker HyperPod cluster.Lists a summary of the properties of an instance (also called a node interchangeably) of a SageMaker HyperPod cluster.Lists a summary of the properties of a SageMaker HyperPod cluster.The Code Editor application 
settings.A Git repository that SageMaker automatically displays to users for cloning in the JupyterServer application.Specifies summary information about a Git repository.Use this parameter to configure your Amazon Cognito workforce.Identifies a Amazon Cognito user group.Configuration for your collection.Configuration information for the Amazon SageMaker Debugger output tensor collections.A summary of a model compilation job.Metadata for a Condition step.There was a conflict when you attempted to modify a SageMaker entity such as an
Experiment
orArtifact
.The configuration used to run the application image container.Describes the container, as part of model definition.A structure describing the source of a context.Lists a summary of the properties of a context.A list of continuous hyperparameters to tune.Defines the possible values for a continuous hyperparameter.A flag to indicating that automatic model tuning (AMT) has detected model convergence, defined as a lack of significant improvement (1% or less) against an objective metric.A file system, created by you, that you assign to a user profile or space for an Amazon SageMaker Domain.The settings for assigning a custom file system to a user profile or space for an Amazon SageMaker Domain.A custom SageMaker image.A customized metric.Details about the POSIX identity that is used for file system operations.Configuration to control how SageMaker captures inference data.The currently active data capture configuration used by your Endpoint.The meta data of the Glue table which serves as data catalog for theOfflineStore
.The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output.Information about the container that a data quality monitoring job runs.Configuration for monitoring constraints and monitoring statistics.The input for the data quality monitoring job.Configuration for Dataset Definition inputs.Describes the location of the channel data.Configuration information for the Amazon SageMaker Debugger hook parameters, metric and tensor collections, and storage paths.Configuration information for SageMaker Debugger rules for debugging.Information about the status of the rule evaluation.A collection of default EBS storage settings that applies to private spaces created within a domain or user profile.A collection of settings that apply to spaces created in the domain.The default storage settings for a private space.Gets the Amazon EC2 Container Registry path of the docker image of the model that is hosted in this ProductionVariant.The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.A set of recommended deployment configurations for the model.Contains information about a stage in an edge deployment plan.Contains information summarizing the deployment stage results.Information that SageMaker Neo automatically derived about the model.Specifies weight and capacity values for a production variant.Information of a particular device.Contains information summarizing device details and deployment status.Summary of the device fleet.Contains information about the configurations of selected devices.Status of devices.Summary of the device.The model deployment settings for the SageMaker Canvas application.A collection of settings that configure the domain's Docker interaction.The domain's details.A collection of settings that apply to theSageMaker Domain
.A collection ofDomain
configuration settings to update.Represents the drift check baselines that can be used when the model monitor is set using the model package.Represents the drift check bias baselines that can be used when the model monitor is set using the model package.Represents the drift check explainability baselines that can be used when the model monitor is set using the model package.Represents the drift check data quality baselines that can be used when the model monitor is set using the model package.Represents the drift check model quality baselines that can be used when the model monitor is set using the model package.An object with the recommended values for you to specify when creating an autoscaling policy.A collection of EBS storage settings that applies to private spaces.A directed edge connecting two lineage entities.Contains information about the configuration of a deployment.Contains information about the configuration of a model in a deployment.Contains information summarizing an edge deployment plan.Contains information summarizing the deployment stage results.The model on the edge device.Status of edge devices with this model.Summary of model on edge device.The output configuration.Summary of edge packaging job.The output of a SageMaker Edge Manager deployable resource.A file system, created by you in Amazon EFS, that you assign to a user profile or space for an Amazon SageMaker Domain.The settings for assigning a custom Amazon EFS file system to a user profile or space for an Amazon SageMaker Domain.The configurations and outcomes of an Amazon EMR step execution.A hosted endpoint for real-time inference.Provides summary information for an endpoint configuration.Details about a customer endpoint that was compared in an Inference Recommender job.Input object for the endpointThe endpoint configuration for the load test.The metadata of the endpoint.The endpoint configuration made by Inference Recommender during a recommendation job.The performance results from 
running an Inference Recommender job on an existing endpoint.Provides summary information for an endpoint.A list of environment parameters suggested by the Amazon SageMaker Inference Recommender.Specifies the range of environment parametersThe properties of an experiment as returned by the Search API.Associates a SageMaker job as a trial component with an experiment and trial.The source of the experiment.A summary of the properties of an experiment.Contains explainability metrics for a model.A parameter to activate explainers.The container for the metadata for Fail step.A list of features.Amazon SageMaker Feature Store stores features in a collection called Feature Group.The name, ARN,CreationTime
,FeatureGroup
values,LastUpdatedTime
andEnableOnlineStorage
status of aFeatureGroup
.The metadata for a feature.A key-value pair that you specify to describe the feature.Contains details regarding the file source.The Amazon Elastic File System storage configuration for a SageMaker image.Specifies a file system data source for a channel.A conditional statement for a search expression that includes a resource property, a Boolean operator, and a value.The best candidate result from an AutoML training job.Shows the latest objective metric emitted by a training job that was launched by a hyperparameter tuning job.Contains information about where human output will be stored.Contains summary information about the flow definition.The generative AI settings for the SageMaker Canvas application.Specifies configuration details for a Git repository in your Amazon Web Services account.Specifies configuration details for a Git repository when the repository is updated.Stores the holiday featurization attributes applicable to each item of time-series datasets during the training of a forecasting model.Any dependencies related to hub content, such as scripts, model artifacts, datasets, or notebooks.Information about hub content.Information about a hub.The Amazon S3 storage configuration of a hub.Defines under what conditions SageMaker creates a human loop.Provides information about how and under what conditions SageMaker creates a human loop.Describes the work to be performed by human workers.Container for configuring the source of human task requests.Information required for human workers to complete a labeling task.Container for human task user interface information.The configuration forHyperband
, a multi-fidelity based hyperparameter tuning strategy.Specifies which training algorithm to use for training jobs that a hyperparameter tuning job launches and the metrics to monitor.Defines a hyperparameter to be used by an algorithm.Defines the training jobs launched by a hyperparameter tuning job.The container for the summary information about a training job.The configuration for hyperparameter tuning resources for use in training jobs launched by the tuning job.A structure that contains runtime information about both current and completed hyperparameter tuning jobs.Configures a hyperparameter tuning job.The total resources consumed by your hyperparameter tuning job.Defines the objective metric for a hyperparameter tuning job.An entity returned by the SearchRecord API containing the properties of a hyperparameter tuning job.The configuration for a training job launched by a hyperparameter tuning job.The strategy hyperparameter tuning uses to find the best combination of hyperparameters for your model.Provides summary information about a hyperparameter tuning job.Specifies the configuration for a hyperparameter tuning job that uses one or more previous hyperparameter tuning jobs as a starting point.The configuration of resources, including compute instances and storage volumes for use in training jobs launched by hyperparameter tuning jobs.The IAM Identity details associated with the user.The Amazon SageMaker Canvas application setting where you configure OAuth for connecting to an external data source, such as Snowflake.A SageMaker image.The collection of settings used by an AutoML job V2 for the image classification problem type.Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).A version of a SageMakerImage
.Defines the compute resources to allocate to run a model that you assign to an inference component.Defines a container that provides the runtime environment for a model that you deploy with an inference component.Details about the resources that are deployed with this inference component.Runtime settings for a model that is deployed with an inference component.Details about the runtime settings for the model that is deployed with the inference component.Details about the resources to deploy with this inference component, including the model, container, and compute resources.Details about the resources that are deployed with this inference component.Settings that take effect while the model container starts up.A summary of the properties of an inference component.Specifies details about how containers in a multi-container endpoint are run.The Amazon S3 location and configuration for storing inference request and response data.The start and end times of an inference experiment.Lists a summary of properties of an inference experiment.The metrics for an existing endpoint compared in an Inference Recommender job.A list of recommendations made by Amazon SageMaker Inference Recommender.A structure that contains a list of recommendation jobs.A returned array object for theSteps
response field in the ListInferenceRecommendationsJobSteps API command.Defines how to perform inference generation after a training job is run.Configuration information for the infrastructure health check of a training job.Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.Defines an instance group for heterogeneous cluster training.Information on the IMDS configuration of the notebook instanceFor a hyperparameter of the integer type, specifies the range that a hyperparameter tuning job searches.Defines the possible values for an integer hyperparameter.The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app.The settings for the JupyterLab application.The JupyterServer app settings.The Amazon SageMaker Canvas application setting where you configure document querying.The KernelGateway app settings.The configuration for the file system and kernels in a SageMaker image running as a KernelGateway app.The specification of a Jupyter kernel.Provides a breakdown of the number of objects labeled.Provides counts for human-labeled tasks in the labeling job.Provides configuration information for auto-labeling of your data objects.Attributes of the data specified by the customer.Provides information about the location of input data.Provides summary information for a work team.Input configuration information for a labeling job.Specifies the location of the output produced by the labeling job.Output configuration information for a labeling job.Configure encryption on the storage volume attached to the ML compute instance used to run automated data labeling model training and inference.The Amazon S3 location of the input data objects.An Amazon SNS data source used for streaming labeling jobs.A set of conditions for stopping a labeling job.Provides summary information about a labeling job.Metadata for a Lambda step.A value that 
indicates whether the update was successful.Lists a summary of the properties of a lineage group.Defines an Amazon Cognito or your own OIDC IdP user group that is part of a work team.Metadata properties of the tracking entity, trial, or trial component.The name, value, and date and time of a metric that was emitted to Amazon CloudWatch.Information about the metric for a candidate produced by an AutoML job.Specifies a metric that the training algorithm writes tostderr
orstdout
.An object containing information about a metric.Details about the metrics source.The properties of a model as returned by the Search API.The access configuration file to control access to the ML model.Provides information about the location that is configured for storing model artifacts.Docker container image configuration object for the model bias job.The configuration for a baseline model bias job.Inputs for the model bias job.An Amazon SageMaker Model Card.The artifacts of the model card export job.Attribute by which to sort returned export jobs.The summary of the Amazon SageMaker Model Card export job.Configure the export output details for an Amazon SageMaker Model Card.Configure the security settings to protect model card data.A summary of the model card.A summary of a specific version of the model card.Configures the timeout and maximum number of retries for processing a transform job invocation.Defines the model configuration.An endpoint that hosts a model displayed in the Amazon SageMaker Model Dashboard.An alert action taken to light up an icon on the Amazon SageMaker Model Dashboard when an alert goes intoInAlert
status.A model displayed in the Amazon SageMaker Model Dashboard.The model card for a model displayed in the Amazon SageMaker Model Dashboard.A monitoring schedule for a model displayed in the Amazon SageMaker Model Dashboard.Data quality constraints and statistics for a model.Specifies the location of ML model data to deploy.Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.Provides information about the endpoint of the model deployment.Provides information to verify the integrity of stored model artifacts.Docker container image configuration object for the model explainability job.The configuration for a baseline model explainability job.Inputs for the model explainability job.The configuration for the infrastructure that the model will be deployed to.Input object for the model.The model latency threshold.Part of the search expression.One or more filters that searches for the specified resource or resources in a search.A summary of the model metadata.Contains metrics captured from a model.A versioned model that can be deployed for SageMaker inference.Describes the Docker container for the model package.A group of versioned models in the model registry.Summary information about a model group.Specifies the validation and image scan statuses of the model package.Represents the overall status of a model package.Provides summary information about a model package.Contains data, such as the inputs and targeted instance types that are used in the process of validating the model package.Specifies batch transform jobs that SageMaker runs to validate your model package.Model quality statistics and constraints.Container image configuration object for the monitoring job.Configuration for monitoring constraints and monitoring statistics.The input for the model quality monitoring job.The model registry settings for the SageMaker Canvas application.Metadata for Model steps.Provides summary information about a model.Contains 
information about the deployment options of a model.Summary of the deployment configuration of a model.A list of alert actions taken in response to an alert going intoInAlert
status.Provides summary information of an alert's history.Provides summary information about a monitor alert.Container image configuration object for the monitoring job.Configuration for monitoring constraints and monitoring statistics.Configuration for the cluster used to run model monitoring jobs.The constraints resource for a monitoring job.Represents the CSV dataset format used when running a monitoring job.Represents the dataset format used when running a monitoring job.Summary of information about the last monitoring job to run.The ground truth labels for the dataset used for the monitoring job.The inputs for a monitoring job.Defines the monitoring job.Summary information about a monitoring job.Represents the JSON dataset format used when running a monitoring job.The networking configuration for the monitoring job.The output object for a monitoring job.The output configuration for monitoring jobs.Represents the Parquet dataset format used when running a monitoring job.Identifies the resources to deploy for a monitoring job.Information about where and how you want to store the results of a monitoring job.A schedule for a model monitoring job.Configures the monitoring schedule and defines the monitoring job.Summarizes the monitoring schedule.The statistics resource for a monitoring job.A time limit for how long the monitoring job is allowed to run before stopping.Specifies additional configuration for hosting multi-model endpoints.The VpcConfig configuration object that specifies the VPC that you want the compilation jobs to connect to.A list of nested Filter objects.Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.Provides a summary of a notebook instance lifecycle configuration.Contains the notebook instance lifecycle configuration script.Provides summary information for an 
SageMaker notebook instance.Configures Amazon SNS notifications of available or expiring work items for work teams.Specifies the number of training jobs that this hyperparameter tuning job launched, categorized by the status of their objective metric.The configuration of anOfflineStore
.The status ofOfflineStore
.Use this parameter to configure your OIDC Identity Provider (IdP).Your OIDC IdP workforce configuration.A list of user groups that exist in your OIDC Identity Provider (IdP).Use this to specify the Amazon Web Services Key Management Service (KMS) Key ID, orKMSKeyId
, for at rest data encryption.Updates the feature group online store configuration.The security configuration forOnlineStore
.Contains information about the output location for the compiled model and the target device that the model runs on.Provides information about how to store model training results (model artifacts).An output parameter of a pipeline step.The collection of ownership settings for a space.Specifies summary information about the ownership settings.Configuration that controls the parallelism of the pipeline.Assigns a value to a named Pipeline parameter.Defines the possible values for categorical, continuous, and integer hyperparameters to be used by an algorithm.Specifies ranges of integer, continuous, and categorical hyperparameters that a hyperparameter tuning job searches.The trial that a trial component is associated with and the experiment the trial is part of.A previously completed or stopped hyperparameter tuning job to be used as a starting point for a new hyperparameter tuning job.The summary of an in-progress deployment when an endpoint is creating or updating with a new endpoint configuration.The production variant summary for a deployment when an endpoint is creating or updating with the CreateEndpoint or UpdateEndpoint operations.Defines the traffic pattern.A SageMaker Model Building Pipeline instance.The location of the pipeline definition stored in Amazon S3.An execution of a pipeline.An execution of a step in a pipeline.Metadata for a step execution.A pipeline execution summary.Specifies the names of the experiment and trial created by a pipeline.A summary of a pipeline.A specification for a predefined metric.Configuration for the cluster used to run a processing job.Configuration for processing job outputs in Amazon SageMaker Feature Store.The inputs for a processing job.An Amazon SageMaker processing job that is used to analyze data and evaluate models.Metadata for a processing job step.Summary of information about a processing job.Describes the results of a processing job.Configuration for uploading output from the processing container.Identifies the 
resources, ML compute instances, and ML storage volumes to deploy for a processing job.Configuration for downloading input data from Amazon S3 into the processing container.Configuration for uploading output data to Amazon S3 from the processing container.Configures conditions under which the processing job should be stopped, such as how long the processing job has been running.Identifies a model that you want to host and the resources chosen to deploy for hosting it.Specifies configuration for a core dump from the model container when the process crashes.Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.Specifies the serverless configuration for an endpoint variant.Specifies the serverless update concurrency configuration for an endpoint variant.Describes the status of the production variant.Describes weight and capacities for a production variant associated with an endpoint.Configuration information for Amazon SageMaker Debugger system monitoring, framework profiling, and storage paths.Configuration information for updating the Amazon SageMaker Debugger profile parameters, system and framework metrics configurations, and storage paths.Configuration information for profiling rules.Information about the status of the rule evaluation.The properties of a project as returned by the Search API.Information about a project.Part of theSuggestionQuery
type.A property name returned from aGetSearchSuggestions
call that specifies a value in thePropertyNameQuery
field.A key value pair used when you provision a project as a service catalog product.Defines the amount of money paid to an Amazon Mechanical Turk worker for each task performed.Container for the metadata for a Quality check step.A set of filters to narrow the set of lineage entities connected to theStartArn
(s) returned by theQueryLineage
API action.The infrastructure configuration for deploying the model to a real-time inference endpoint.The recommended configuration to use for Real-Time Inference.Provides information about the output configuration for the compiled model.Specifies mandatory fields for running an Inference Recommender job directly in the CreateInferenceRecommendationsJob API.The details for a specific benchmark from an Inference Recommender job.The input configuration of the recommendation job.Provides information about the output configuration for the compiled model.The configuration for the payload for a recommendation job.Specifies the maximum number of jobs that can run in parallel and the maximum number of jobs that can run.Specifies conditions for stopping a job.Inference Recommender provisions SageMaker endpoints with access to VPC in the inference recommendation job.The metrics of recommendations.Configuration for Redshift Dataset Definition input.The compression used for Redshift query results.The data storage format for Redshift query results.Metadata for a register model job step.Configuration for remote debugging for the CreateTrainingJob API.Configuration for remote debugging for the UpdateTrainingJob API.Contains input values for a task.A description of an error that occurred while rendering the template.Specifies an authentication configuration for the private docker registry where your model image is hosted.The resolved attributes.A resource catalog containing all of the resources of a specific resource type within a resource owner account.Describes the resources, including machine learning (ML) compute instances and ML storage volumes, to use for model training.TheResourceConfig
to updateKeepAlivePeriodInSeconds
.Resource being accessed is in use.You have exceeded an SageMaker resource limit.Specifies the maximum number of training jobs and parallel training jobs that a hyperparameter tuning job can launch.Resource being access is not found.Specifies the ARN's of a SageMaker image and SageMaker image version, and the instance type that the version runs on.The retention policy for data stored on an Amazon Elastic File System volume.The retry strategy to use when a training job fails due to anInternalServerError
.Specifies a rolling deployment strategy for updating a SageMaker endpoint.A collection of settings that apply to an RSessionGateway app.A collection of settings that configure user interaction with the RStudioServerPro app.A collection of settings that configure the RStudioServerPro Domain-level app.A collection of settings that update the current configuration for the RStudioServerPro Domain-level app.Describes the S3 data source.Specifies the S3 location of ML model data to deploy.The Amazon Simple Storage Service (Amazon S3) location and security configuration for OfflineStore
.An object containing a recommended scaling policy.The metric for a scaling policy.An object where you specify the anticipated traffic pattern for an endpoint.Configuration details about the monitoring schedule.A multi-expression that searches for the specified resource or resources in a search.A single resource returned as part of the Search API response.An array element of SecondaryStatusTransitions for DescribeTrainingJob.A step selected to run in selective execution mode.The selective execution configuration applied to the pipeline run.The ARN from an execution of the current pipeline.Details of a provisioned service catalog product.Details that you specify to provision a service catalog product.Details that you specify to provision a service catalog product.The configuration of the ShadowMode inference experiment type, which specifies a production variant to take all the inference requests, and a shadow variant to which Amazon SageMaker replicates a percentage of the inference requests.The name and sampling percentage of a shadow variant.Specifies options for sharing Amazon SageMaker Studio notebooks.A configuration for a shuffle option for input data in a channel.Specifies an algorithm that was used to create the model package.A list of algorithms that were used to create a model package.A list of IP address ranges (CIDRs).The application settings for a Code Editor space.The space's details.The settings for the JupyterLab application within a space.A collection of space settings.Specifies summary information about the space settings.A collection of space sharing settings.Specifies summary information about the space sharing settings.The storage settings for a private space.Defines the stairs traffic pattern for an Inference Recommender load test.Specifies a limit to how long a model training job or model compilation job can run.Details of the Amazon SageMaker Studio Lifecycle Configuration.Describes a work team of a vendor that does the labeling job.Specified in the GetSearchSuggestions request.The collection of settings used by an AutoML job V2 for the tabular problem type.The resolved attributes specific to the tabular problem type.A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators.A target tracking scaling policy.The TensorBoard app settings.Configuration of storage locations for the Amazon SageMaker Debugger TensorBoard output data.The collection of settings used by an AutoML job V2 for the text classification problem type.The collection of settings used by an AutoML job V2 for the text generation problem type.The resolved attributes specific to the 
text generation problem type.Used to set feature group throughput configuration.Active throughput configuration of the feature group.The new throughput configuration for the feature group.The collection of components that defines the time-series.The collection of settings used by an AutoML job V2 for the time-series forecasting problem type.Time series forecast settings for the SageMaker Canvas application.Transformations allowed on the dataset.Defines the traffic pattern of the load test.Defines the traffic routing strategy during an endpoint deployment to shift traffic from the old fleet to the new fleet.The configuration to use an image from a private Docker registry for a training job.The training input mode that the algorithm supports.Contains information about a training job.Defines the input needed to run a training job using the algorithm.The numbers of training jobs launched by a hyperparameter tuning job, categorized by status.Metadata for a training job step.Provides summary information about a training job.An object containing authentication information for a private Docker registry.Defines how the algorithm is used for a training job.Describes the location of the channel data.Describes the input source of a transform job and the way the transform job consumes it.A batch transform job.Defines the input needed to run a transform job using the inference specification specified in the algorithm.Metadata for a transform job step.Provides a summary of a transform job.Describes the results of a transform job.Describes the resources, including ML instance types and ML instance count, to use for a transform job.Describes the S3 data source.The properties of a trial as returned by the Search API.The properties of a trial component as returned by the Search API.Represents an input or output artifact of a trial component.A summary of the metrics of a trial component.The value of a hyperparameter.A short summary of a trial component.The Amazon Resource Name (ARN) and job type of the source of a trial component.Detailed information about the source of a trial component.The status of the trial component.A summary of the properties of a trial component.The source of the trial.A summary of the properties of a trial.Time to live duration, where the record is hard deleted after the expiration time is reached; ExpiresAt = EventTime + TtlDuration.The job completion criteria.Metadata for a tuning step.Provides configuration information for the worker UI for a labeling job.The Liquid template for the worker user interface.Container for user interface template information.Represents an amount of money in United States dollars.Information about the user who created or modified an experiment, trial, trial component, lineage group, project, or model card.The user profile details.A collection of settings that apply to users in a domain.Specifies a production variant property type for an Endpoint.Configuration for your vector collection type.A lineage entity connected to the starting entity(ies).The list of key-value pairs used to filter your search results.Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.Status and billing information about the warm pool.A single private workforce, which is automatically created when you create your first private work team.The VPC object you use to create or update a workforce.A VpcConfig object that specifies the VPC that you want your workforce to connect to.The workspace settings for the SageMaker Canvas application.Provides details about a labeling work team.
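The TtlDuration expiry rule above (ExpiresAt = EventTime + TtlDuration) can be sketched with plain java.time arithmetic; the timestamp and the four-hour duration below are illustrative values, not SDK defaults:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch, not SDK code: a Feature Store record written at
// EventTime with a TtlDuration is hard deleted once ExpiresAt is reached.
public class TtlExample {

    // ExpiresAt = EventTime + TtlDuration
    static Instant expiresAt(Instant eventTime, Duration ttlDuration) {
        return eventTime.plus(ttlDuration);
    }

    public static void main(String[] args) {
        Instant eventTime = Instant.parse("2024-01-01T00:00:00Z"); // hypothetical EventTime
        Duration ttl = Duration.ofHours(4);                        // hypothetical TtlDuration
        System.out.println(expiresAt(eventTime, ttl));
    }
}
```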