Package-level declarations
Types
The request is denied because of missing access permissions.
Information about the agreement availability.
The configuration details of an automated evaluation job. The EvaluationDatasetMetricConfig object is used to specify the prompt datasets, task type, and metric names.
Defines the configuration of custom metrics to be used in an evaluation job. To learn more about using custom metrics in Amazon Bedrock evaluation jobs, see Create a prompt for custom metrics (LLM-as-a-judge model evaluations) and Create a prompt for custom metrics (RAG evaluations).
An array item defining a single custom metric for use in an Amazon Bedrock evaluation job.
A JSON array that provides the status of the evaluation jobs being deleted.
An evaluation job marked for deletion, and its current status.
The evaluator model used in a knowledge base evaluation job, or in a model evaluation job that uses a model as a judge. This model computes all evaluation-related metrics.
Base class for all service-related exceptions thrown by the Bedrock client.
Contains the document held in the wrapper object, along with its attributes/fields.
CloudWatch logging configuration.
An error occurred because of a conflict while performing an operation.
A model customization configuration.
Defines the model you want to evaluate custom metrics in an Amazon Bedrock evaluation job.
The definition of a custom metric for use in an Amazon Bedrock evaluation job. A custom metric definition includes a metric name, prompt (instructions) and optionally, a rating scale. Your prompt must include a task description and input variables. The required input variables are different for model-as-a-judge and RAG evaluations.
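A custom metric definition with a rating scale can be sketched with the SDK's DSL builders. Property names follow the generated Kotlin builder pattern, and the metric name, prompt, and scale values are illustrative only; verify the exact shapes against the package's API reference:

```kotlin
import aws.sdk.kotlin.services.bedrock.model.CustomMetricDefinition
import aws.sdk.kotlin.services.bedrock.model.RatingScaleItem
import aws.sdk.kotlin.services.bedrock.model.RatingScaleItemValue

// A custom metric: name, judge prompt with input variables, and an
// optional two-point rating scale.
val metric = CustomMetricDefinition {
    name = "Helpfulness"
    instructions = "Rate how helpful the {{prediction}} is as a response to {{prompt}}."
    ratingScale = listOf(
        RatingScaleItem {
            definition = "Helpful"
            value = RatingScaleItemValue.FloatValue(1.0f)
        },
        RatingScaleItem {
            definition = "Not helpful"
            value = RatingScaleItemValue.FloatValue(0.0f)
        }
    )
}
```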
Configuration of the evaluator model you want to use to evaluate custom metrics in an Amazon Bedrock evaluation job.
Contains summary information about a custom model deployment, including its ARN, name, status, and associated custom model.
Summary information for a custom model.
A CustomModelUnit (CMU) is an abstract view of the hardware utilization that Amazon Bedrock needs to host a single copy of your custom model. A model copy represents a single instance of your imported model that is ready to serve inference requests. Amazon Bedrock determines the number of custom model units that a model copy needs when you import the custom model.
For a Distillation job, the status details for the data processing sub-task of the job.
Dimensional price rate.
Settings for distilling a foundation model into a smaller and more efficient model.
Specifies the configuration for the endpoint.
Contains the ARN of the Amazon Bedrock model or inference profile specified in your evaluation job. Each Amazon Bedrock model supports different inferenceParams. To learn more about supported inference parameters for Amazon Bedrock models, see Inference parameters for foundation models.
The configuration details of either an automated or human-based evaluation job.
Used to specify the name of a built-in prompt dataset and optionally, the Amazon S3 bucket where a custom prompt dataset is saved.
The location in Amazon S3 where your prompt dataset is stored.
Defines the prompt datasets, built-in metric names and custom metric names, and the task type.
The configuration details of the inference model for an evaluation job.
Identifies the models, Knowledge Bases, or other RAG sources evaluated in a model or Knowledge Base evaluation job.
Defines the models used in the model evaluation job.
A summary of the models used in an Amazon Bedrock model evaluation job. These resources can be models in Amazon Bedrock or models outside of Amazon Bedrock that you use to generate your own inference response data.
The Amazon S3 location where the results of your evaluation job are saved.
A summary of a model used for a model evaluation job where you provide your own inference response data.
A summary of a RAG source used for a Knowledge Base evaluation job where you provide your own inference response data.
A summary of a RAG source used for a retrieve-and-generate Knowledge Base evaluation job where you provide your own inference response data.
A summary of a RAG source used for a retrieve-only Knowledge Base evaluation job where you provide your own inference response data.
A summary of the RAG resources used in an Amazon Bedrock Knowledge Base evaluation job. These resources can be Knowledge Bases in Amazon Bedrock or RAG sources outside of Amazon Bedrock that you use to generate your own inference response data.
Summary information of an evaluation job.
Specifies the model configuration for the evaluator model. EvaluatorModelConfig is required for evaluation jobs that use a knowledge base, or for model evaluation jobs that use a model as a judge. This model computes all evaluation-related metrics.
The unique external source of the content contained in the wrapper object.
The response generation configuration of the external source wrapper object.
The configuration of the external source wrapper object in the retrieveAndGenerate function.
Specifies a field to be used during the reranking process in a Knowledge Base vector search. This structure identifies metadata fields that should be considered when reordering search results to improve relevance.
Specifies the name of the metadata attribute/field to apply filters to. The name must match the attribute/field name in your data source/document metadata.
Information about a foundation model.
Details about whether a model version is available or deprecated.
Summary information for a foundation model.
The configuration details for response generation based on retrieved text chunks.
The configuration details for the guardrail.
Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
The tier that your guardrail uses for content filters.
The tier that your guardrail uses for content filters. Consider using a tier that balances performance, accuracy, and compatibility with your existing generative AI workflows.
Contains details about how to handle harmful content.
Contains details about how to handle harmful content.
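As a sketch, a single harmful-content filter entry can be built with the SDK's DSL builders; property and sealed-class enum names here follow the generated Kotlin builder pattern and should be checked against the package's API reference:

```kotlin
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterType
import aws.sdk.kotlin.services.bedrock.model.GuardrailFilterStrength

// One content filter: strengths are set separately for user inputs
// and FM-generated outputs.
val filter = GuardrailContentFilterConfig {
    type = GuardrailContentFilterType.Hate
    inputStrength = GuardrailFilterStrength.High
    outputStrength = GuardrailFilterStrength.Medium
}
```

A guardrail's content policy configuration takes a list of such filters, one per harmful-content category.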
The details for the guardrails contextual grounding filter.
The filter configuration details for the guardrails contextual grounding filter.
The details for the guardrails contextual grounding policy.
The policy configuration details for the guardrails contextual grounding policy.
The system-defined guardrail profile that you're using with your guardrail. Guardrail profiles define the destination Amazon Web Services Regions where guardrail inference requests can be automatically routed. Using guardrail profiles helps maintain guardrail performance and reliability when demand increases.
Contains details about the system-defined guardrail profile that you're using with your guardrail for cross-Region inference.
The managed word list that was configured for the guardrail. (This is a list of words that are pre-defined and managed by guardrails only.)
The managed word list to configure for the guardrail.
The PII entity configured for the guardrail.
The PII entity to configure for the guardrail.
The regular expression configured for the guardrail.
The regular expression to configure for the guardrail.
Contains details about PII entities and regular expressions configured for the guardrail.
Contains details about PII entities and regular expressions to configure for the guardrail.
Contains details about a guardrail.
Details about topics for the guardrail to identify and deny.
Details about topics for the guardrail to identify and deny.
Contains details about topics that the guardrail should identify and deny.
Contains details about topics that the guardrail should identify and deny.
The tier that your guardrail uses for denied topic filters.
The tier that your guardrail uses for denied topic filters. Consider using a tier that balances performance, accuracy, and compatibility with your existing generative AI workflows.
A word configured for the guardrail.
A word to configure for the guardrail.
Contains details about the word policy configured for the guardrail.
Contains details about the word policy to configure for the guardrail.
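A word policy combines custom words with managed word lists. The sketch below uses the generated Kotlin builder pattern; the blocked phrase is a placeholder, and names should be verified against the package's API reference:

```kotlin
import aws.sdk.kotlin.services.bedrock.model.GuardrailManagedWordsConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailManagedWordsType
import aws.sdk.kotlin.services.bedrock.model.GuardrailWordConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailWordPolicyConfig

// A word policy with one custom blocked phrase plus the
// guardrails-managed profanity list.
val wordPolicy = GuardrailWordPolicyConfig {
    wordsConfig = listOf(
        GuardrailWordConfig { text = "example-blocked-phrase" }
    )
    managedWordListsConfig = listOf(
        GuardrailManagedWordsConfig { type = GuardrailManagedWordsType.Profanity }
    )
}
```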
Specifies the custom metrics, how tasks will be rated, the flow definition ARN, and your custom prompt datasets. Model evaluation jobs that use human workers support only custom prompt datasets. To learn more about custom prompt datasets and the required format, see Custom prompt datasets.
In a model evaluation job that uses human workers, you must define the name of the metric, how you want that metric rated (ratingMethod), and optionally a description of the metric.
Contains the SageMakerFlowDefinition object. The object is used to specify the prompt dataset, task type, rating method, and metric names.
Configuration for implicit filtering in Knowledge Base vector searches. Implicit filtering allows you to automatically filter search results based on metadata attributes without requiring explicit filter expressions in each query.
Information about the imported model.
Contains information about a model.
Contains information about the model or system-defined inference profile that is the source for an inference profile.
Contains information about an inference profile.
An internal server error occurred. Retry your request.
Settings for using invocation logs to customize a model.
A storage location for invocation logs.
Contains configuration details of the inference for knowledge base retrieval and response generation.
The configuration details for retrieving information from a knowledge base and generating responses.
Contains configuration details for retrieving information from a knowledge base.
Contains configuration details for retrieving information from a knowledge base and generating responses.
The configuration details for returning the results from the knowledge base vector search.
Configuration fields for invocation logging.
Contains details about an endpoint for a model from Amazon Bedrock Marketplace.
Provides a summary of an endpoint for a model from Amazon Bedrock Marketplace.
Defines the schema for a metadata attribute used in Knowledge Base vector searches. Metadata attributes provide additional context for documents and can be used for filtering and reranking search results.
Configuration for how metadata should be used during the reranking process in Knowledge Base vector searches. This determines which metadata fields are included or excluded when reordering search results.
Contains details about each model copy job.
Information about one customization job.
The data source of the model to import.
Information about the import job.
Details about the location of the input to the batch inference job.
Contains the configuration of the S3 location of the output data.
Contains the configuration of the S3 location of the input data.
Contains the configuration of the S3 location of the output data.
A summary of a batch inference job.
The configuration details for the model to process the prompt prior to retrieval and response generation.
S3 location of the output data.
Contains performance settings for a model.
Describes the usage-based pricing term.
Details about a prompt router.
The target model for a prompt router.
The template for the prompt that's sent to the model for response generation.
A summary of information about a Provisioned Throughput.
The configuration details for transforming the prompt.
Defines the value and corresponding definition for one rating in a custom metric rating scale.
Defines the value for one rating in a custom metric rating scale.
A mapping of a metadata key to a value that it should or should not equal.
Rules for filtering invocation logs. A filter can be a mapping of a metadata key to a value that it should or should not equal (a base filter), or a list of base filters that are all applied with AND or OR logical operators.
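For example, a compound filter that keeps only records matching two metadata keys could be sketched as follows; the metadata keys are hypothetical, and the builder/property names follow the generated Kotlin pattern, so confirm them against the package's API reference:

```kotlin
import aws.sdk.kotlin.services.bedrock.model.RequestMetadataBaseFilters
import aws.sdk.kotlin.services.bedrock.model.RequestMetadataFilters

// Keep invocation-log records whose request metadata marks them as
// production AND as English (two base filters joined with AND).
val filters = RequestMetadataFilters {
    andAll = listOf(
        RequestMetadataBaseFilters { equals = mapOf("environment" to "production") },
        RequestMetadataBaseFilters { equals = mapOf("language" to "en") }
    )
}
```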
Configuration for selectively including or excluding metadata fields during the reranking process. This allows you to control which metadata attributes are considered when reordering search results.
The specified resource Amazon Resource Name (ARN) was not found. Check the ARN and try your request again.
Specifies the filters to use on the metadata attributes/fields in the knowledge base data sources before returning results.
Contains configuration details for a knowledge base retrieval and response generation.
The configuration details for retrieving information from a knowledge base.
Routing criteria for a prompt router.
The Amazon S3 data source of the model to import.
The unique wrapper object of the document from the S3 location.
Specifies the configuration for an Amazon SageMaker endpoint.
The number of requests exceeds the service quota. Resubmit your request later.
Returned if the service cannot complete the request.
For a Distillation job, the status details for sub-tasks of the job. Possible statuses for each sub-task include the following:
Describes a support term.
Details about a teacher model used for model customization.
Describes the usage terms of an offer.
The configuration details for text generation using a language model via the RetrieveAndGenerate function.
The number of requests exceeds the limit. Resubmit your request later.
The request contains more tags than can be associated with a resource (50 tags per resource). The maximum number of tags includes both existing tags and those included in your current request.
S3 location of the training data.
For a Distillation job, the status details for the training sub-task of the job.
Metrics associated with the custom job.
Array of up to 10 validators.
For a Distillation job, the status details for the validation sub-task of the job.
Input validation failed. Check your request parameters and retry the request.
The metric for the validator.
Describes the validity terms.
Configuration for using Amazon Bedrock foundation models to rerank Knowledge Base vector search results. This enables more sophisticated relevance ranking using large language models.
Configuration for the Amazon Bedrock foundation model used for reranking vector search results. This specifies which model to use and any additional parameters required by the model.
Configuration for reranking vector search results to improve relevance. Reranking applies additional relevance models to reorder the initial vector search results based on more sophisticated criteria.
The configuration of a virtual private cloud (VPC). For more information, see Protect your data using Amazon Virtual Private Cloud and Amazon Web Services PrivateLink.
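A minimal VPC configuration can be sketched as below; the subnet and security-group IDs are placeholders, and the builder names follow the generated Kotlin pattern:

```kotlin
import aws.sdk.kotlin.services.bedrock.model.VpcConfig

// VPC settings: substitute your own subnet and security-group IDs.
val vpc = VpcConfig {
    subnetIds = listOf("subnet-0abc1234def567890")
    securityGroupIds = listOf("sg-0abc1234def567890")
}
```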