Package-level declarations
Types
The request is denied because of missing access permissions.
Information about the agreement availability
The configuration details of an automated evaluation job. The EvaluationDatasetMetricConfig object is used to specify the prompt datasets, task type, and metric names.
Defines the configuration of custom metrics to be used in an evaluation job. To learn more about using custom metrics in Amazon Bedrock evaluation jobs, see Create a prompt for custom metrics (LLM-as-a-judge model evaluations) and Create a prompt for custom metrics (RAG evaluations).
An array item defining a single custom metric for use in an Amazon Bedrock evaluation job.
Represents the result of an Automated Reasoning validation check, indicating whether the content is logically valid, invalid, or falls into other categories based on the policy rules.
Indicates that no valid claims can be made due to logical contradictions in the premises or rules.
References a portion of the original input text that corresponds to logical elements.
Indicates that the claims are logically false and contradictory to the established rules or premises.
Identifies logical issues in the translated statements that exist independent of any policy rules, such as statements that are always true or always false.
Indicates that no relevant logical information could be extracted from the input for validation.
References a specific automated reasoning policy rule that was applied during evaluation.
Indicates that the claims could be either true or false depending on additional assumptions not provided in the input.
Represents a logical scenario where claims can be evaluated as true or false, containing specific logical assignments.
Indicates that the input exceeds the processing capacity due to the volume or complexity of the logical information.
Contains the logical translation of natural language input into formal logical statements, including premises, claims, and confidence scores.
Indicates that the input has multiple valid logical interpretations, requiring additional context or clarification.
Represents one possible logical interpretation of ambiguous input content.
Indicates that the claims are definitively true and logically implied by the premises, with no possible alternative interpretations.
Represents a logical statement that can be expressed both in formal logic notation and natural language, providing dual representations for better understanding and validation.
An annotation for adding a new rule to an Automated Reasoning policy using a formal logical expression.
An annotation for adding a new rule to the policy by converting a natural language description into a formal logical expression.
A mutation operation that adds a new rule to the policy definition during the build process.
An annotation for adding a new custom type to an Automated Reasoning policy, defining a set of possible values for variables.
A mutation operation that adds a new custom type to the policy definition during the build process.
Represents a single value that can be added to an existing custom type in the policy.
An annotation for adding a new variable to an Automated Reasoning policy, which can be used in rule expressions.
A mutation operation that adds a new variable to the policy definition during the build process.
Contains the various operations that can be performed on an Automated Reasoning policy, including adding, updating, and deleting rules, variables, and types.
Contains detailed logging information about the policy build process, including steps taken, decisions made, and any issues encountered.
Represents a single entry in the policy build log, containing information about a specific step or event in the build process.
Contains the various assets generated during a policy build workflow, including logs, quality reports, and the final policy definition.
Represents a single step in the policy build process, containing context about what was being processed and any messages or results.
Provides context about what type of operation was being performed during a build step.
Represents a message generated during a build step, providing information about what happened or any issues encountered.
Represents a source document used in the policy build workflow, containing the content and metadata needed for policy generation.
Contains content and instructions for repairing or improving an existing Automated Reasoning policy.
Defines the source content for a policy build workflow, which can include documents, repair instructions, or other input materials.
Provides a summary of a policy build workflow, including its current status, timing information, and key identifiers.
Contains the formal logic rules, variables, and custom variable types that define an Automated Reasoning policy. The policy definition specifies the constraints used to validate foundation model responses for accuracy and logical consistency.
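The relationship between rules, variables, and custom types in a policy definition can be pictured as a small structured document. The sketch below is purely illustrative; the field names are assumptions chosen to mirror the concepts above, not the exact service schema:

```python
# Illustrative sketch of an Automated Reasoning policy definition.
# Field names here are assumptions for illustration, not the exact
# service schema; consult the API reference for the real shapes.
policy_definition = {
    "types": [
        {
            "name": "TicketStatus",
            "description": "Lifecycle state of a support ticket.",
            "values": [
                {"value": "OPEN", "description": "Ticket awaiting triage."},
                {"value": "CLOSED", "description": "Ticket resolved."},
            ],
        }
    ],
    "variables": [
        {"name": "status", "type": "TicketStatus",
         "description": "Current ticket status."}
    ],
    "rules": [
        # Rules are formal if-then constraints over the declared variables.
        {"id": "R1", "expression": "status = CLOSED => resolutionNoteProvided"}
    ],
}

# Every variable should reference a declared custom type (or a built-in one).
declared_types = {t["name"] for t in policy_definition["types"]}
assert all(v["type"] in declared_types for v in policy_definition["variables"])
```

The point of the structure is that rules never mention raw strings from documents; they reference declared variables, whose custom types constrain the values natural language translation may assign.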
Represents a single element in an Automated Reasoning policy definition, such as a rule, variable, or type definition.
Provides a comprehensive analysis of the quality and completeness of an Automated Reasoning policy definition, highlighting potential issues and optimization opportunities.
Represents a formal logic rule in an Automated Reasoning policy. For example, rules can be expressed as if-then statements that define logical constraints.
Represents a custom user-defined variable type in an Automated Reasoning policy. Types are enum-based and provide additional context beyond predefined variable types.
Represents a single value within a custom type definition, including its identifier and description.
Associates a type name with a specific value name, used for referencing type values in rules and other policy elements.
Represents a variable in an Automated Reasoning policy. Variables represent concepts that can have values assigned during natural language translation.
An annotation for removing a rule from an Automated Reasoning policy.
A mutation operation that removes a rule from the policy definition during the build process.
An annotation for removing a custom type from an Automated Reasoning policy.
A mutation operation that removes a custom type from the policy definition during the build process.
Represents a value to be removed from an existing custom type in the policy.
An annotation for removing a variable from an Automated Reasoning policy.
A mutation operation that removes a variable from the policy definition during the build process.
Represents a set of rules that operate on completely separate variables, indicating they address different concerns or domains within the policy.
An annotation for processing and incorporating new content into an Automated Reasoning policy.
A container for various mutation operations that can be applied to an Automated Reasoning policy, including adding, updating, and deleting policy elements.
Represents the planning phase of a policy build workflow, where the system analyzes source content and determines what operations to perform.
Represents a test scenario used to validate an Automated Reasoning policy, including the test conditions and expected outcomes.
Contains summary information about an Automated Reasoning policy, including metadata and timestamps.
Represents a test for validating an Automated Reasoning policy. Tests contain sample inputs and expected outcomes to verify policy behavior.
Contains the results of testing an Automated Reasoning policy against various scenarios and validation checks.
An annotation for managing values within custom types, including adding, updating, or removing specific type values.
An annotation for updating the policy based on feedback about how specific rules performed during testing or real-world usage.
An annotation for updating the policy based on feedback about how it performed on specific test scenarios.
An annotation for modifying an existing rule in an Automated Reasoning policy.
A mutation operation that modifies an existing rule in the policy definition during the build process.
An annotation for modifying an existing custom type in an Automated Reasoning policy.
A mutation operation that modifies an existing custom type in the policy definition during the build process.
Represents a modification to a value within an existing custom type.
An annotation for modifying an existing variable in an Automated Reasoning policy.
A mutation operation that modifies an existing variable in the policy definition during the build process.
Defines the content and configuration for different types of policy build workflows.
A JSON array that provides the status of the evaluation jobs being deleted.
An evaluation job marked for deletion, and its current status.
The evaluator model used in a knowledge base evaluation job or in a model evaluation job that uses a model as a judge. This model computes all evaluation-related metrics.
Base class for all service-related exceptions thrown by the Bedrock client.
Contains the document in the wrapper object, along with its attributes/fields.
CloudWatch logging configuration.
Error occurred because of a conflict while performing an operation.
A model customization configuration.
Defines the model you want to evaluate custom metrics in an Amazon Bedrock evaluation job.
The definition of a custom metric for use in an Amazon Bedrock evaluation job. A custom metric definition includes a metric name, prompt (instructions) and optionally, a rating scale. Your prompt must include a task description and input variables. The required input variables are different for model-as-a-judge and RAG evaluations.
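A custom metric definition bundles the metric name, the judge prompt, and an optional rating scale. The sketch below shows the general shape as accepted by boto3's bedrock client; treat the exact field names and rating-scale format as assumptions to verify against the current API reference:

```python
# Sketch of a custom metric definition for an evaluation job. The shape
# mirrors the Bedrock API (boto3 bedrock client), but treat the details
# as assumptions and check the current reference before relying on them.
custom_metric = {
    "customMetricDefinition": {
        "name": "Helpfulness",
        # The prompt (instructions) must include a task description and
        # the required input variables, which differ between
        # model-as-a-judge and RAG evaluations.
        "instructions": (
            "Rate how helpful the response in {{prediction}} is "
            "for the request in {{prompt}}."
        ),
        # Optional rating scale: each entry pairs a definition with a value.
        "ratingScale": [
            {"definition": "Not helpful", "value": {"floatValue": 0.0}},
            {"definition": "Very helpful", "value": {"floatValue": 1.0}},
        ],
    }
}
```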
Configuration of the evaluator model you want to use to evaluate custom metrics in an Amazon Bedrock evaluation job.
Contains summary information about a custom model deployment, including its ARN, name, status, and associated custom model.
Summary information for a custom model.
A CustomModelUnit (CMU) is an abstract view of the hardware utilization that Amazon Bedrock needs to host a single copy of your custom model. A model copy represents a single instance of your imported model that is ready to serve inference requests. Amazon Bedrock determines the number of custom model units that a model copy needs when you import the custom model.
For a Distillation job, the status details for the data processing sub-task of the job.
Dimensional price rate.
Settings for distilling a foundation model into a smaller and more efficient model.
Specifies the configuration for the endpoint.
Contains the ARN of the Amazon Bedrock model or inference profile specified in your evaluation job. Each Amazon Bedrock model supports different inferenceParams. To learn more about supported inference parameters for Amazon Bedrock models, see Inference parameters for foundation models.
The configuration details of either an automated or human-based evaluation job.
Used to specify the name of a built-in prompt dataset and optionally, the Amazon S3 bucket where a custom prompt dataset is saved.
The location in Amazon S3 where your prompt dataset is stored.
Defines the prompt datasets, built-in metric names and custom metric names, and the task type.
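One such config entry ties a prompt dataset to a task type and the metric names to compute. The sketch below follows the Bedrock API shape as exposed through boto3; verify the field names against the current reference:

```python
# Sketch of an EvaluationDatasetMetricConfig entry: one prompt dataset,
# the task type, and the metrics to compute over it. Field names mirror
# the Bedrock API; confirm against the current reference before use.
dataset_metric_config = {
    "taskType": "Summarization",
    "dataset": {
        "name": "my-prompts",
        # Omit datasetLocation to use a built-in prompt dataset instead
        # of a custom one stored in Amazon S3.
        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/prompts.jsonl"},
    },
    "metricNames": ["Builtin.Accuracy", "Builtin.Robustness"],
}
```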
The configuration details of the inference model for an evaluation job.
Identifies the models, Knowledge Bases, or other RAG sources evaluated in a model or Knowledge Base evaluation job.
Defines the models used in the model evaluation job.
A summary of the models used in an Amazon Bedrock model evaluation job. These resources can be models in Amazon Bedrock or models outside of Amazon Bedrock that you use to generate your own inference response data.
The Amazon S3 location where the results of your evaluation job are saved.
A summary of a model used for a model evaluation job where you provide your own inference response data.
A summary of a RAG source used for a Knowledge Base evaluation job where you provide your own inference response data.
A summary of a RAG source used for a retrieve-and-generate Knowledge Base evaluation job where you provide your own inference response data.
A summary of a RAG source used for a retrieve-only Knowledge Base evaluation job where you provide your own inference response data.
A summary of the RAG resources used in an Amazon Bedrock Knowledge Base evaluation job. These resources can be Knowledge Bases in Amazon Bedrock or RAG sources outside of Amazon Bedrock that you use to generate your own inference response data.
Summary information of an evaluation job.
Specifies the model configuration for the evaluator model. EvaluatorModelConfig is required for evaluation jobs that use a knowledge base or for model evaluation jobs that use a model as a judge. This model computes all evaluation-related metrics.
The unique external source of the content contained in the wrapper object.
The response generation configuration of the external source wrapper object.
The configuration of the external source wrapper object in the retrieveAndGenerate function.
Specifies a field to be used during the reranking process in a Knowledge Base vector search. This structure identifies metadata fields that should be considered when reordering search results to improve relevance.
Specifies the name of the metadata attribute/field to apply filters. You must match the name of the attribute/field in your data source/document metadata.
Information about a foundation model.
Details about whether a model version is available or deprecated.
Summary information for a foundation model.
The configuration details for response generation based on retrieved text chunks.
Represents the configuration of Automated Reasoning policies within an Amazon Bedrock Guardrail, including the policies to apply and confidence thresholds.
Configuration settings for integrating Automated Reasoning policies with Amazon Bedrock Guardrails.
The configuration details for the guardrail.
Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
The tier that your guardrail uses for content filters.
The tier that your guardrail uses for content filters. Consider using a tier that balances performance, accuracy, and compatibility with your existing generative AI workflows.
Contains details about how to handle harmful content.
Contains details about how to handle harmful content.
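Content filter handling boils down to a list of filter entries, each pairing a harmful-content category with independent strengths for user inputs and model outputs. A minimal sketch, using the shape accepted by create_guardrail in boto3's bedrock client (field names per the API, but verify against the current reference):

```python
# Sketch of a guardrail content policy: one entry per harmful-content
# category, with separate strengths for inputs and outputs. Shape mirrors
# the create_guardrail API; confirm details in the current reference.
content_policy_config = {
    "filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        # Prompt-attack detection applies to user inputs, so the output
        # strength is set to NONE here.
        {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
    ]
}
```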
The details for the guardrails contextual grounding filter.
The filter configuration details for the guardrails contextual grounding filter.
The details for the guardrails contextual grounding policy.
The policy configuration details for the guardrails contextual grounding policy.
The system-defined guardrail profile that you're using with your guardrail. Guardrail profiles define the destination Amazon Web Services Regions where guardrail inference requests can be automatically routed. Using guardrail profiles helps maintain guardrail performance and reliability when demand increases.
Contains details about the system-defined guardrail profile that you're using with your guardrail for cross-Region inference.
The managed word list that was configured for the guardrail. (This is a list of words that are pre-defined and managed by guardrails only.)
The managed word list to configure for the guardrail.
The PII entity configured for the guardrail.
The PII entity to configure for the guardrail.
The regular expression configured for the guardrail.
The regular expression to configure for the guardrail.
Contains details about PII entities and regular expressions configured for the guardrail.
Contains details about PII entities and regular expressions to configure for the guardrail.
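A sensitive-information policy combines named PII entity types with custom regular expressions, each carrying an action such as blocking or anonymizing the match. The sketch below mirrors the create_guardrail shape in boto3's bedrock client; treat the field names as assumptions to check against the current reference:

```python
# Sketch of a guardrail sensitive-information policy: built-in PII entity
# types plus a custom regex, each with its own action. Field names follow
# the create_guardrail API shape; verify against the current reference.
sensitive_information_policy_config = {
    "piiEntitiesConfig": [
        {"type": "EMAIL", "action": "ANONYMIZE"},
        {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
    ],
    "regexesConfig": [
        {
            "name": "internal-ticket-id",
            "description": "Matches internal ticket identifiers.",
            "pattern": r"TKT-\d{6}",  # hypothetical internal ID format
            "action": "ANONYMIZE",
        }
    ],
}
```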
Contains details about a guardrail.
Details about topics for the guardrail to identify and deny.
Details about topics for the guardrail to identify and deny.
Contains details about topics that the guardrail should identify and deny.
Contains details about topics that the guardrail should identify and deny.
The tier that your guardrail uses for denied topic filters.
The tier that your guardrail uses for denied topic filters. Consider using a tier that balances performance, accuracy, and compatibility with your existing generative AI workflows.
A word configured for the guardrail.
A word to configure for the guardrail.
Contains details about the word policy configured for the guardrail.
Contains details about the word policy to configure for the guardrail.
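A word policy has two parts: custom words you supply and managed word lists maintained by guardrails. A minimal sketch in the create_guardrail shape (field names per the boto3 API, but verify against the current reference):

```python
# Sketch of a guardrail word policy: custom words plus a managed word
# list. Field names follow the create_guardrail API shape; confirm
# against the current reference.
word_policy_config = {
    "wordsConfig": [
        # Hypothetical example: block mentions of an internal codename.
        {"text": "codename-aurora"},
    ],
    "managedWordListsConfig": [
        # Managed lists are pre-defined and maintained by guardrails.
        {"type": "PROFANITY"},
    ],
}
```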
Specifies the custom metrics, how tasks will be rated, the flow definition ARN, and your custom prompt datasets. Model evaluation jobs that use human workers support only custom prompt datasets. To learn more about custom prompt datasets and the required format, see Custom prompt datasets.
In a model evaluation job that uses human workers, you must define the name of the metric, how you want that metric rated (ratingMethod), and optionally a description of the metric.
Contains the SageMakerFlowDefinition object. The object is used to specify the prompt dataset, task type, rating method, and metric names.
Configuration for implicit filtering in Knowledge Base vector searches. Implicit filtering allows you to automatically filter search results based on metadata attributes without requiring explicit filter expressions in each query.
Information about the imported model.
Contains information about a model.
Contains information about the model or system-defined inference profile that is the source for an inference profile.
Contains information about an inference profile.
An internal server error occurred. Retry your request.
Settings for using invocation logs to customize a model.
A storage location for invocation logs.
Contains configuration details of the inference for knowledge base retrieval and response generation.
The configuration details for retrieving information from a knowledge base and generating responses.
Contains configuration details for retrieving information from a knowledge base.
Contains configuration details for retrieving information from a knowledge base and generating responses.
The configuration details for returning the results from the knowledge base vector search.
Configuration fields for invocation logging.
Contains details about an endpoint for a model from Amazon Bedrock Marketplace.
Provides a summary of an endpoint for a model from Amazon Bedrock Marketplace.
Defines the schema for a metadata attribute used in Knowledge Base vector searches. Metadata attributes provide additional context for documents and can be used for filtering and reranking search results.
Configuration for how metadata should be used during the reranking process in Knowledge Base vector searches. This determines which metadata fields are included or excluded when reordering search results.
Contains details about each model copy job.
Information about one customization job.
The data source of the model to import.
Information about the import job.
Details about the location of the input to the batch inference job.
Contains the configuration of the S3 location of the output data.
Contains the configuration of the S3 location of the input data.
Contains the configuration of the S3 location of the output data.
A summary of a batch inference job.
The configuration details for the model to process the prompt prior to retrieval and response generation.
S3 location of the output data.
Contains performance settings for a model.
Describes the usage-based pricing term.
Details about a prompt router.
The target model for a prompt router.
The template for the prompt that's sent to the model for response generation.
A summary of information about a Provisioned Throughput.
The configuration details for transforming the prompt.
Defines the value and corresponding definition for one rating in a custom metric rating scale.
Defines the value for one rating in a custom metric rating scale.
A mapping of a metadata key to a value that it should or should not equal.
Rules for filtering invocation logs. A filter can be a mapping of a metadata key to a value that it should or should not equal (a base filter), or a list of base filters that are all applied with AND or OR logical operators.
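The base-filter/operator structure described above can be sketched as nested dictionaries, in the general shape the API accepts for request-metadata filters (field names are assumptions to verify against the current reference):

```python
# Sketch of request-metadata filters over invocation logs. A base filter
# maps one metadata key to a value it must (equals) or must not
# (notEquals) match; base filters compose under a single AND or OR
# operator. Field names mirror the API shape; treat as assumptions.
request_metadata_filters = {
    "andAll": [
        {"equals": {"key": "project", "value": "chatbot"}},
        {"notEquals": {"key": "environment", "value": "test"}},
    ]
}

# A single base filter needs no composing operator:
single_filter = {"equals": {"key": "project", "value": "chatbot"}}
```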
Configuration for selectively including or excluding metadata fields during the reranking process. This allows you to control which metadata attributes are considered when reordering search results.
Thrown when attempting to delete or modify a resource that is currently being used by other resources or operations. For example, trying to delete an Automated Reasoning policy that is referenced by an active guardrail.
The specified resource Amazon Resource Name (ARN) was not found. Check the Amazon Resource Name (ARN) and try your request again.
Specifies the filters to use on the metadata attributes/fields in the knowledge base data sources before returning results.
Contains configuration details for a knowledge base retrieval and response generation.
The configuration details for retrieving information from a knowledge base.
Routing criteria for a prompt router.
The Amazon S3 data source of the model to import.
The unique wrapper object of the document from the S3 location.
Specifies the configuration for an Amazon SageMaker endpoint.
The number of requests exceeds the service quota. Resubmit your request later.
Returned if the service cannot complete the request.
For a Distillation job, the status details for sub-tasks of the job. Possible statuses for each sub-task include the following:
Describes a support term.
Details about a teacher model used for model customization.
Describes the usage terms of an offer.
The configuration details for text generation using a language model via the RetrieveAndGenerate function.
The number of requests exceeds the limit. Resubmit your request later.
The request contains more tags than can be associated with a resource (50 tags per resource). The maximum number of tags includes both existing tags and those included in your current request.
S3 location of the training data.
For a Distillation job, the status details for the training sub-task of the job.
Metrics associated with the custom job.
Array of up to 10 validators.
For a Distillation job, the status details for the validation sub-task of the job.
Input validation failed. Check your request parameters and retry the request.
The metric for the validator.
Describes the validity terms.
Configuration for using Amazon Bedrock foundation models to rerank Knowledge Base vector search results. This enables more sophisticated relevance ranking using large language models.
Configuration for the Amazon Bedrock foundation model used for reranking vector search results. This specifies which model to use and any additional parameters required by the model.
Configuration for reranking vector search results to improve relevance. Reranking applies additional relevance models to reorder the initial vector search results based on more sophisticated criteria.
The configuration of a virtual private cloud (VPC). For more information, see Protect your data using Amazon Virtual Private Cloud and Amazon Web Services PrivateLink.
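A VPC configuration reduces to the subnets and security groups a job may use. A minimal sketch in the shape the Bedrock API accepts (the IDs are placeholders):

```python
# Sketch of a VPC configuration for jobs that access resources in your
# VPC. Field names mirror the Bedrock API shape; the IDs below are
# placeholders, not real resources.
vpc_config = {
    "subnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # at least one subnet
    "securityGroupIds": ["sg-0123456789abcdef0"],         # at least one security group
}
```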