Interface EvaluationDatasetMetricConfig.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<EvaluationDatasetMetricConfig.Builder,EvaluationDatasetMetricConfig>, SdkBuilder<EvaluationDatasetMetricConfig.Builder,EvaluationDatasetMetricConfig>, SdkPojo
- Enclosing class:
EvaluationDatasetMetricConfig
-
Method Summary
All methods return EvaluationDatasetMetricConfig.Builder so that calls can be chained.
- dataset(Consumer<EvaluationDataset.Builder> dataset): Specifies the prompt dataset.
- dataset(EvaluationDataset dataset): Specifies the prompt dataset.
- metricNames(String... metricNames): The names of the metrics you want to use for your evaluation job.
- metricNames(Collection<String> metricNames): The names of the metrics you want to use for your evaluation job.
- taskType(String taskType): The type of task you want to evaluate for your evaluation job.
- taskType(EvaluationTaskType taskType): The type of task you want to evaluate for your evaluation job.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFieldNameToField, sdkFields
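The builder is typically used through chained calls. The following is a minimal sketch of building an EvaluationDatasetMetricConfig, assuming the class lives in the software.amazon.awssdk.services.bedrock.model package, that EvaluationTaskType exposes a SUMMARIZATION constant, and that EvaluationDataset.Builder has a name() setter; adjust those assumptions to your SDK version.

import software.amazon.awssdk.services.bedrock.model.EvaluationDatasetMetricConfig;
import software.amazon.awssdk.services.bedrock.model.EvaluationTaskType;

public class MetricConfigSketch {
    public static void main(String[] args) {
        // Each builder method returns the builder itself, so the calls below can be
        // chained and finished with build() (inherited from SdkBuilder).
        EvaluationDatasetMetricConfig config = EvaluationDatasetMetricConfig.builder()
                .taskType(EvaluationTaskType.SUMMARIZATION)           // assumed enum constant
                .dataset(d -> d.name("my-prompt-dataset"))            // name() assumed for illustration
                .metricNames("Builtin.Accuracy", "Builtin.Toxicity")  // varargs overload
                .build();

        System.out.println(config);
    }
}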
-
Method Details
-
taskType
The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
- Parameters:
taskType
- The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
taskType
The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
- Parameters:
taskType
- The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
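A rough illustration of the two taskType overloads summarized above; the SUMMARIZATION enum constant and the "Summarization" string value are assumptions for the sketch.

// Enum overload, continuing the imports from the first sketch.
EvaluationDatasetMetricConfig.Builder builder = EvaluationDatasetMetricConfig.builder()
        .taskType(EvaluationTaskType.SUMMARIZATION); // assumed enum constant

// String overload (if your SDK version exposes it), useful when the task type
// is only known at runtime; the value is assumed for illustration.
builder.taskType("Summarization");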
-
dataset
Specifies the prompt dataset.
- Parameters:
dataset
- Specifies the prompt dataset.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
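A short sketch of this explicit overload, continuing the imports from the first example; the name() setter on EvaluationDataset.Builder is assumed for illustration.

// Build the EvaluationDataset manually, then pass it to the explicit overload.
EvaluationDataset dataset = EvaluationDataset.builder()
        .name("my-prompt-dataset") // assumed field, for illustration only
        .build();

EvaluationDatasetMetricConfig.Builder configBuilder =
        EvaluationDatasetMetricConfig.builder().dataset(dataset);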
-
dataset
Specifies the prompt dataset.
This is a convenience method that creates an instance of the EvaluationDataset.Builder, avoiding the need to create one manually via EvaluationDataset.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataset(EvaluationDataset).
- Parameters:
dataset
- a consumer that will call methods on EvaluationDataset.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
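The same configuration as the explicit overload above, sketched with the Consumer shorthand: the SDK creates the nested builder, runs the lambda, calls build(), and forwards the result to dataset(EvaluationDataset). The name() setter is again assumed.

EvaluationDatasetMetricConfig.Builder configBuilder = EvaluationDatasetMetricConfig.builder()
        .dataset(d -> d.name("my-prompt-dataset")); // name() assumed for illustration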
-
metricNames
The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
- Parameters:
metricNames
- The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
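For example, a retrieval-only knowledge base evaluation job could pass the two retrieval metrics listed above through the varargs overload (a sketch, reusing the imports from the first example).

EvaluationDatasetMetricConfig.Builder retrievalOnly = EvaluationDatasetMetricConfig.builder()
        .metricNames("Builtin.ContextRelevance", "Builtin.ContextCoverage");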
-
metricNames
The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
- Parameters:
metricNames
- The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
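The Collection overload behaves the same way; a sketch using some of the LLM-as-a-judge metrics listed above.

import java.util.List;

List<String> judgeMetrics = List.of(
        "Builtin.Correctness",
        "Builtin.Completeness",
        "Builtin.Helpfulness");

EvaluationDatasetMetricConfig.Builder judged = EvaluationDatasetMetricConfig.builder()
        .metricNames(judgeMetrics);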
-