AWS SDK for C++  1.8.95
Aws::Comprehend::Model::ClassifierEvaluationMetrics Class Reference

#include <ClassifierEvaluationMetrics.h>

Public Member Functions

 ClassifierEvaluationMetrics ()
 
 ClassifierEvaluationMetrics (Aws::Utils::Json::JsonView jsonValue)
 
ClassifierEvaluationMetrics & operator= (Aws::Utils::Json::JsonView jsonValue)
 
Aws::Utils::Json::JsonValue Jsonize () const
 
double GetAccuracy () const
 
bool AccuracyHasBeenSet () const
 
void SetAccuracy (double value)
 
ClassifierEvaluationMetrics & WithAccuracy (double value)
 
double GetPrecision () const
 
bool PrecisionHasBeenSet () const
 
void SetPrecision (double value)
 
ClassifierEvaluationMetrics & WithPrecision (double value)
 
double GetRecall () const
 
bool RecallHasBeenSet () const
 
void SetRecall (double value)
 
ClassifierEvaluationMetrics & WithRecall (double value)
 
double GetF1Score () const
 
bool F1ScoreHasBeenSet () const
 
void SetF1Score (double value)
 
ClassifierEvaluationMetrics & WithF1Score (double value)
 
double GetMicroPrecision () const
 
bool MicroPrecisionHasBeenSet () const
 
void SetMicroPrecision (double value)
 
ClassifierEvaluationMetrics & WithMicroPrecision (double value)
 
double GetMicroRecall () const
 
bool MicroRecallHasBeenSet () const
 
void SetMicroRecall (double value)
 
ClassifierEvaluationMetrics & WithMicroRecall (double value)
 
double GetMicroF1Score () const
 
bool MicroF1ScoreHasBeenSet () const
 
void SetMicroF1Score (double value)
 
ClassifierEvaluationMetrics & WithMicroF1Score (double value)
 
double GetHammingLoss () const
 
bool HammingLossHasBeenSet () const
 
void SetHammingLoss (double value)
 
ClassifierEvaluationMetrics & WithHammingLoss (double value)
 

Detailed Description

Describes the result metrics for the test data associated with a document classifier.

See Also:

AWS API Reference

Definition at line 30 of file ClassifierEvaluationMetrics.h.
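
A minimal usage sketch (not taken from this reference): it assumes the metrics are read back after describing a trained classifier, so the ComprehendClient setup, DescribeDocumentClassifierRequest, the example ARN, and the GetDocumentClassifierProperties().GetClassifierMetadata().GetEvaluationMetrics() path are assumptions based on the wider Comprehend model; only the ClassifierEvaluationMetrics accessors come from this page.

#include <aws/core/Aws.h>
#include <aws/comprehend/ComprehendClient.h>
#include <aws/comprehend/model/DescribeDocumentClassifierRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::Comprehend::ComprehendClient client;

        Aws::Comprehend::Model::DescribeDocumentClassifierRequest request;
        // Hypothetical ARN of a trained document classifier.
        request.SetDocumentClassifierArn(
            "arn:aws:comprehend:us-east-1:123456789012:document-classifier/example");

        auto outcome = client.DescribeDocumentClassifier(request);
        if (outcome.IsSuccess())
        {
            const auto& metrics = outcome.GetResult()
                                         .GetDocumentClassifierProperties()
                                         .GetClassifierMetadata()
                                         .GetEvaluationMetrics();

            // Each value is optional; check the *HasBeenSet() accessors first.
            if (metrics.AccuracyHasBeenSet())
                std::cout << "Accuracy:  " << metrics.GetAccuracy() << "\n";
            if (metrics.PrecisionHasBeenSet())
                std::cout << "Precision: " << metrics.GetPrecision() << "\n";
            if (metrics.RecallHasBeenSet())
                std::cout << "Recall:    " << metrics.GetRecall() << "\n";
            if (metrics.F1ScoreHasBeenSet())
                std::cout << "F1 score:  " << metrics.GetF1Score() << "\n";
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}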

Constructor & Destructor Documentation

◆ ClassifierEvaluationMetrics() [1/2]

Aws::Comprehend::Model::ClassifierEvaluationMetrics::ClassifierEvaluationMetrics ( )

◆ ClassifierEvaluationMetrics() [2/2]

Aws::Comprehend::Model::ClassifierEvaluationMetrics::ClassifierEvaluationMetrics ( Aws::Utils::Json::JsonView  jsonValue)

Member Function Documentation

◆ AccuracyHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::AccuracyHasBeenSet ( ) const
inline

The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.

Definition at line 51 of file ClassifierEvaluationMetrics.h.
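
As a worked illustration of that definition (hypothetical counts, not an SDK call):

#include <iostream>

// Illustration only: Accuracy is the number of correctly recognized labels in
// the test documents divided by the total number of labels in those documents.
int main()
{
    int correctlyRecognizedLabels = 180;   // hypothetical count
    int totalLabels = 200;                 // hypothetical count
    double accuracy = static_cast<double>(correctlyRecognizedLabels) / totalLabels;
    std::cout << "Accuracy = " << accuracy << "\n";  // 0.9
}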

◆ F1ScoreHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::F1ScoreHasBeenSet ( ) const
inline

A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 136 of file ClassifierEvaluationMetrics.h.
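
The harmonic-average relationship can be checked with a small illustration (the precision and recall values below are hypothetical, not SDK output):

#include <iostream>

// Illustration of the relationship described above: F1Score is the harmonic
// mean of the Precision and Recall values.
int main()
{
    double precision = 0.8;   // e.g. the value returned by GetPrecision()
    double recall    = 0.6;   // e.g. the value returned by GetRecall()
    double f1 = (precision + recall) > 0.0
                    ? 2.0 * precision * recall / (precision + recall)
                    : 0.0;
    std::cout << "F1 = " << f1 << "\n";  // ~0.686; 1 is best, 0 is worst
}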

◆ GetAccuracy()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetAccuracy ( ) const
inline

The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.

Definition at line 44 of file ClassifierEvaluationMetrics.h.

◆ GetF1Score()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetF1Score ( ) const
inline

A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 128 of file ClassifierEvaluationMetrics.h.

◆ GetHammingLoss()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetHammingLoss ( ) const
inline

Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.

Definition at line 275 of file ClassifierEvaluationMetrics.h.
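
A small illustration of that definition with hypothetical multi-label predictions (not an SDK facility):

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Illustration only: Hamming loss counts the fraction of per-label decisions
// that are wrong across the test documents, so values closer to zero are better.
int main()
{
    std::vector<std::string> allLabels = {"SPORTS", "FINANCE", "WEATHER"};

    // True and predicted label sets for two hypothetical test documents.
    std::vector<std::set<std::string>> truth     = {{"SPORTS"}, {"FINANCE", "WEATHER"}};
    std::vector<std::set<std::string>> predicted = {{"SPORTS", "WEATHER"}, {"FINANCE"}};

    int wrong = 0;
    int total = 0;
    for (size_t doc = 0; doc < truth.size(); ++doc)
    {
        for (const auto& label : allLabels)
        {
            bool inTruth = truth[doc].count(label) > 0;
            bool inPred  = predicted[doc].count(label) > 0;
            if (inTruth != inPred) ++wrong;
            ++total;
        }
    }
    std::cout << "Hamming loss = " << static_cast<double>(wrong) / total << "\n";  // 2/6 ~ 0.333
}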

◆ GetMicroF1Score()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetMicroF1Score ( ) const
inline

A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision and Micro Recall values. The Micro F1Score is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 243 of file ClassifierEvaluationMetrics.h.

◆ GetMicroPrecision()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetMicroPrecision ( ) const
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.

Definition at line 162 of file ClassifierEvaluationMetrics.h.

◆ GetMicroRecall()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetMicroRecall ( ) const
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text the model can predict. It is the percentage of correct categories in the text that can be found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.

Definition at line 201 of file ClassifierEvaluationMetrics.h.
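
One way to picture the micro-averaged metrics (MicroPrecision, MicroRecall, MicroF1Score): pool the per-label counts before dividing, instead of averaging per-label scores. The counts below are hypothetical and the computation is an illustration, not an SDK call:

#include <iostream>
#include <map>
#include <string>

// Illustration only: micro-averaged precision/recall/F1 pool true positives,
// false positives, and false negatives across all labels, so frequent labels
// weigh more heavily than in the label-averaged Precision and Recall metrics.
struct LabelCounts { int truePositives; int falsePositives; int falseNegatives; };

int main()
{
    std::map<std::string, LabelCounts> counts = {
        {"SPORTS",  {90, 10,  5}},
        {"FINANCE", {40, 20, 10}},
        {"WEATHER", { 5,  1,  9}},
    };

    int tp = 0, fp = 0, fn = 0;
    for (const auto& entry : counts)
    {
        tp += entry.second.truePositives;
        fp += entry.second.falsePositives;
        fn += entry.second.falseNegatives;
    }

    double microPrecision = static_cast<double>(tp) / (tp + fp);
    double microRecall    = static_cast<double>(tp) / (tp + fn);
    double microF1 = 2.0 * microPrecision * microRecall / (microPrecision + microRecall);

    std::cout << "Micro precision = " << microPrecision << "\n"   // 135/166
              << "Micro recall    = " << microRecall    << "\n"   // 135/159
              << "Micro F1        = " << microF1        << "\n";
}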

◆ GetPrecision()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetPrecision ( ) const
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.

Definition at line 73 of file ClassifierEvaluationMetrics.h.

◆ GetRecall()

double Aws::Comprehend::Model::ClassifierEvaluationMetrics::GetRecall ( ) const
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.

Definition at line 101 of file ClassifierEvaluationMetrics.h.
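
By contrast with the Micro variants above, Precision and Recall come from averaging the per-label scores; a small illustration with hypothetical per-label values (not an SDK call):

#include <iostream>
#include <vector>

// Illustration only: the Precision and Recall reported here are averages over
// the individual labels, unlike the Micro variants, which pool counts first.
int main()
{
    // Hypothetical per-label precision and recall measured on the test documents.
    std::vector<double> labelPrecision = {0.90, 0.67, 0.83};
    std::vector<double> labelRecall    = {0.95, 0.80, 0.36};

    double precisionSum = 0.0, recallSum = 0.0;
    for (size_t i = 0; i < labelPrecision.size(); ++i)
    {
        precisionSum += labelPrecision[i];
        recallSum    += labelRecall[i];
    }

    std::cout << "Precision (label average) = " << precisionSum / labelPrecision.size() << "\n"
              << "Recall (label average)    = " << recallSum / labelRecall.size() << "\n";
}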

◆ HammingLossHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::HammingLossHasBeenSet ( ) const
inline

Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.

Definition at line 282 of file ClassifierEvaluationMetrics.h.

◆ Jsonize()

Aws::Utils::Json::JsonValue Aws::Comprehend::Model::ClassifierEvaluationMetrics::Jsonize ( ) const
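
A minimal round-trip sketch of the JsonView constructor and Jsonize(); the JSON field names are assumed to follow the Comprehend API member names, and the numbers are hypothetical:

#include <iostream>
#include <aws/core/Aws.h>
#include <aws/core/utils/json/JsonSerializer.h>
#include <aws/comprehend/model/ClassifierEvaluationMetrics.h>

// Sketch: build the object from a parsed JsonView, then serialize it back to
// JSON with Jsonize(), e.g. for logging.
int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::Utils::Json::JsonValue payload(R"({"Accuracy": 0.91, "F1Score": 0.88})");
        Aws::Comprehend::Model::ClassifierEvaluationMetrics metrics(payload.View());

        std::cout << "Accuracy: " << metrics.GetAccuracy() << "\n";

        Aws::Utils::Json::JsonValue roundTrip = metrics.Jsonize();
        std::cout << roundTrip.View().WriteReadable() << "\n";
    }
    Aws::ShutdownAPI(options);
    return 0;
}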

◆ MicroF1ScoreHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::MicroF1ScoreHasBeenSet ( ) const
inline

A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision and Micro Recall values. The Micro F1Score is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 251 of file ClassifierEvaluationMetrics.h.

◆ MicroPrecisionHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::MicroPrecisionHasBeenSet ( ) const
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.

Definition at line 171 of file ClassifierEvaluationMetrics.h.

◆ MicroRecallHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::MicroRecallHasBeenSet ( ) const
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text the model can predict. It is the percentage of correct categories in the text that can be found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.

Definition at line 212 of file ClassifierEvaluationMetrics.h.

◆ operator=()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::operator= ( Aws::Utils::Json::JsonView  jsonValue)

◆ PrecisionHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::PrecisionHasBeenSet ( ) const
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.

Definition at line 80 of file ClassifierEvaluationMetrics.h.

◆ RecallHasBeenSet()

bool Aws::Comprehend::Model::ClassifierEvaluationMetrics::RecallHasBeenSet ( ) const
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.

Definition at line 107 of file ClassifierEvaluationMetrics.h.

◆ SetAccuracy()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetAccuracy ( double  value)
inline

The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.

Definition at line 58 of file ClassifierEvaluationMetrics.h.

◆ SetF1Score()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetF1Score ( double  value)
inline

A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 144 of file ClassifierEvaluationMetrics.h.

◆ SetHammingLoss()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetHammingLoss ( double  value)
inline

Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.

Definition at line 289 of file ClassifierEvaluationMetrics.h.

◆ SetMicroF1Score()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetMicroF1Score ( double  value)
inline

A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision and Micro Recall values. The Micro F1Score is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 259 of file ClassifierEvaluationMetrics.h.

◆ SetMicroPrecision()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetMicroPrecision ( double  value)
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.

Definition at line 180 of file ClassifierEvaluationMetrics.h.

◆ SetMicroRecall()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetMicroRecall ( double  value)
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text the model can predict. It is the percentage of correct categories in the text that can be found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.

Definition at line 223 of file ClassifierEvaluationMetrics.h.

◆ SetPrecision()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetPrecision ( double  value)
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.

Definition at line 87 of file ClassifierEvaluationMetrics.h.

◆ SetRecall()

void Aws::Comprehend::Model::ClassifierEvaluationMetrics::SetRecall ( double  value)
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.

Definition at line 113 of file ClassifierEvaluationMetrics.h.

◆ WithAccuracy()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithAccuracy ( double  value)
inline

The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.

Definition at line 65 of file ClassifierEvaluationMetrics.h.
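
Each With*() setter returns a reference to the same object, so the metric fields can be populated in one chained expression; a short sketch with hypothetical values:

#include <iostream>
#include <aws/comprehend/model/ClassifierEvaluationMetrics.h>

// Sketch of the fluent setters: WithAccuracy(), WithPrecision(), etc. return
// ClassifierEvaluationMetrics&, allowing the calls to be chained.
int main()
{
    Aws::Comprehend::Model::ClassifierEvaluationMetrics metrics;
    metrics.WithAccuracy(0.91)
           .WithPrecision(0.84)
           .WithRecall(0.79)
           .WithF1Score(0.81);

    std::cout << "F1: " << metrics.GetF1Score() << "\n";  // 0.81
}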

◆ WithF1Score()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithF1Score ( double  value)
inline

A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 152 of file ClassifierEvaluationMetrics.h.

◆ WithHammingLoss()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithHammingLoss ( double  value)
inline

Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.

Definition at line 296 of file ClassifierEvaluationMetrics.h.

◆ WithMicroF1Score()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithMicroF1Score ( double  value)
inline

A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision and Micro Recall values. The Micro F1Score is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.

Definition at line 267 of file ClassifierEvaluationMetrics.h.

◆ WithMicroPrecision()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithMicroPrecision ( double  value)
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.

Definition at line 189 of file ClassifierEvaluationMetrics.h.

◆ WithMicroRecall()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithMicroRecall ( double  value)
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text the model can predict. It is the percentage of correct categories in the text that can be found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.

Definition at line 234 of file ClassifierEvaluationMetrics.h.

◆ WithPrecision()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithPrecision ( double  value)
inline

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.

Definition at line 94 of file ClassifierEvaluationMetrics.h.

◆ WithRecall()

ClassifierEvaluationMetrics& Aws::Comprehend::Model::ClassifierEvaluationMetrics::WithRecall ( double  value)
inline

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.

Definition at line 119 of file ClassifierEvaluationMetrics.h.


The documentation for this class was generated from the following file:

ClassifierEvaluationMetrics.h