Package org.tribuo.multilabel.evaluation
Class MultiLabelEvaluationImpl
java.lang.Object
org.tribuo.multilabel.evaluation.MultiLabelEvaluationImpl
- All Implemented Interfaces:
com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>, ClassifierEvaluation<MultiLabel>, Evaluation<MultiLabel>, MultiLabelEvaluation
The implementation of a MultiLabelEvaluation using the default metrics.
The classification metrics consider labels independently.
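For orientation, here is a minimal usage sketch. The model and testData variables are hypothetical stand-ins for a trained multi-label model and a test dataset; the evaluator is the companion MultiLabelEvaluator class from this package.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluator;

    public class EvaluationExample {
        // Hypothetical helper: prints headline metrics for a trained model.
        public static void summarize(Model<MultiLabel> model, Dataset<MultiLabel> testData) {
            // evaluate(...) returns a MultiLabelEvaluation backed by this implementation.
            MultiLabelEvaluation evaluation = new MultiLabelEvaluator().evaluate(model, testData);
            System.out.println("Micro-averaged F1: " + evaluation.microAveragedF1());
            System.out.println("Jaccard score: " + evaluation.jaccardScore());
            System.out.println(evaluation); // formatted per-label table from toString()
        }
    }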
-
Method Summary
Modifier and Type / Method / Description
Map<MetricID<MultiLabel>,Double> asMap()
    Get a map of all the metrics stored in this evaluation.
double balancedErrorRate()
    Returns the balanced error rate, i.e., the mean of the per label recalls.
double confusion(MultiLabel predicted, MultiLabel truth)
    Returns the number of times label truth was predicted as label predicted.
double f1(MultiLabel label)
    Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
double fn()
    Returns the micro averaged number of false negatives.
double fn(MultiLabel label)
    Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
double fp()
    Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
double fp(MultiLabel label)
    Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
double get(MetricID<MultiLabel> key)
    Gets the value associated with the specific metric.
ConfusionMatrix<MultiLabel> getConfusionMatrix()
    Returns the underlying confusion matrix.
List<Prediction<MultiLabel>> getPredictions()
    Gets the predictions stored in this evaluation.
EvaluationProvenance getProvenance()
double jaccardScore()
    The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
double macroAveragedF1()
    Returns the macro averaged F_1 across all the labels.
double macroAveragedPrecision()
    Returns the macro averaged precision.
double macroAveragedRecall()
    Returns the macro averaged recall.
double macroFN()
    Returns the macro averaged number of false negatives.
double macroFP()
    Returns the macro averaged number of false positives, averaged across the labels.
double macroTN()
    Returns the macro averaged number of true negatives.
double macroTP()
    Returns the macro averaged number of true positives, averaged across the labels.
double microAveragedF1()
    Returns the micro averaged F_1 across all labels.
double microAveragedPrecision()
    Returns the micro averaged precision.
double microAveragedRecall()
    Returns the micro averaged recall.
double precision(MultiLabel label)
    Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
double recall(MultiLabel label)
    Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
double tn()
    Returns the total number of true negatives.
double tn(MultiLabel label)
    Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted, and was not the true label.
String toString()
    This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal.
double tp()
    Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
double tp(MultiLabel label)
    Returns the number of true positives, i.e., the number of times the label was correctly predicted.
-
Method Details
-
getPredictions
public List<Prediction<MultiLabel>> getPredictions()
Description copied from interface: Evaluation
Gets the predictions stored in this evaluation.
- Specified by:
getPredictions in interface Evaluation<MultiLabel>
- Returns:
- The predictions.
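A short sketch of inspecting these predictions, reusing the hypothetical evaluation instance from the class-level example (assumes java.util.List and org.tribuo.Prediction are imported):

    // Each Prediction pairs one test example with its predicted MultiLabel set.
    List<Prediction<MultiLabel>> predictions = evaluation.getPredictions();
    System.out.println(predictions.size() + " predictions stored");
    System.out.println("First predicted label set: " + predictions.get(0).getOutput());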
-
balancedErrorRate
public double balancedErrorRate()
Description copied from interface: ClassifierEvaluation
Returns the balanced error rate, i.e., the mean of the per label recalls.
- Specified by:
balancedErrorRate in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The balanced error rate.
-
getConfusionMatrix
public ConfusionMatrix<MultiLabel> getConfusionMatrix()
Description copied from interface: ClassifierEvaluation
Returns the underlying confusion matrix.
- Specified by:
getConfusionMatrix in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The confusion matrix.
-
confusion
public double confusion(MultiLabel predicted, MultiLabel truth)
Description copied from interface: ClassifierEvaluation
Returns the number of times label truth was predicted as label predicted.
- Specified by:
confusion in interface ClassifierEvaluation<MultiLabel>
- Parameters:
predicted - The predicted label.
truth - The true label.
- Returns:
- The number of times the predicted label was returned for the true label.
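For illustration (the label names here are invented), a single confusion cell can be queried as follows:

    // How often the true label "monkey" was predicted as "sparrow".
    MultiLabel truth = new MultiLabel("monkey");
    MultiLabel predicted = new MultiLabel("sparrow");
    double count = evaluation.confusion(predicted, truth);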
-
tp
public double tp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of true positives, i.e., the number of times the label was correctly predicted.
- Specified by:
tp in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label to calculate.
- Returns:
- The number of true positives for that label.
-
tp
public double tp()
Description copied from interface: ClassifierEvaluation
Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
- Specified by:
tp in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The micro averaged number of true positives.
-
macroTP
public double macroTP()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of true positives, averaged across the labels.
- Specified by:
macroTP in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged number of true positives.
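Note the relationship implied by these definitions: tp() sums the true positive counts over all labels, while macroTP() averages them, so with L labels one would expect macroTP() to equal tp() / L. The same micro/macro relationship holds for the false positive, true negative, and false negative counts below.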
-
fp
public double fp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
- Specified by:
fp in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label to calculate.
- Returns:
- The number of false positives for that label.
-
fp
public double fp()
Description copied from interface: ClassifierEvaluation
Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
- Specified by:
fp in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The micro averaged number of false positives.
-
macroFP
public double macroFP()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of false positives, averaged across the labels.
- Specified by:
macroFP in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged number of false positives.
-
tn
public double tn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted, and was not the true label.
- Specified by:
tn in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label to use.
- Returns:
- The number of true negatives.
-
tn
public double tn()
Description copied from interface: ClassifierEvaluation
Returns the total number of true negatives. This isn't very useful in multiclass problems.
- Specified by:
tn in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The number of true negatives.
-
macroTN
public double macroTN()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of true negatives.
- Specified by:
macroTN in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged number of true negatives.
-
fn
public double fn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
- Specified by:
fn in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The true label.
- Returns:
- The number of false negatives.
-
fn
public double fn()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged number of false negatives.
- Specified by:
fn in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The micro averaged number of false negatives.
-
macroFN
public double macroFN()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of false negatives.
- Specified by:
macroFN in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged number of false negatives.
-
precision
public double precision(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
- Specified by:
precision in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label.
- Returns:
- The precision.
-
microAveragedPrecision
public double microAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged precision.
- Specified by:
microAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The micro averaged precision.
-
macroAveragedPrecision
public double macroAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged precision.
- Specified by:
macroAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged precision.
-
recall
public double recall(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
- Specified by:
recall in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label.
- Returns:
- The recall.
-
microAveragedRecall
public double microAveragedRecall()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged recall.
- Specified by:
microAveragedRecall in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The micro averaged recall.
-
macroAveragedRecall
public double macroAveragedRecall()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged recall.
- Specified by:
macroAveragedRecall in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The macro averaged recall.
-
f1
public double f1(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
- Specified by:
f1 in interface ClassifierEvaluation<MultiLabel>
- Parameters:
label - The label.
- Returns:
- The F_1 score.
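As a worked sanity check (a sketch; label is a hypothetical MultiLabel and evaluation the instance from the earlier examples), the three per-label scores relate as follows:

    double p = evaluation.precision(label);
    double r = evaluation.recall(label);
    // The harmonic mean of precision and recall:
    double harmonicMean = 2.0 * p * r / (p + r);
    // harmonicMean should match evaluation.f1(label) up to floating point error.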
-
microAveragedF1
public double microAveragedF1()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged F_1 across all labels.
- Specified by:
microAveragedF1 in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The F_1 score.
-
macroAveragedF1
public double macroAveragedF1()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged F_1 across all the labels.
- Specified by:
macroAveragedF1 in interface ClassifierEvaluation<MultiLabel>
- Returns:
- The F_1 score.
-
jaccardScore
public double jaccardScore()
Description copied from interface: MultiLabelEvaluation
The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
- Specified by:
jaccardScore in interface MultiLabelEvaluation
- Returns:
- The Jaccard score.
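In symbols, for N predictions with true label sets Y_i and predicted label sets \hat{Y}_i, the description above corresponds to:

    J = \frac{1}{N} \sum_{i=1}^{N} \frac{|Y_i \cap \hat{Y}_i|}{|Y_i \cup \hat{Y}_i|}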
-
asMap
public Map<MetricID<MultiLabel>,Double> asMap()
Description copied from interface: Evaluation
Get a map of all the metrics stored in this evaluation. The keys are metric IDs and the values are their corresponding computed results.
- Specified by:
asMap in interface Evaluation<MultiLabel>
- Returns:
- A map of all stored results.
-
getProvenance
public EvaluationProvenance getProvenance()
- Specified by:
getProvenance in interface com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>
-
toString
public String toString()
This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal. It uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When the labels are subset, the total row covers only the selected subset rather than all the predictions; the accuracy and averaged metrics, however, cover all the predictions.
-
get
public double get(MetricID<MultiLabel> key)
Description copied from interface: Evaluation
Gets the value associated with the specific metric. Throws IllegalArgumentException if the metric is unknown.
- Specified by:
get in interface Evaluation<MultiLabel>
- Parameters:
key - The metric to lookup.
- Returns:
- The value for that metric.
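A brief sketch tying get(MetricID) to asMap(), again using the hypothetical evaluation instance (assumes java.util.Map and org.tribuo.evaluation.metrics.MetricID are imported):

    // Iterate every stored metric; get(key) performs the same lookup for a single key,
    // throwing IllegalArgumentException when the key is unknown.
    for (Map.Entry<MetricID<MultiLabel>, Double> e : evaluation.asMap().entrySet()) {
        double sameValue = evaluation.get(e.getKey());
        System.out.println(e.getKey() + " = " + sameValue);
    }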
-