Class MultiLabelEvaluationImpl
java.lang.Object
org.tribuo.multilabel.evaluation.MultiLabelEvaluationImpl
- All Implemented Interfaces:
com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>, ClassifierEvaluation<MultiLabel>, Evaluation<MultiLabel>, MultiLabelEvaluation
The implementation of a MultiLabelEvaluation using the default metrics.
The classification metrics consider labels independently.
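For orientation, a minimal sketch of how an evaluation of this type is typically obtained and queried. It assumes a trained Model<MultiLabel> named model and a test Dataset<MultiLabel> named testData, both hypothetical; MultiLabelEvaluator is the companion evaluator in this package.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluator;

    public final class EvaluationSketch {
        // model and testData are assumed to be built elsewhere (hypothetical).
        public static void report(Model<MultiLabel> model, Dataset<MultiLabel> testData) {
            MultiLabelEvaluator evaluator = new MultiLabelEvaluator();
            MultiLabelEvaluation evaluation = evaluator.evaluate(model, testData);
            // Aggregate metrics over all labels.
            System.out.println("Micro averaged F1 = " + evaluation.microAveragedF1());
            System.out.println("Macro averaged F1 = " + evaluation.macroAveragedF1());
            System.out.println("Jaccard score     = " + evaluation.jaccardScore());
            // Prints the formatted per label table described under toString() below.
            System.out.println(evaluation);
        }
    }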
-
Method Summary
Map<MetricID<MultiLabel>,Double> asMap() - Get a map of all the metrics stored in this evaluation.
double balancedErrorRate() - Returns the balanced error rate, i.e., the mean of the per label error rates.
double confusion(MultiLabel predicted, MultiLabel truth) - Returns the number of times label truth was predicted as label predicted.
double f1(MultiLabel label) - Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
double fn() - Returns the micro averaged number of false negatives.
double fn(MultiLabel label) - Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
double fp() - Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
double fp(MultiLabel label) - Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
double get(MetricID<MultiLabel> key) - Gets the value associated with the specific metric.
ConfusionMatrix<MultiLabel> getConfusionMatrix() - Returns the underlying confusion matrix.
List<Prediction<MultiLabel>> getPredictions() - Gets the predictions stored in this evaluation.
double jaccardScore() - The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
double macroAveragedF1() - Returns the macro averaged F_1 across all the labels.
double macroAveragedPrecision() - Returns the macro averaged precision.
double macroAveragedRecall() - Returns the macro averaged recall.
double macroFN() - Returns the macro averaged number of false negatives.
double macroFP() - Returns the macro averaged number of false positives, averaged across the labels.
double macroTN() - Returns the macro averaged number of true negatives.
double macroTP() - Returns the macro averaged number of true positives, averaged across the labels.
double microAveragedF1() - Returns the micro averaged F_1 across all labels.
double microAveragedPrecision() - Returns the micro averaged precision.
double microAveragedRecall() - Returns the micro averaged recall.
double precision(MultiLabel label) - Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
double recall(MultiLabel label) - Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
double tn() - Returns the total number of true negatives.
double tn(MultiLabel label) - Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted and was not the true label.
String toString() - This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal.
double tp() - Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
double tp(MultiLabel label) - Returns the number of true positives, i.e., the number of times the label was correctly predicted.
-
Method Details
-
getPredictions
public List<Prediction<MultiLabel>> getPredictions()
Description copied from interface: Evaluation
Gets the predictions stored in this evaluation.
- Specified by: getPredictions in interface Evaluation<MultiLabel>
- Returns: The predictions.
-
balancedErrorRate
public double balancedErrorRate()
Description copied from interface: ClassifierEvaluation
Returns the balanced error rate, i.e., the mean of the per label error rates.
- Specified by: balancedErrorRate in interface ClassifierEvaluation<MultiLabel>
- Returns: The balanced error rate.
-
getConfusionMatrix
public ConfusionMatrix<MultiLabel> getConfusionMatrix()
Description copied from interface: ClassifierEvaluation
Returns the underlying confusion matrix.
- Specified by: getConfusionMatrix in interface ClassifierEvaluation<MultiLabel>
- Returns: The confusion matrix.
-
confusion
public double confusion(MultiLabel predicted, MultiLabel truth)
Description copied from interface: ClassifierEvaluation
Returns the number of times label truth was predicted as label predicted.
- Specified by: confusion in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  predicted - The predicted label.
  truth - The true label.
- Returns: The number of times the predicted label was returned for the true label.
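Since the metrics consider labels independently, confusion counts are most naturally queried with single-label MultiLabel instances. A hedged sketch; the label names "SPORTS" and "NEWS" and the surrounding class are illustrative only:

    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;

    final class ConfusionSketch {
        // Counts how often "SPORTS" was predicted when "NEWS" was the true label;
        // both label names are purely illustrative.
        static double sportsPredictedForNews(MultiLabelEvaluation evaluation) {
            MultiLabel predicted = new MultiLabel("SPORTS");
            MultiLabel truth = new MultiLabel("NEWS");
            return evaluation.confusion(predicted, truth);
        }
    }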
-
tp
public double tp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of true positives, i.e., the number of times the label was correctly predicted.
- Specified by: tp in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label to calculate.
- Returns: The number of true positives for that label.
-
tp
public double tp()
Description copied from interface: ClassifierEvaluation
Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
- Specified by: tp in interface ClassifierEvaluation<MultiLabel>
- Returns: The micro averaged number of true positives.
-
macroTP
public double macroTP()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of true positives, averaged across the labels.
- Specified by: macroTP in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged number of true positives.
-
fp
public double fp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
- Specified by: fp in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label to calculate.
- Returns: The number of false positives for that label.
-
fp
public double fp()
Description copied from interface: ClassifierEvaluation
Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
- Specified by: fp in interface ClassifierEvaluation<MultiLabel>
- Returns: The micro averaged number of false positives.
-
macroFP
public double macroFP()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of false positives, averaged across the labels.
- Specified by: macroFP in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged number of false positives.
-
tn
public double tn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted and was not the true label.
- Specified by: tn in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label to use.
- Returns: The number of true negatives.
-
tn
public double tn()
Description copied from interface: ClassifierEvaluation
Returns the total number of true negatives. This isn't very useful in multiclass problems.
- Specified by: tn in interface ClassifierEvaluation<MultiLabel>
- Returns: The number of true negatives.
-
macroTN
public double macroTN()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of true negatives.
- Specified by: macroTN in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged number of true negatives.
-
fn
public double fn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
- Specified by: fn in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The true label.
- Returns: The number of false negatives.
-
fn
public double fn()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged number of false negatives.
- Specified by: fn in interface ClassifierEvaluation<MultiLabel>
- Returns: The micro averaged number of false negatives.
-
macroFN
public double macroFN()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged number of false negatives.
- Specified by: macroFN in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged number of false negatives.
-
precision
public double precision(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
- Specified by: precision in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label.
- Returns: The precision.
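As a sanity check, the per label precision can be recomputed from the tp(MultiLabel) and fp(MultiLabel) counts above; a minimal sketch (the helper class and method name are illustrative):

    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;

    final class PrecisionCheck {
        // Recomputes the per label precision from the raw counts: tp / (tp + fp).
        static double precisionFromCounts(MultiLabelEvaluation evaluation, MultiLabel label) {
            double tp = evaluation.tp(label);
            double fp = evaluation.fp(label);
            return tp / (tp + fp);
        }
    }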
-
microAveragedPrecision
public double microAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged precision.
- Specified by: microAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
- Returns: The micro averaged precision.
-
macroAveragedPrecision
public double macroAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged precision.
- Specified by: macroAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged precision.
-
recall
public double recall(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
- Specified by: recall in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label.
- Returns: The recall.
-
microAveragedRecall
public double microAveragedRecall()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged recall.
- Specified by: microAveragedRecall in interface ClassifierEvaluation<MultiLabel>
- Returns: The micro averaged recall.
-
macroAveragedRecall
public double macroAveragedRecall()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged recall.
- Specified by: macroAveragedRecall in interface ClassifierEvaluation<MultiLabel>
- Returns: The macro averaged recall.
-
f1
public double f1(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
- Specified by: f1 in interface ClassifierEvaluation<MultiLabel>
- Parameters:
  label - The label.
- Returns: The F_1 score.
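The harmonic-mean definition can be checked against precision(MultiLabel) and recall(MultiLabel); a minimal sketch (helper names are illustrative):

    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;

    final class F1Check {
        // F_1 as the harmonic mean of precision and recall: 2PR / (P + R).
        static double f1FromPrecisionRecall(MultiLabelEvaluation evaluation, MultiLabel label) {
            double p = evaluation.precision(label);
            double r = evaluation.recall(label);
            return (2.0 * p * r) / (p + r);
        }
    }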
-
microAveragedF1
public double microAveragedF1()
Description copied from interface: ClassifierEvaluation
Returns the micro averaged F_1 across all labels.
- Specified by: microAveragedF1 in interface ClassifierEvaluation<MultiLabel>
- Returns: The F_1 score.
-
macroAveragedF1
public double macroAveragedF1()
Description copied from interface: ClassifierEvaluation
Returns the macro averaged F_1 across all the labels.
- Specified by: macroAveragedF1 in interface ClassifierEvaluation<MultiLabel>
- Returns: The F_1 score.
-
jaccardScore
public double jaccardScore()
Description copied from interface: MultiLabelEvaluation
The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
- Specified by: jaccardScore in interface MultiLabelEvaluation
- Returns: The Jaccard score.
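For intuition, a self-contained sketch of the per-prediction quantity being averaged, using plain string sets rather than Tribuo types (the helper class is illustrative):

    import java.util.HashSet;
    import java.util.Set;

    final class JaccardSketch {
        // The per-prediction quantity: intersection size divided by union size.
        static double jaccard(Set<String> truth, Set<String> predicted) {
            Set<String> intersection = new HashSet<>(truth);
            intersection.retainAll(predicted);
            Set<String> union = new HashSet<>(truth);
            union.addAll(predicted);
            return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
        }
    }

For example, jaccard(Set.of("A", "B"), Set.of("B", "C")) returns 1.0/3.0, since the intersection is {B} and the union is {A, B, C}.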
-
asMap
public Map<MetricID<MultiLabel>,Double> asMap()
Description copied from interface: Evaluation
Get a map of all the metrics stored in this evaluation. The keys are metric IDs and the values are their corresponding computed results.
- Specified by: asMap in interface Evaluation<MultiLabel>
- Returns: A map of all stored results.
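A minimal sketch of dumping every stored metric; since the map's keys are MetricID instances, each key is also a valid argument to get(MetricID) described below (the helper class is illustrative):

    import java.util.Map;
    import org.tribuo.evaluation.metrics.MetricID;
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;

    final class MetricDump {
        // Prints every stored metric id with its computed value.
        static void dump(MultiLabelEvaluation evaluation) {
            for (Map.Entry<MetricID<MultiLabel>, Double> e : evaluation.asMap().entrySet()) {
                // evaluation.get(e.getKey()) would return the same value as e.getValue().
                System.out.println(e.getKey() + " = " + e.getValue());
            }
        }
    }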
-
getProvenance
public EvaluationProvenance getProvenance()
- Specified by: getProvenance in interface com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>
-
toString
public String toString()
This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal. Uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When the labels are subset, the total row represents only the selected subset, not all the predictions; however, the accuracy and averaged metrics cover all the predictions.
-
get
public double get(MetricID<MultiLabel> key)
Description copied from interface: Evaluation
Gets the value associated with the specific metric. Throws IllegalArgumentException if the metric is unknown.
- Specified by: get in interface Evaluation<MultiLabel>
- Parameters:
  key - The metric to lookup.
- Returns: The value for that metric.
-