public final class MultiLabelEvaluationImpl extends Object implements MultiLabelEvaluation

A MultiLabelEvaluation using the default metrics. The classification metrics consider each label independently.
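Because the metrics treat each label independently, a multi-label prediction is scored as a set of binary decisions, one per label. The sketch below is plain Java illustrating that decomposition; it is not part of this library's API, and the label names are invented for the example:

```java
import java.util.Set;

public class PerLabelCounts {
    // Score one label's outcome for a single prediction: each label is an
    // independent binary decision, so it falls into exactly one of TP/FP/FN/TN.
    static String outcome(Set<String> truth, Set<String> predicted, String label) {
        boolean inTruth = truth.contains(label);
        boolean inPred = predicted.contains(label);
        if (inTruth && inPred) return "TP";
        if (!inTruth && inPred) return "FP";
        if (inTruth) return "FN";
        return "TN";
    }

    public static void main(String[] args) {
        Set<String> truth = Set.of("news", "sport");       // hypothetical true label set
        Set<String> predicted = Set.of("news", "finance"); // hypothetical prediction
        System.out.println("news: " + outcome(truth, predicted, "news"));       // TP
        System.out.println("sport: " + outcome(truth, predicted, "sport"));     // FN
        System.out.println("finance: " + outcome(truth, predicted, "finance")); // FP
    }
}
```

Summing these outcomes per label over all predictions yields the tp, fp, fn, and tn counts that the metrics below are built from.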
Modifier and Type | Method and Description |
---|---|
Map<MetricID<MultiLabel>,Double> | asMap(): Gets a map of all the metrics stored in this evaluation. |
double | balancedErrorRate(): Returns the balanced error rate, i.e., the mean of the per-label error rates (one minus the per-label recall). |
double | confusion(MultiLabel predicted, MultiLabel truth): Returns the number of times label truth was predicted as label predicted. |
double | f1(MultiLabel label): Returns the F_1 score, i.e., the harmonic mean of the precision and recall. |
double | fn(): Returns the micro averaged number of false negatives. |
double | fn(MultiLabel label): Returns the number of false negatives for that label, i.e., the number of times the true label was incorrectly predicted as another label. |
double | fp(): Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives. |
double | fp(MultiLabel label): Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label. |
double | get(MetricID<MultiLabel> key): Gets the value associated with the specific metric. |
ConfusionMatrix<MultiLabel> | getConfusionMatrix(): Returns the underlying confusion matrix. |
List<Prediction<MultiLabel>> | getPredictions(): Gets the predictions stored in this evaluation. |
EvaluationProvenance | getProvenance() |
double | jaccardScore(): Returns the average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels. |
double | macroAveragedF1(): Returns the macro averaged F_1 across all the labels. |
double | macroAveragedPrecision(): Returns the macro averaged precision. |
double | macroAveragedRecall(): Returns the macro averaged recall. |
double | macroFN(): Returns the macro averaged number of false negatives. |
double | macroFP(): Returns the macro averaged number of false positives across the labels. |
double | macroTN(): Returns the macro averaged number of true negatives. |
double | macroTP(): Returns the macro averaged number of true positives across the labels. |
double | microAveragedF1(): Returns the micro averaged F_1 across all labels. |
double | microAveragedPrecision(): Returns the micro averaged precision. |
double | microAveragedRecall(): Returns the micro averaged recall. |
double | precision(MultiLabel label): Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives. |
double | recall(MultiLabel label): Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives. |
double | tn(): Returns the total number of true negatives. |
double | tn(MultiLabel label): Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted and was not the true label. |
String | toString() |
double | tp(): Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives. |
double | tp(MultiLabel label): Returns the number of true positives, i.e., the number of times the label was correctly predicted. |
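The micro/macro distinction running through these methods can be made concrete with a small arithmetic sketch. This is plain Java with made-up per-label counts, not library code: micro averaging pools the counts across labels before dividing, while macro averaging computes the per-label metric first and then takes an unweighted mean.

```java
public class MicroVsMacro {
    // Micro averaged precision: pool all per-label counts, then divide once.
    static double microPrecision(double[] tp, double[] fp) {
        double tpSum = 0, fpSum = 0;
        for (int i = 0; i < tp.length; i++) {
            tpSum += tp[i];
            fpSum += fp[i];
        }
        return tpSum / (tpSum + fpSum);
    }

    // Macro averaged precision: per-label precision, then an unweighted mean.
    static double macroPrecision(double[] tp, double[] fp) {
        double sum = 0;
        for (int i = 0; i < tp.length; i++) {
            sum += tp[i] / (tp[i] + fp[i]);
        }
        return sum / tp.length;
    }

    public static void main(String[] args) {
        // Hypothetical counts for three labels: one frequent, two rare.
        double[] tp = {90, 5, 5};
        double[] fp = {10, 5, 0};
        System.out.printf("micro=%.4f macro=%.4f%n",
                microPrecision(tp, fp), macroPrecision(tp, fp));
        // The frequent label dominates the micro average (100/115, about 0.87),
        // while the macro average weights every label equally (0.80).
    }
}
```

The same pooling-versus-averaging distinction applies to the recall, F_1, and count methods (fn()/macroFN(), fp()/macroFP(), and so on).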
public List<Prediction<MultiLabel>> getPredictions()
Description copied from interface: Evaluation
Specified by: getPredictions in interface Evaluation<MultiLabel>

public double balancedErrorRate()
Description copied from interface: ClassifierEvaluation
Specified by: balancedErrorRate in interface ClassifierEvaluation<MultiLabel>
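As a sketch of the balanced error rate arithmetic (plain Java, not library code, and assuming the common definition as the unweighted mean of the per-label error rates, i.e., one minus the macro averaged recall):

```java
public class BalancedErrorRateSketch {
    // Balanced error rate: the unweighted mean of the per-label error rates
    // (1 - recall), which equals 1 minus the macro averaged recall.
    static double balancedErrorRate(double[] recalls) {
        double sum = 0;
        for (double r : recalls) {
            sum += 1.0 - r;
        }
        return sum / recalls.length;
    }

    public static void main(String[] args) {
        double[] recalls = {0.9, 0.5, 1.0}; // hypothetical per-label recalls
        System.out.println(balancedErrorRate(recalls)); // roughly 0.2
    }
}
```

Because every label contributes equally regardless of frequency, this metric is less forgiving of poor recall on rare labels than a plain error rate.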
public ConfusionMatrix<MultiLabel> getConfusionMatrix()
Description copied from interface: ClassifierEvaluation
Specified by: getConfusionMatrix in interface ClassifierEvaluation<MultiLabel>

public double confusion(MultiLabel predicted, MultiLabel truth)
Description copied from interface: ClassifierEvaluation
Returns the number of times label truth was predicted as label predicted.
Specified by: confusion in interface ClassifierEvaluation<MultiLabel>
Parameters:
predicted - The predicted label.
truth - The true label.

public double tp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: tp in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label to calculate.

public double tp()
Description copied from interface: ClassifierEvaluation
Specified by: tp in interface ClassifierEvaluation<MultiLabel>

public double macroTP()
Description copied from interface: ClassifierEvaluation
Specified by: macroTP in interface ClassifierEvaluation<MultiLabel>

public double fp(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: fp in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label to calculate.

public double fp()
Description copied from interface: ClassifierEvaluation
Specified by: fp in interface ClassifierEvaluation<MultiLabel>

public double macroFP()
Description copied from interface: ClassifierEvaluation
Specified by: macroFP in interface ClassifierEvaluation<MultiLabel>

public double tn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: tn in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label to use.

public double tn()
Description copied from interface: ClassifierEvaluation
Specified by: tn in interface ClassifierEvaluation<MultiLabel>

public double macroTN()
Description copied from interface: ClassifierEvaluation
Specified by: macroTN in interface ClassifierEvaluation<MultiLabel>

public double fn(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: fn in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The true label.

public double fn()
Description copied from interface: ClassifierEvaluation
Specified by: fn in interface ClassifierEvaluation<MultiLabel>

public double macroFN()
Description copied from interface: ClassifierEvaluation
Specified by: macroFN in interface ClassifierEvaluation<MultiLabel>
public double precision(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: precision in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label.

public double microAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Specified by: microAveragedPrecision in interface ClassifierEvaluation<MultiLabel>

public double macroAveragedPrecision()
Description copied from interface: ClassifierEvaluation
Specified by: macroAveragedPrecision in interface ClassifierEvaluation<MultiLabel>

public double recall(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: recall in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label.

public double microAveragedRecall()
Description copied from interface: ClassifierEvaluation
Specified by: microAveragedRecall in interface ClassifierEvaluation<MultiLabel>

public double macroAveragedRecall()
Description copied from interface: ClassifierEvaluation
Specified by: macroAveragedRecall in interface ClassifierEvaluation<MultiLabel>

public double f1(MultiLabel label)
Description copied from interface: ClassifierEvaluation
Specified by: f1 in interface ClassifierEvaluation<MultiLabel>
Parameters:
label - The label.

public double microAveragedF1()
Description copied from interface: ClassifierEvaluation
Specified by: microAveragedF1 in interface ClassifierEvaluation<MultiLabel>

public double macroAveragedF1()
Description copied from interface: ClassifierEvaluation
Specified by: macroAveragedF1 in interface ClassifierEvaluation<MultiLabel>
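The F_1 methods above all reduce to the harmonic mean of a precision and a recall; the micro variant uses the pooled counts, the macro variant averages the per-label F_1 scores. A plain-Java sketch of that arithmetic (not library code, values invented):

```java
public class F1Sketch {
    // F_1 is the harmonic mean of precision and recall.
    static double f1(double precision, double recall) {
        if (precision + recall == 0.0) {
            return 0.0; // conventionally zero when both inputs are zero
        }
        return 2.0 * precision * recall / (precision + recall);
    }

    // Macro averaged F_1: the unweighted mean of the per-label F_1 scores.
    static double macroF1(double[] precisions, double[] recalls) {
        double sum = 0;
        for (int i = 0; i < precisions.length; i++) {
            sum += f1(precisions[i], recalls[i]);
        }
        return sum / precisions.length;
    }

    public static void main(String[] args) {
        // Harmonic mean punishes imbalance: precision 0.5, recall 1.0 gives
        // roughly 0.667, not the arithmetic mean of 0.75.
        System.out.println(f1(0.5, 1.0));
        System.out.println(macroF1(new double[]{0.5, 1.0}, new double[]{1.0, 0.5}));
    }
}
```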
public double jaccardScore()
Description copied from interface: MultiLabelEvaluation
Specified by: jaccardScore in interface MultiLabelEvaluation
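Per the summary table, the Jaccard score averages, over all predictions, the size of the intersection of the true and predicted label sets divided by the size of their union. A plain-Java sketch of the per-prediction quantity (not library code; the label names and the choice to score an empty-versus-empty pair as 1.0 are assumptions for the example):

```java
import java.util.HashSet;
import java.util.Set;

public class JaccardSketch {
    // Jaccard score for one prediction:
    // |truth intersect predicted| / |truth union predicted|.
    static double jaccard(Set<String> truth, Set<String> predicted) {
        Set<String> intersection = new HashSet<>(truth);
        intersection.retainAll(predicted);
        Set<String> union = new HashSet<>(truth);
        union.addAll(predicted);
        // Convention chosen here: two empty sets count as a perfect match.
        return union.isEmpty() ? 1.0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        // {news, sport} vs {news, finance}: intersection size 1, union size 3.
        double score = jaccard(Set.of("news", "sport"), Set.of("news", "finance"));
        System.out.println(score); // 1/3
    }
}
```

The evaluation's jaccardScore() is the mean of this per-prediction value across the whole set of predictions.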
public Map<MetricID<MultiLabel>,Double> asMap()
Description copied from interface: Evaluation
Specified by: asMap in interface Evaluation<MultiLabel>
public EvaluationProvenance getProvenance()
Specified by: getProvenance in interface com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>

public double get(MetricID<MultiLabel> key)
Description copied from interface: Evaluation
Specified by: get in interface Evaluation<MultiLabel>
Parameters:
key - The metric to lookup.
Throws:
IllegalArgumentException - if the metric is unknown.

Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.