public interface ClassifierEvaluation&lt;T extends Classifiable&lt;T&gt;&gt; extends Evaluation&lt;T&gt;

Type Parameters:
T - The output type.
Modifier and Type | Method and Description |
---|---|
double | balancedErrorRate() Returns the balanced error rate, i.e., one minus the mean of the per-label recalls. |
double | confusion(T predicted, T truth) Returns the number of times label truth was predicted as label predicted. |
double | f1(T label) Returns the F_1 score, i.e., the harmonic mean of the precision and recall. |
double | fn() Returns the micro averaged number of false negatives. |
double | fn(T label) Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label. |
double | fp() Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives. |
double | fp(T label) Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label. |
ConfusionMatrix&lt;T&gt; | getConfusionMatrix() Returns the underlying confusion matrix. |
double | macroAveragedF1() Returns the macro averaged F_1 across all the labels. |
double | macroAveragedPrecision() Returns the macro averaged precision. |
double | macroAveragedRecall() Returns the macro averaged recall. |
double | macroFN() Returns the macro averaged number of false negatives. |
double | macroFP() Returns the macro averaged number of false positives, averaged across the labels. |
double | macroTN() Returns the macro averaged number of true negatives. |
double | macroTP() Returns the macro averaged number of true positives, averaged across the labels. |
double | microAveragedF1() Returns the micro averaged F_1 across all labels. |
double | microAveragedPrecision() Returns the micro averaged precision. |
double | microAveragedRecall() Returns the micro averaged recall. |
double | precision(T label) Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives. |
double | recall(T label) Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives. |
double | tn() Returns the total number of true negatives. |
double | tn(T label) Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted and was not the true label. |
double | tp() Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives. |
double | tp(T label) Returns the number of true positives, i.e., the number of times the label was correctly predicted. |
Methods inherited from interface Evaluation: asMap, get, getPredictions
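The micro and macro averaged variants above differ in when the averaging happens: micro averaging pools the raw counts across labels before dividing, while macro averaging computes the metric per label and then takes the unweighted mean. A minimal sketch of that distinction for precision, using made-up per-label counts (this class is illustrative and not part of the library):

```java
// Hedged sketch: micro vs. macro averaged precision from per-label
// true positive and false positive counts, like those returned by
// tp(T label) and fp(T label).
public class AveragingSketch {
    // Micro average: pool the counts across all labels, then divide once.
    public static double microPrecision(long[] tp, long[] fp) {
        long tpSum = 0, fpSum = 0;
        for (int i = 0; i < tp.length; i++) {
            tpSum += tp[i];
            fpSum += fp[i];
        }
        return (double) tpSum / (tpSum + fpSum);
    }

    // Macro average: compute precision per label, then take the mean.
    public static double macroPrecision(long[] tp, long[] fp) {
        double sum = 0.0;
        for (int i = 0; i < tp.length; i++) {
            sum += (double) tp[i] / (tp[i] + fp[i]);
        }
        return sum / tp.length;
    }

    public static void main(String[] args) {
        long[] tp = {8, 1};  // illustrative counts for labels A and B
        long[] fp = {2, 1};
        System.out.println(microPrecision(tp, fp)); // pooled: 9 / 12
        System.out.println(macroPrecision(tp, fp)); // mean of 8/10 and 1/2
    }
}
```

Note how the rare label B drags the macro average down much more than the micro average; macro averaging weights every label equally regardless of its frequency.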
double confusion(T predicted, T truth)
Returns the number of times label truth was predicted as label predicted.
Parameters:
predicted - The predicted label.
truth - The true label.

double tp(T label)
Returns the number of true positives, i.e., the number of times the label was correctly predicted.
Parameters:
label - The label to calculate.

double tp()
Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.

double macroTP()
Returns the macro averaged number of true positives, averaged across the labels.
double fp(T label)
Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
Parameters:
label - The label to calculate.

double fp()
Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.

double macroFP()
Returns the macro averaged number of false positives, averaged across the labels.
double tn(T label)
Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted and was not the true label.
Parameters:
label - The label to use.

double tn()
Returns the total number of true negatives.

double macroTN()
Returns the macro averaged number of true negatives.
double fn(T label)
Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
Parameters:
label - The true label.

double fn()
Returns the micro averaged number of false negatives.

double macroFN()
Returns the macro averaged number of false negatives.
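All four per-label counts can be read off a confusion matrix in which rows are true labels and columns are predicted labels, which is how confusion(predicted, truth) is laid out. A hedged sketch (indices stand in for labels; this class is illustrative, not part of the library):

```java
// Hedged sketch: deriving tp/fp/fn/tn for one label from a confusion
// matrix m, where m[truth][predicted] counts occurrences.
public class ConfusionCounts {
    // True positives: the diagonal entry for the label.
    public static long tp(long[][] m, int label) {
        return m[label][label];
    }

    // False positives: predicted as this label, but the truth differed.
    public static long fp(long[][] m, int label) {
        long sum = 0;
        for (int t = 0; t < m.length; t++) {
            if (t != label) sum += m[t][label];
        }
        return sum;
    }

    // False negatives: truth was this label, but it was predicted as another.
    public static long fn(long[][] m, int label) {
        long sum = 0;
        for (int p = 0; p < m[label].length; p++) {
            if (p != label) sum += m[label][p];
        }
        return sum;
    }

    // True negatives: everything else in the matrix.
    public static long tn(long[][] m, int label) {
        long total = 0;
        for (long[] row : m) {
            for (long v : row) total += v;
        }
        return total - tp(m, label) - fp(m, label) - fn(m, label);
    }

    public static void main(String[] args) {
        // Rows = true label, columns = predicted label, for labels 0 and 1.
        long[][] m = {{8, 2}, {1, 4}};
        System.out.println(tp(m, 0)); // 8
        System.out.println(fp(m, 0)); // 1
        System.out.println(fn(m, 0)); // 2
        System.out.println(tn(m, 0)); // 4
    }
}
```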
double precision(T label)
Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
Parameters:
label - The label.

double microAveragedPrecision()
Returns the micro averaged precision.

double macroAveragedPrecision()
Returns the macro averaged precision.
double recall(T label)
Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
Parameters:
label - The label.

double microAveragedRecall()
Returns the micro averaged recall.

double macroAveragedRecall()
Returns the macro averaged recall.
double f1(T label)
Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
Parameters:
label - The label.

double microAveragedF1()
Returns the micro averaged F_1 across all labels.

double macroAveragedF1()
Returns the macro averaged F_1 across all the labels.
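Since precision and recall share the true positive count, the harmonic mean simplifies to a single expression in the raw counts: F_1 = 2·tp / (2·tp + fp + fn). A minimal sketch with illustrative counts (not library code):

```java
// Hedged sketch: per-label F_1 written directly in terms of the counts.
// Algebraically equal to 2PR / (P + R) with P = tp/(tp+fp), R = tp/(tp+fn).
public class F1Sketch {
    public static double f1(long tp, long fp, long fn) {
        return 2.0 * tp / (2.0 * tp + fp + fn);
    }

    public static void main(String[] args) {
        // tp = 8, fp = 2, fn = 4: precision 8/10, recall 8/12, F_1 = 16/22
        System.out.println(f1(8, 2, 4));
    }
}
```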
double balancedErrorRate()
Returns the balanced error rate, i.e., one minus the mean of the per-label recalls.
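The balanced error rate is conventionally defined as one minus the mean of the per-label recalls, so a skewed label distribution cannot hide poor performance on rare labels. A sketch under that definition, with illustrative recall values (not produced by the library):

```java
// Hedged sketch: balanced error rate as 1 minus the macro averaged recall.
public class BerSketch {
    public static double balancedErrorRate(double[] perLabelRecall) {
        double sum = 0.0;
        for (double r : perLabelRecall) {
            sum += r;
        }
        return 1.0 - sum / perLabelRecall.length;
    }

    public static void main(String[] args) {
        // Recalls of 0.9 and 0.5 give a mean recall of 0.7, so BER is 0.3.
        System.out.println(balancedErrorRate(new double[]{0.9, 0.5}));
    }
}
```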
ConfusionMatrix&lt;T&gt; getConfusionMatrix()
Returns the underlying confusion matrix.
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.