public interface LabelEvaluation extends ClassifierEvaluation<Label>
| Modifier and Type | Method and Description |
|---|---|
| double | accuracy(): The overall accuracy of the evaluation. |
| double | accuracy(Label label): The per-label accuracy of the evaluation. |
| double | AUCROC(Label label): Area under the ROC curve. |
| double | averageAUCROC(boolean weighted): Area under the ROC curve averaged across labels. |
| double | averagedPrecision(Label label): Summarises a precision-recall curve by taking the weighted mean of the precisions at a given threshold, where the weight is the recall achieved at that threshold. |
| LabelEvaluationUtil.PRCurve | precisionRecallCurve(Label label): Calculates the precision-recall curve for a single label. |
| static String | toFormattedString(LabelEvaluation evaluation): Produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal. |
| default String | toHTML(): Returns an HTML formatted String representing this evaluation. |
| static String | toHTML(LabelEvaluation evaluation): Produces an HTML formatted String output, with appropriate tabs and newlines, suitable for integration into a webpage. |
Methods inherited from interface ClassifierEvaluation: balancedErrorRate, confusion, f1, fn, fn, fp, fp, getConfusionMatrix, macroAveragedF1, macroAveragedPrecision, macroAveragedRecall, macroFN, macroFP, macroTN, macroTP, microAveragedF1, microAveragedPrecision, microAveragedRecall, precision, recall, tn, tn, tp, tp
Methods inherited from interface Evaluation: asMap, get, getPredictions
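To show where these accessors typically fit, here is a minimal sketch of obtaining and querying a LabelEvaluation. It assumes Tribuo's LabelEvaluator, Model and MutableDataset classes and the Label(String) constructor, none of which are documented on this page, and the label name "positive" is purely illustrative.

```java
import org.tribuo.Model;
import org.tribuo.MutableDataset;
import org.tribuo.classification.Label;
import org.tribuo.classification.evaluation.LabelEvaluation;
import org.tribuo.classification.evaluation.LabelEvaluator;

public class LabelEvaluationExample {
    // Evaluates a trained classifier on a held-out dataset and prints a few metrics.
    public static void report(Model<Label> model, MutableDataset<Label> testSet) {
        LabelEvaluation evaluation = new LabelEvaluator().evaluate(model, testSet);

        // Overall accuracy and the accuracy for a single (illustrative) label.
        System.out.println("Accuracy: " + evaluation.accuracy());
        System.out.println("Accuracy(positive): " + evaluation.accuracy(new Label("positive")));

        // Terminal-friendly summary built by the static helper documented above.
        System.out.println(LabelEvaluation.toFormattedString(evaluation));
    }
}
```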
double accuracy()

double accuracy(Label label)
Parameters:
label - The target label.
double AUCROC(Label label)
Parameters:
label - The target label.
Throws:
UnsupportedOperationException - if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
double averageAUCROC(boolean weighted)
If weighted is false, use a macro average; if true, weight by the evaluation's observed class counts.
Parameters:
weighted - If true, weight by the class counts; if false, use a macro average.
Throws:
UnsupportedOperationException - if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
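Because AUCROC and averageAUCROC throw UnsupportedOperationException when the model does not generate probabilities, calling code usually guards for that case. A minimal sketch, using the imports from the earlier example and an illustrative label name:

```java
// Guard ROC metrics, since they require a model that generates probabilities.
static void printRocMetrics(LabelEvaluation evaluation) {
    try {
        double perLabel = evaluation.AUCROC(new Label("positive")); // illustrative label
        double macro = evaluation.averageAUCROC(false);             // unweighted macro average
        double weighted = evaluation.averageAUCROC(true);           // weighted by observed class counts
        System.out.printf("AUCROC(positive)=%.3f macro=%.3f weighted=%.3f%n",
                perLabel, macro, weighted);
    } catch (UnsupportedOperationException e) {
        System.out.println("ROC metrics unavailable: the model does not generate probabilities.");
    }
}
```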
double averagedPrecision(Label label)
Parameters:
label - The target label.
Throws:
UnsupportedOperationException - if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
See Also:
LabelEvaluationUtil.averagedPrecision(boolean[], double[])
LabelEvaluationUtil.PRCurve precisionRecallCurve(Label label)
Parameters:
label - The target label.
Throws:
UnsupportedOperationException - if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
See Also:
LabelEvaluationUtil.generatePRCurve(boolean[], double[])
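The structure of the returned LabelEvaluationUtil.PRCurve is not documented on this page; the sketch below assumes it exposes parallel precision, recall and thresholds arrays, so treat those field names as assumptions. It also assumes the imports from the first example plus org.tribuo.classification.evaluation.LabelEvaluationUtil.

```java
// Print the averaged precision and walk the precision-recall curve for one label.
// Field names on PRCurve (precision, recall, thresholds) are assumptions, not documented here.
static void printPRCurve(LabelEvaluation evaluation, Label target) {
    double ap = evaluation.averagedPrecision(target);
    LabelEvaluationUtil.PRCurve curve = evaluation.precisionRecallCurve(target);
    System.out.println("Averaged precision for " + target + " = " + ap);
    for (int i = 0; i < curve.thresholds.length; i++) {
        System.out.printf("threshold=%.3f precision=%.3f recall=%.3f%n",
                curve.thresholds[i], curve.precision[i], curve.recall[i]);
    }
}
```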
default String toHTML()

static String toFormattedString(LabelEvaluation evaluation)
Can be used as an implementation of the EvaluationRenderer functional interface.
Parameters:
evaluation - The evaluation to format.
static String toHTML(LabelEvaluation evaluation)
Can be used as an implementation of the EvaluationRenderer functional interface.
Parameters:
evaluation - The evaluation to format.
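A short sketch of using the two formatting helpers; it assumes Java 11+ for Files.writeString, and the output path is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Render the evaluation for a terminal and as HTML for embedding in a web page.
static void render(LabelEvaluation evaluation) throws IOException {
    System.out.println(LabelEvaluation.toFormattedString(evaluation));
    Files.writeString(Path.of("evaluation-report.html"), evaluation.toHTML()); // hypothetical path
}
```

Since toFormattedString and toHTML(LabelEvaluation) are static and take the evaluation as their only argument, they can also be passed as method references (for example LabelEvaluation::toFormattedString) wherever an EvaluationRenderer is expected, assuming that interface accepts the evaluation and returns a String as the descriptions above suggest.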