Interface LabelEvaluation
- All Superinterfaces:
ClassifierEvaluation<Label>, Evaluation<Label>, com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>
Method Summary
- double accuracy(): The overall accuracy of the evaluation.
- double accuracy(Label label): The per label accuracy of the evaluation.
- double AUCROC(Label label): Area under the ROC curve.
- double averageAUCROC(boolean weighted): Area under the ROC curve averaged across labels.
- double averagedPrecision(Label label): Summarises a Precision-Recall Curve by taking the weighted mean of the precisions at a given threshold, where the weight is the recall achieved at that threshold.
- LabelEvaluationUtil.PRCurve precisionRecallCurve(Label label): Calculates the Precision Recall curve for a single label.
- static String toFormattedString(LabelEvaluation evaluation): Produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal.
- default String toHTML(): Returns an HTML formatted String representing this evaluation.
- static String toHTML(LabelEvaluation evaluation): Produces an HTML formatted String output, with appropriate tabs and newlines, suitable for integration into a webpage.
Methods inherited from interface org.tribuo.classification.evaluation.ClassifierEvaluation
balancedErrorRate, confusion, f1, fn, fn, fp, fp, getConfusionMatrix, macroAveragedF1, macroAveragedPrecision, macroAveragedRecall, macroFN, macroFP, macroTN, macroTP, microAveragedF1, microAveragedPrecision, microAveragedRecall, precision, recall, tn, tn, tp, tp
Methods inherited from interface org.tribuo.evaluation.Evaluation
asMap, get, getPredictions
Methods inherited from interface com.oracle.labs.mlrg.olcut.provenance.Provenancable
getProvenance
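As a usage sketch (not part of the Javadoc itself), the snippet below shows one way to obtain a LabelEvaluation and read off a few of the metrics listed above. It assumes a trained Model<Label> and a held-out test Dataset<Label>, and uses LabelEvaluator to perform the evaluation.

```java
import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.classification.Label;
import org.tribuo.classification.evaluation.LabelEvaluation;
import org.tribuo.classification.evaluation.LabelEvaluator;

public final class EvaluationSketch {
    // Evaluates a trained classifier on held-out data and prints headline metrics.
    public static LabelEvaluation summarise(Model<Label> model, Dataset<Label> testData) {
        LabelEvaluation evaluation = new LabelEvaluator().evaluate(model, testData);
        System.out.println("Accuracy          = " + evaluation.accuracy());
        System.out.println("Macro averaged F1 = " + evaluation.macroAveragedF1());
        System.out.println("Balanced error    = " + evaluation.balancedErrorRate());
        return evaluation;
    }
}
```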
-
Method Details
-
accuracy
double accuracy()
The overall accuracy of the evaluation.
- Returns:
- The accuracy.
-
accuracy
double accuracy(Label label)
The per label accuracy of the evaluation.
- Parameters:
- label - The target label.
- Returns:
- The per label accuracy.
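A minimal sketch contrasting the two overloads; evaluation is assumed to come from a LabelEvaluator as in the snippet above, and the label name "positive" is hypothetical.

```java
// Continuing the sketch above.
Label positive = new Label("positive"); // hypothetical label name
double overall  = evaluation.accuracy();         // fraction correct over all predictions
double perLabel = evaluation.accuracy(positive); // fraction correct on examples whose true label is "positive"
```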
-
AUCROC
double AUCROC(Label label)
Area under the ROC curve.
- Parameters:
- label - The target label.
- Returns:
- The AUC ROC score.
- Implementation Requirements:
- Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
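A sketch of guarding against the documented failure mode; the label name is hypothetical and evaluation comes from the earlier snippet.

```java
try {
    double auc = evaluation.AUCROC(new Label("positive")); // hypothetical label
    System.out.println("AUCROC(positive) = " + auc);
} catch (UnsupportedOperationException e) {
    // Thrown when the underlying model does not generate probabilities.
    System.out.println("Model is not probabilistic; AUCROC is unavailable.");
}
```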
-
averageAUCROC
double averageAUCROC(boolean weighted)
Area under the ROC curve averaged across labels. If weighted is false, use a macro average; if true, weight by the evaluation's observed class counts.
- Parameters:
- weighted - If true, weight by the class counts; if false, use a macro average.
- Returns:
- The average AUCROC.
- Implementation Requirements:
- Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
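A short sketch of the two averaging modes, reusing the evaluation variable from above.

```java
double macroAUC    = evaluation.averageAUCROC(false); // every label contributes equally
double weightedAUC = evaluation.averageAUCROC(true);  // labels weighted by observed class counts
```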
-
averagedPrecision
double averagedPrecision(Label label)
Summarises a Precision-Recall Curve by taking the weighted mean of the precisions at a given threshold, where the weight is the recall achieved at that threshold.
- Parameters:
- label - The target label.
- Returns:
- The averaged precision for that label.
- Implementation Requirements:
- Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the precision-recall curve.
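A sketch, again with a hypothetical label name; the same UnsupportedOperationException guard shown for AUCROC applies to non-probabilistic models.

```java
// Summarises the whole precision-recall trade-off for one label as a single number.
double ap = evaluation.averagedPrecision(new Label("positive")); // hypothetical label
```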
-
precisionRecallCurve
LabelEvaluationUtil.PRCurve precisionRecallCurve(Label label)
Calculates the Precision Recall curve for a single label.
- Parameters:
- label - The target label.
- Returns:
- The precision recall curve for that label.
- Implementation Requirements:
- Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the precision-recall curve.
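A sketch of inspecting the returned curve. It assumes the result type is LabelEvaluationUtil.PRCurve exposing parallel precision, recall and thresholds arrays; check the LabelEvaluationUtil Javadoc for the exact field layout.

```java
import org.tribuo.classification.evaluation.LabelEvaluationUtil;

LabelEvaluationUtil.PRCurve curve = evaluation.precisionRecallCurve(new Label("positive")); // hypothetical label
// Walk the operating points along the curve (assumed parallel arrays).
for (int i = 0; i < curve.thresholds.length; i++) {
    System.out.printf("threshold=%.3f precision=%.3f recall=%.3f%n",
            curve.thresholds[i], curve.precision[i], curve.recall[i]);
}
```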
-
toHTML
default String toHTML()
Returns an HTML formatted String representing this evaluation.
Uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When the labels are subset, the total row covers only the selected subset, not all the predictions; the accuracy and averaged metrics, however, cover all the predictions.
- Returns:
- An HTML formatted String.
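A sketch of saving the rendered HTML; the file name is illustrative.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

String html = evaluation.toHTML();
try {
    // Write the fragment out for inclusion in a report page.
    Files.write(Paths.get("evaluation.html"), html.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
    e.printStackTrace();
}
```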
-
toFormattedString
static String toFormattedString(LabelEvaluation evaluation)
This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal. It can be used as an implementation of the EvaluationRenderer functional interface.
Uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When the labels are subset, the total row covers only the selected subset, not all the predictions; the accuracy and averaged metrics, however, cover all the predictions.
- Parameters:
- evaluation - The evaluation to format.
- Returns:
- Formatted output showing the main results of the evaluation.
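A sketch of both uses described above: direct invocation, and as a method reference matching the EvaluationRenderer functional interface (the type parameters shown are an assumption about that interface's shape).

```java
import org.tribuo.evaluation.EvaluationRenderer;

// Direct use: print a terminal-friendly summary.
System.out.println(LabelEvaluation.toFormattedString(evaluation));

// As a renderer, via method reference (assumed type parameters: output type, evaluation type).
EvaluationRenderer<Label, LabelEvaluation> renderer = LabelEvaluation::toFormattedString;
```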
-
toHTML
static String toHTML(LabelEvaluation evaluation)
This method produces an HTML formatted String output, with appropriate tabs and newlines, suitable for integration into a webpage. It can be used as an implementation of the EvaluationRenderer functional interface.
Uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When the labels are subset, the total row covers only the selected subset, not all the predictions; the accuracy and averaged metrics, however, cover all the predictions.
- Parameters:
- evaluation - The evaluation to format.
- Returns:
- Formatted HTML output showing the main results of the evaluation.
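A sketch of embedding the output in a page; the surrounding markup is illustrative and not produced by Tribuo.

```java
String page = "<html><body>"
        + LabelEvaluation.toHTML(evaluation)
        + "</body></html>";
```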
-