Interface LabelEvaluation

All Superinterfaces:
ClassifierEvaluation<Label>, Evaluation<Label>, com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>

public interface LabelEvaluation extends ClassifierEvaluation<Label>
Adds multi-class classification-specific metrics to ClassifierEvaluation.
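
A LabelEvaluation is normally produced by an evaluator rather than constructed directly. The sketch below shows one way to obtain an instance, assuming a trained Model<Label> and a held-out test Dataset<Label> (both placeholders here); it uses LabelEvaluator, the evaluator for Label outputs. Later sketches on this page assume these imports.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.classification.Label;
    import org.tribuo.classification.evaluation.LabelEvaluation;
    import org.tribuo.classification.evaluation.LabelEvaluator;

    public final class EvaluationSketch {
        // Scores the model's predictions on the test set and collects the metrics.
        static LabelEvaluation evaluate(Model<Label> model, Dataset<Label> test) {
            return new LabelEvaluator().evaluate(model, test);
        }
    }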
  • Method Details

    • accuracy

      double accuracy()
      The overall accuracy of the evaluation.
      Returns:
      The accuracy.
    • accuracy

      double accuracy(Label label)
      The per-label accuracy of the evaluation.
      Parameters:
      label - The target label.
      Returns:
      The per-label accuracy.
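
      A minimal sketch of both overloads (the label name "SPAM" is a hypothetical example, and evaluation is assumed to have been obtained as sketched above):

          static void printAccuracy(LabelEvaluation evaluation) {
              // Accuracy over every prediction in the evaluation.
              System.out.printf("overall accuracy = %.4f%n", evaluation.accuracy());
              // Accuracy for the single target label.
              System.out.printf("SPAM accuracy = %.4f%n", evaluation.accuracy(new Label("SPAM")));
          }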
    • AUCROC

      double AUCROC(Label label)
      Area under the ROC curve.
      Parameters:
      label - target label
      Returns:
      AUC ROC score
      Implementation Requirements:
      Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
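
      A small sketch guarding the call for models that do not produce probabilities (the helper name is hypothetical):

          static double aucOrNaN(LabelEvaluation evaluation, Label label) {
              try {
                  return evaluation.AUCROC(label);
              } catch (UnsupportedOperationException e) {
                  // The model did not generate probabilities, so no ROC curve exists.
                  return Double.NaN;
              }
          }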
    • averageAUCROC

      double averageAUCROC(boolean weighted)
      Area under the ROC curve averaged across labels.

      If weighted is false a macro average is used; if true, the average is weighted by the evaluation's observed class counts.

      Parameters:
      weighted - If true, weight by the class counts; if false, use a macro average.
      Returns:
      The average AUCROC.
      Implementation Requirements:
      Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the ROC curve.
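
      The two averaging modes side by side (a sketch; the variable names are illustrative):

          static void printAverageAUC(LabelEvaluation evaluation) {
              double macro    = evaluation.averageAUCROC(false); // unweighted mean over labels
              double weighted = evaluation.averageAUCROC(true);  // weighted by observed class counts
              System.out.printf("macro AUC = %.4f, weighted AUC = %.4f%n", macro, weighted);
          }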
    • averagedPrecision

      double averagedPrecision(Label label)
      Summarises a Precision-Recall Curve by taking the weighted mean of the precisions at each threshold, where the weight is the increase in recall from the previous threshold, i.e. AP = Σ_n (R_n − R_{n−1}) · P_n.
      Parameters:
      label - The target label.
      Returns:
      The averaged precision for that label.
      Implementation Requirements:
      Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the precision-recall curve.
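
      As with the other probability-based metrics, the call can be guarded (a hypothetical helper):

          static double averagedPrecisionOrNaN(LabelEvaluation evaluation, Label label) {
              try {
                  return evaluation.averagedPrecision(label);
              } catch (UnsupportedOperationException e) {
                  return Double.NaN; // the model produced no probabilities
              }
          }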
    • precisionRecallCurve

      LabelEvaluationUtil.PRCurve precisionRecallCurve(Label label)
      Calculates the Precision Recall curve for a single label.
      Parameters:
      label - The target label.
      Returns:
      The precision recall curve for that label.
      Implementation Requirements:
      Implementations of this interface are expected to throw UnsupportedOperationException if the model corresponding to this evaluation does not generate probabilities, which are required to compute the precision-recall curve.
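
      A sketch that walks the returned curve. The parallel arrays precision, recall and thresholds on LabelEvaluationUtil.PRCurve are assumed here based on the utility class's public fields, and LabelEvaluationUtil is assumed to be imported from the same package:

          static void printCurve(LabelEvaluation evaluation, Label label) {
              LabelEvaluationUtil.PRCurve curve = evaluation.precisionRecallCurve(label);
              // One (precision, recall) operating point per decision threshold.
              for (int i = 0; i < curve.thresholds.length; i++) {
                  System.out.printf("threshold=%.3f precision=%.3f recall=%.3f%n",
                          curve.thresholds[i], curve.precision[i], curve.recall[i]);
              }
          }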
    • toHTML

      default String toHTML()
      Returns an HTML-formatted String representing this evaluation.

      Uses the label order of the confusion matrix, which can be used to display a subset of the per-label metrics. When a subset is displayed, the total row covers only the selected labels rather than all the predictions; the accuracy and averaged metrics still cover all the predictions.

      Returns:
      An HTML-formatted String.
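
      A sketch writing the HTML report to a file using java.nio.file (Files.writeString requires Java 11+; the helper name is hypothetical):

          static void writeHtmlReport(LabelEvaluation evaluation, java.nio.file.Path output) throws java.io.IOException {
              java.nio.file.Files.writeString(output, evaluation.toHTML());
          }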
    • toFormattedString

      static String toFormattedString(LabelEvaluation evaluation)
      This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal. It can be used as an implementation of the EvaluationRenderer functional interface.

      Uses the label order of the confusion matrix, which can be used to display a subset of the per-label metrics. When a subset is displayed, the total row covers only the selected labels rather than all the predictions; the accuracy and averaged metrics still cover all the predictions.

      Parameters:
      evaluation - The evaluation to format.
      Returns:
      Formatted output showing the main results of the evaluation.
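
      A sketch using the method reference as an EvaluationRenderer (EvaluationRenderer is assumed to come from org.tribuo.evaluation, and the name of its render method is an assumption):

          static void printReport(LabelEvaluation evaluation) {
              EvaluationRenderer<Label, LabelEvaluation> renderer = LabelEvaluation::toFormattedString;
              System.out.println(renderer.render(evaluation));
          }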
    • toHTML

      static String toHTML(LabelEvaluation evaluation)
      This method produces an HTML-formatted String output, with appropriate tabs and newlines, suitable for integration into a webpage. It can be used as an implementation of the EvaluationRenderer functional interface.

      Uses the label order of the confusion matrix, which can be used to display a subset of the per-label metrics. When a subset is displayed, the total row covers only the selected labels rather than all the predictions; the accuracy and averaged metrics still cover all the predictions.

      Parameters:
      evaluation - The evaluation to format.
      Returns:
      Formatted HTML output showing the main results of the evaluation.
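
      A small sketch dispatching between the two static renderers depending on the output target (the helper is hypothetical):

          static String render(LabelEvaluation evaluation, boolean asHtml) {
              return asHtml ? LabelEvaluation.toHTML(evaluation)
                            : LabelEvaluation.toFormattedString(evaluation);
          }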