Interface ClassifierEvaluation<T extends Classifiable<T>>

Type Parameters:
T - The output type.
All Superinterfaces:
Evaluation<T>, com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>
All Known Subinterfaces:
LabelEvaluation, MultiLabelEvaluation
All Known Implementing Classes:
MultiLabelEvaluationImpl

public interface ClassifierEvaluation<T extends Classifiable<T>> extends Evaluation<T>
Defines methods that calculate classification performance, used for both multi-class and multi-label classification.
  • Method Summary

    Modifier and Type
    Method
    Description
    double
    balancedErrorRate()
    Returns the balanced error rate, i.e., the mean of the per-label error rates (one minus the macro averaged recall).
    double
    confusion(T predicted, T truth)
    Returns the number of times label truth was predicted as label predicted.
    double
    f1(T label)
    Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
    double
    fn()
    Returns the micro averaged number of false negatives.
    double
    fn(T label)
    Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
    double
    fp()
    Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
    double
    fp(T label)
    Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
    ConfusionMatrix<T>
    getConfusionMatrix()
    Returns the underlying confusion matrix.
    double
    macroAveragedF1()
    Returns the macro averaged F_1 across all the labels.
    double
    macroAveragedPrecision()
    Returns the macro averaged precision.
    double
    macroAveragedRecall()
    Returns the macro averaged recall.
    double
    macroFN()
    Returns the macro averaged number of false negatives.
    double
    macroFP()
    Returns the macro averaged number of false positives, averaged across the labels.
    double
    macroTN()
    Returns the macro averaged number of true negatives.
    double
    macroTP()
    Returns the macro averaged number of true positives, averaged across the labels.
    double
    microAveragedF1()
    Returns the micro averaged F_1 across all labels.
    double
    microAveragedPrecision()
    Returns the micro averaged precision.
    double
    microAveragedRecall()
    Returns the micro averaged recall.
    double
    precision(T label)
    Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
    double
    recall(T label)
    Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
    double
    tn()
    Returns the total number of true negatives.
    double
    tn(T label)
    Returns the number of true negatives for that label, i.e., the number of times it was neither predicted nor the true label.
    double
    tp()
    Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
    double
    tp(T label)
    Returns the number of true positives, i.e., the number of times the label was correctly predicted.

    Methods inherited from interface org.tribuo.evaluation.Evaluation

    asMap, get, getPredictions

    Methods inherited from interface com.oracle.labs.mlrg.olcut.provenance.Provenancable

    getProvenance
  • Method Details

    • confusion

      double confusion(T predicted, T truth)
      Returns the number of times label truth was predicted as label predicted.
      Parameters:
      predicted - The predicted label.
      truth - The true label.
      Returns:
      The number of times the predicted label was returned for the true label.
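As a sketch of the semantics, the count that confusion(predicted, truth) exposes can be reproduced from paired prediction/ground-truth lists. This is plain Java over hypothetical string labels, not the Tribuo implementation:

```java
import java.util.List;

public class ConfusionCount {
    // Counts how often `truth` was predicted as `predicted`,
    // mirroring the semantics of confusion(predicted, truth).
    static long confusion(List<String> predictions, List<String> truths,
                          String predicted, String truth) {
        long count = 0;
        for (int i = 0; i < predictions.size(); i++) {
            if (truths.get(i).equals(truth) && predictions.get(i).equals(predicted)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> preds  = List.of("cat", "dog", "cat", "cat");
        List<String> truths = List.of("cat", "dog", "dog", "cat");
        // "dog" was mis-predicted as "cat" exactly once (index 2).
        System.out.println(confusion(preds, truths, "cat", "dog")); // prints 1
    }
}
```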
    • tp

      double tp(T label)
      Returns the number of true positives, i.e., the number of times the label was correctly predicted.
      Parameters:
      label - The label to calculate the true positives for.
      Returns:
      The number of true positives for that label.
    • tp

      double tp()
      Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
      Returns:
      The micro averaged number of true positives.
    • macroTP

      double macroTP()
      Returns the macro averaged number of true positives, averaged across the labels.
      Returns:
      The macro averaged number of true positives.
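The difference between the micro variant tp() and the macro variant macroTP() can be sketched with hypothetical per-label counts: micro averaging pools the counts into a total, while macro averaging takes the unweighted mean across labels.

```java
public class TruePositiveAverages {
    // Micro: sum the per-label true positives -> the total count.
    static double microTP(double[] tpPerLabel) {
        double sum = 0;
        for (double tp : tpPerLabel) sum += tp;
        return sum;
    }

    // Macro: unweighted mean of the per-label true positives.
    static double macroTP(double[] tpPerLabel) {
        return microTP(tpPerLabel) / tpPerLabel.length;
    }

    public static void main(String[] args) {
        double[] tpPerLabel = {40, 10, 4}; // hypothetical counts for three labels
        System.out.println(microTP(tpPerLabel)); // prints 54.0
        System.out.println(macroTP(tpPerLabel)); // prints 18.0
    }
}
```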
    • fp

      double fp(T label)
      Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
      Parameters:
      label - The label to calculate the false positives for.
      Returns:
      The number of false positives for that label.
    • fp

      double fp()
      Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
      Returns:
      The micro averaged number of false positives.
    • macroFP

      double macroFP()
      Returns the macro averaged number of false positives, averaged across the labels.
      Returns:
      The macro averaged number of false positives.
    • tn

      double tn(T label)
      Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted, and was not the true label.
      Parameters:
      label - The label to use.
      Returns:
      The number of true negatives for that label.
    • tn

      double tn()
      Returns the total number of true negatives. This is not very informative in multi-class problems, as every example contributes a true negative for each label other than its true and predicted labels.
      Returns:
      The number of true negatives.
    • macroTN

      double macroTN()
      Returns the macro averaged number of true negatives.
      Returns:
      The macro averaged number of true negatives.
    • fn

      double fn(T label)
      Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
      Parameters:
      label - The true label.
      Returns:
      The number of false negatives.
    • fn

      double fn()
      Returns the micro averaged number of false negatives.
      Returns:
      The micro averaged number of false negatives.
    • macroFN

      double macroFN()
      Returns the macro averaged number of false negatives.
      Returns:
      The macro averaged number of false negatives.
    • precision

      double precision(T label)
      Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
      Parameters:
      label - The label.
      Returns:
      The precision.
    • microAveragedPrecision

      double microAveragedPrecision()
      Returns the micro averaged precision.
      Returns:
      The micro averaged precision.
    • macroAveragedPrecision

      double macroAveragedPrecision()
      Returns the macro averaged precision.
      Returns:
      The macro averaged precision.
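The micro/macro distinction matters most under class imbalance: micro averaging pools the true and false positive counts across labels before dividing, so frequent labels dominate, while macro averaging computes each label's precision first and weights every label equally. A sketch with hypothetical counts (the same pattern applies to the recall variants below):

```java
public class PrecisionAverages {
    // Micro: pool true/false positives across labels, then divide once.
    static double microPrecision(double[] tp, double[] fp) {
        double tpSum = 0, fpSum = 0;
        for (int i = 0; i < tp.length; i++) { tpSum += tp[i]; fpSum += fp[i]; }
        return tpSum / (tpSum + fpSum);
    }

    // Macro: per-label precision first, then the unweighted mean.
    static double macroPrecision(double[] tp, double[] fp) {
        double sum = 0;
        for (int i = 0; i < tp.length; i++) sum += tp[i] / (tp[i] + fp[i]);
        return sum / tp.length;
    }

    public static void main(String[] args) {
        // Hypothetical counts: a frequent label with high precision,
        // a rare label with low precision.
        double[] tp = {90, 5};
        double[] fp = {10, 45};
        System.out.println(microPrecision(tp, fp)); // 95/150, ~0.633
        System.out.println(macroPrecision(tp, fp)); // (0.9 + 0.1)/2 = 0.5
    }
}
```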
    • recall

      double recall(T label)
      Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
      Parameters:
      label - The label.
      Returns:
      The recall.
    • microAveragedRecall

      double microAveragedRecall()
      Returns the micro averaged recall.
      Returns:
      The micro averaged recall.
    • macroAveragedRecall

      double macroAveragedRecall()
      Returns the macro averaged recall.
      Returns:
      The macro averaged recall.
    • f1

      double f1(T label)
      Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
      Parameters:
      label - The label.
      Returns:
      The F_1 score.
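The three per-label metrics fit together arithmetically: precision and recall are ratios over the confusion counts, and F_1 is their harmonic mean, which simplifies to 2tp / (2tp + fp + fn). A self-contained sketch with hypothetical counts:

```java
public class PrecisionRecallF1 {
    static double precision(double tp, double fp) { return tp / (tp + fp); }
    static double recall(double tp, double fn)    { return tp / (tp + fn); }

    // F_1: the harmonic mean of precision and recall,
    // equivalently 2*tp / (2*tp + fp + fn).
    static double f1(double tp, double fp, double fn) {
        double p = precision(tp, fp);
        double r = recall(tp, fn);
        return 2 * p * r / (p + r);
    }

    public static void main(String[] args) {
        // Hypothetical counts for one label.
        double tp = 8, fp = 2, fn = 4;
        System.out.println(precision(tp, fp)); // 8/10  = 0.8
        System.out.println(recall(tp, fn));    // 8/12, ~0.667
        System.out.println(f1(tp, fp, fn));    // 16/22, ~0.727
    }
}
```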
    • microAveragedF1

      double microAveragedF1()
      Returns the micro averaged F_1 across all labels.
      Returns:
      The F_1 score.
    • macroAveragedF1

      double macroAveragedF1()
      Returns the macro averaged F_1 across all the labels.
      Returns:
      The F_1 score.
    • balancedErrorRate

      double balancedErrorRate()
      Returns the balanced error rate, i.e., the mean of the per-label error rates, which equals one minus the macro averaged recall.
      Returns:
      The balanced error rate.
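Read as one minus the macro averaged recall, the balanced error rate weights every label equally regardless of how many examples it has. A minimal sketch over hypothetical per-label recalls:

```java
public class BalancedError {
    // Balanced error rate: one minus the mean of the per-label recalls
    // (i.e., one minus the macro averaged recall).
    static double balancedErrorRate(double[] perLabelRecall) {
        double sum = 0;
        for (double r : perLabelRecall) sum += r;
        return 1.0 - sum / perLabelRecall.length;
    }

    public static void main(String[] args) {
        double[] recalls = {0.9, 0.5, 0.7}; // hypothetical per-label recalls
        System.out.println(balancedErrorRate(recalls)); // ~0.3
    }
}
```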
    • getConfusionMatrix

      ConfusionMatrix<T> getConfusionMatrix()
      Returns the underlying confusion matrix.
      Returns:
      The confusion matrix.