Class MultiLabelEvaluationImpl

java.lang.Object
org.tribuo.multilabel.evaluation.MultiLabelEvaluationImpl
All Implemented Interfaces:
com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>, ClassifierEvaluation<MultiLabel>, Evaluation<MultiLabel>, MultiLabelEvaluation

public final class MultiLabelEvaluationImpl extends Object implements MultiLabelEvaluation
The implementation of a MultiLabelEvaluation using the default metrics.

The classification metrics consider labels independently.
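
Since the metrics treat each label independently, the per-label confusion counts can be sketched as below. This is a plain-Java illustration of the definitions, not Tribuo's internal implementation; the label sets and helper class are hypothetical.

```java
import java.util.List;
import java.util.Set;

public class PerLabelCounts {
    // True positives for one label: the label is in both the true and
    // predicted sets, counted independently of any other labels.
    public static int tp(String label, List<Set<String>> truth, List<Set<String>> predicted) {
        int count = 0;
        for (int i = 0; i < truth.size(); i++) {
            if (truth.get(i).contains(label) && predicted.get(i).contains(label)) {
                count++;
            }
        }
        return count;
    }

    // False positives for one label: predicted but absent from the true set.
    public static int fp(String label, List<Set<String>> truth, List<Set<String>> predicted) {
        int count = 0;
        for (int i = 0; i < truth.size(); i++) {
            if (!truth.get(i).contains(label) && predicted.get(i).contains(label)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Set<String>> truth = List.of(Set.of("a", "b"), Set.of("a"), Set.of("b"));
        List<Set<String>> predicted = List.of(Set.of("a"), Set.of("a", "b"), Set.of("b"));
        System.out.println(tp("a", truth, predicted)); // "a" is in truth and prediction twice -> 2
        System.out.println(fp("b", truth, predicted)); // "b" predicted once when absent -> 1
    }
}
```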

  • Method Summary

    Modifier and Type
    Method
    Description
    Map<MetricID<MultiLabel>,Double>
    asMap()
    Get a map of all the metrics stored in this evaluation.
    double
    balancedErrorRate()
    Returns the balanced error rate, i.e., the mean of the per label recalls.
    double
    confusion(MultiLabel predicted, MultiLabel truth)
    Returns the number of times label truth was predicted as label predicted.
    double
    f1(MultiLabel label)
    Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
    double
    fn()
    Returns the micro averaged number of false negatives.
    double
    fn(MultiLabel label)
    Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
    double
    fp()
    Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
    double
    fp(MultiLabel label)
    Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
    double
    get(MetricID<MultiLabel> key)
    Gets the value associated with the specific metric.
    ConfusionMatrix<MultiLabel>
    getConfusionMatrix()
    Returns the underlying confusion matrix.
    List<Prediction<MultiLabel>>
    getPredictions()
    Gets the predictions stored in this evaluation.
    EvaluationProvenance
    getProvenance()
     
    double
    jaccardScore()
    The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
    double
    macroAveragedF1()
    Returns the macro averaged F_1 across all the labels.
    double
    macroAveragedPrecision()
    Returns the macro averaged precision.
    double
    macroAveragedRecall()
    Returns the macro averaged recall.
    double
    macroFN()
    Returns the macro averaged number of false negatives.
    double
    macroFP()
    Returns the macro averaged number of false positives, averaged across the labels.
    double
    macroTN()
    Returns the macro averaged number of true negatives.
    double
    macroTP()
    Returns the macro averaged number of true positives, averaged across the labels.
    double
    microAveragedF1()
    Returns the micro averaged F_1 across all labels.
    double
    microAveragedPrecision()
    Returns the micro averaged precision.
    double
    microAveragedRecall()
    Returns the micro averaged recall.
    double
    precision(MultiLabel label)
    Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
    double
    recall(MultiLabel label)
    Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
    double
    tn()
    Returns the total number of true negatives.
    double
    tn(MultiLabel label)
    Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted, and was not the true label.
    String
    toString()
    This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal.
    double
    tp()
    Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
    double
    tp(MultiLabel label)
    Returns the number of true positives, i.e., the number of times the label was correctly predicted.

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
  • Method Details

    • getPredictions

      public List<Prediction<MultiLabel>> getPredictions()
      Description copied from interface: Evaluation
      Gets the predictions stored in this evaluation.
      Specified by:
      getPredictions in interface Evaluation<MultiLabel>
      Returns:
      The predictions.
    • balancedErrorRate

      public double balancedErrorRate()
      Description copied from interface: ClassifierEvaluation
      Returns the balanced error rate, i.e., the mean of the per label recalls.
      Specified by:
      balancedErrorRate in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The balanced error rate.
    • getConfusionMatrix

      public ConfusionMatrix<MultiLabel> getConfusionMatrix()
      Description copied from interface: ClassifierEvaluation
      Returns the underlying confusion matrix.
      Specified by:
      getConfusionMatrix in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The confusion matrix.
    • confusion

      public double confusion(MultiLabel predicted, MultiLabel truth)
      Description copied from interface: ClassifierEvaluation
      Returns the number of times label truth was predicted as label predicted.
      Specified by:
      confusion in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      predicted - The predicted label.
      truth - The true label.
      Returns:
      The number of times the predicted label was returned for the true label.
    • tp

      public double tp(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the number of true positives, i.e., the number of times the label was correctly predicted.
      Specified by:
      tp in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The label to calculate.
      Returns:
      The number of true positives for that label.
    • tp

      public double tp()
      Description copied from interface: ClassifierEvaluation
      Returns the micro average of the number of true positives across all the labels, i.e., the total number of true positives.
      Specified by:
      tp in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The micro averaged number of true positives.
    • macroTP

      public double macroTP()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged number of true positives, averaged across the labels.
      Specified by:
      macroTP in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged number of true positives.
    • fp

      public double fp(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the number of false positives, i.e., the number of times this label was predicted but it was not the true label.
      Specified by:
      fp in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - the label to calculate.
      Returns:
      The number of false positives for that label.
    • fp

      public double fp()
      Description copied from interface: ClassifierEvaluation
      Returns the micro average of the number of false positives across all the labels, i.e., the total number of false positives.
      Specified by:
      fp in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The micro averaged number of false positives.
    • macroFP

      public double macroFP()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged number of false positives, averaged across the labels.
      Specified by:
      macroFP in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged number of false positives.
    • tn

      public double tn(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the number of true negatives for that label, i.e., the number of times it wasn't predicted, and was not the true label.
      Specified by:
      tn in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The label to use.
      Returns:
      the number of true negatives.
    • tn

      public double tn()
      Description copied from interface: ClassifierEvaluation
      Returns the total number of true negatives. This isn't very useful in multiclass problems.
      Specified by:
      tn in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The number of true negatives.
    • macroTN

      public double macroTN()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged number of true negatives.
      Specified by:
      macroTN in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged number of true negatives.
    • fn

      public double fn(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the number of false negatives, i.e., the number of times the true label was incorrectly predicted as another label.
      Specified by:
      fn in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The true label.
      Returns:
      The number of false negatives.
    • fn

      public double fn()
      Description copied from interface: ClassifierEvaluation
      Returns the micro averaged number of false negatives.
      Specified by:
      fn in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The micro averaged number of false negatives.
    • macroFN

      public double macroFN()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged number of false negatives.
      Specified by:
      macroFN in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged number of false negatives.
    • precision

      public double precision(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the precision of this label, i.e., the number of true positives divided by the number of true positives plus false positives.
      Specified by:
      precision in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The label.
      Returns:
      The precision.
    • microAveragedPrecision

      public double microAveragedPrecision()
      Description copied from interface: ClassifierEvaluation
      Returns the micro averaged precision.
      Specified by:
      microAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The micro averaged precision.
    • macroAveragedPrecision

      public double macroAveragedPrecision()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged precision.
      Specified by:
      macroAveragedPrecision in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged precision.
    • recall

      public double recall(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the recall of this label, i.e., the number of true positives divided by the number of true positives plus false negatives.
      Specified by:
      recall in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The label.
      Returns:
      The recall.
    • microAveragedRecall

      public double microAveragedRecall()
      Description copied from interface: ClassifierEvaluation
      Returns the micro averaged recall.
      Specified by:
      microAveragedRecall in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The micro averaged recall.
    • macroAveragedRecall

      public double macroAveragedRecall()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged recall.
      Specified by:
      macroAveragedRecall in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The macro averaged recall.
    • f1

      public double f1(MultiLabel label)
      Description copied from interface: ClassifierEvaluation
      Returns the F_1 score, i.e., the harmonic mean of the precision and recall.
      Specified by:
      f1 in interface ClassifierEvaluation<MultiLabel>
      Parameters:
      label - The label.
      Returns:
      The F_1 score.
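
      The precision, recall, and F_1 definitions above follow directly from the per-label counts. A minimal arithmetic sketch of those formulas (the count values are hypothetical; these are not Tribuo API calls):

```java
public class F1Sketch {
    // precision = tp / (tp + fp)
    public static double precision(double tp, double fp) {
        return tp / (tp + fp);
    }

    // recall = tp / (tp + fn)
    public static double recall(double tp, double fn) {
        return tp / (tp + fn);
    }

    // F_1 = harmonic mean of precision and recall = 2PR / (P + R),
    // which simplifies to 2tp / (2tp + fp + fn).
    public static double f1(double tp, double fp, double fn) {
        double p = precision(tp, fp);
        double r = recall(tp, fn);
        return 2.0 * p * r / (p + r);
    }

    public static void main(String[] args) {
        // tp = 8, fp = 2, fn = 4: precision 0.8, recall 8/12, F_1 = 16/22
        System.out.println(f1(8, 2, 4));
    }
}
```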
    • microAveragedF1

      public double microAveragedF1()
      Description copied from interface: ClassifierEvaluation
      Returns the micro averaged F_1 across all labels.
      Specified by:
      microAveragedF1 in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The F_1 score.
    • macroAveragedF1

      public double macroAveragedF1()
      Description copied from interface: ClassifierEvaluation
      Returns the macro averaged F_1 across all the labels.
      Specified by:
      macroAveragedF1 in interface ClassifierEvaluation<MultiLabel>
      Returns:
      The F_1 score.
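
      Micro averaging pools the tp/fp/fn counts across all labels before computing the metric once, while macro averaging computes the metric per label and then averages the results, so rare labels weigh more heavily in the macro score. A sketch with two hypothetical labels' counts (not the Tribuo implementation):

```java
public class Averaging {
    // Per-label F_1 from counts: 2tp / (2tp + fp + fn).
    static double f1(double tp, double fp, double fn) {
        return 2.0 * tp / (2.0 * tp + fp + fn);
    }

    // Macro: average the per-label F_1 scores.
    public static double macroF1(double[][] counts) {
        double sum = 0.0;
        for (double[] c : counts) {
            sum += f1(c[0], c[1], c[2]);
        }
        return sum / counts.length;
    }

    // Micro: sum the counts across labels, then compute F_1 once.
    public static double microF1(double[][] counts) {
        double tp = 0, fp = 0, fn = 0;
        for (double[] c : counts) {
            tp += c[0];
            fp += c[1];
            fn += c[2];
        }
        return f1(tp, fp, fn);
    }

    public static void main(String[] args) {
        // {tp, fp, fn} per label: a common label and a rare, poorly predicted one.
        double[][] counts = {{90, 10, 10}, {1, 5, 5}};
        System.out.println(macroF1(counts)); // the rare label drags the macro average down
        System.out.println(microF1(counts)); // dominated by the common label's counts
    }
}
```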
    • jaccardScore

      public double jaccardScore()
      Description copied from interface: MultiLabelEvaluation
      The average across the predictions of the intersection of the true and predicted labels divided by the union of the true and predicted labels.
      Specified by:
      jaccardScore in interface MultiLabelEvaluation
      Returns:
      The Jaccard score.
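
      The per-prediction computation described above (intersection size over union size, averaged across predictions) can be sketched in plain Java on hypothetical label sets; this illustrates the definition, not Tribuo's implementation:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class JaccardSketch {
    // |truth ∩ predicted| / |truth ∪ predicted| for a single prediction.
    static double jaccard(Set<String> truth, Set<String> predicted) {
        Set<String> intersection = new HashSet<>(truth);
        intersection.retainAll(predicted);
        Set<String> union = new HashSet<>(truth);
        union.addAll(predicted);
        return (double) intersection.size() / union.size();
    }

    // Average of the per-prediction Jaccard scores.
    public static double jaccardScore(List<Set<String>> truth, List<Set<String>> predicted) {
        double sum = 0.0;
        for (int i = 0; i < truth.size(); i++) {
            sum += jaccard(truth.get(i), predicted.get(i));
        }
        return sum / truth.size();
    }

    public static void main(String[] args) {
        List<Set<String>> truth = List.of(Set.of("a", "b"), Set.of("a"));
        List<Set<String>> predicted = List.of(Set.of("a"), Set.of("a", "b"));
        System.out.println(jaccardScore(truth, predicted)); // (1/2 + 1/2) / 2 = 0.5
    }
}
```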
    • asMap

      public Map<MetricID<MultiLabel>,Double> asMap()
      Description copied from interface: Evaluation
      Get a map of all the metrics stored in this evaluation. The keys are metric IDs and the values are their corresponding computed results.
      Specified by:
      asMap in interface Evaluation<MultiLabel>
      Returns:
      a map of all stored results
    • getProvenance

      public EvaluationProvenance getProvenance()
      Specified by:
      getProvenance in interface com.oracle.labs.mlrg.olcut.provenance.Provenancable<EvaluationProvenance>
    • toString

      public String toString()
      This method produces a nicely formatted String output, with appropriate tabs and newlines, suitable for display on a terminal.

      Uses the label order of the confusion matrix, which can be used to display a subset of the per label metrics. When a subset is selected, the total row covers only that subset, not all the predictions; the accuracy and averaged metrics, however, still cover all the predictions.

      Overrides:
      toString in class Object
      Returns:
      Formatted output showing the main results of the evaluation.
    • get

      public double get(MetricID<MultiLabel> key)
      Description copied from interface: Evaluation
      Gets the value associated with the specific metric. Throws IllegalArgumentException if the metric is unknown.
      Specified by:
      get in interface Evaluation<MultiLabel>
      Parameters:
      key - The metric to lookup.
      Returns:
      The value for that metric.