Uses of Interface org.tribuo.evaluation.metrics.EvaluationMetric

Packages that use EvaluationMetric

Package                                 Description
org.tribuo.anomaly.evaluation           Evaluation classes for anomaly detection.
org.tribuo.classification.evaluation    Evaluation classes for multi-class classification.
org.tribuo.clustering.evaluation        Evaluation classes for clustering.
org.tribuo.evaluation                   Evaluation base classes, along with code for train/test splits and cross validation.
org.tribuo.multilabel.evaluation        Evaluation classes for multi-label classification using MultiLabel.
org.tribuo.regression.evaluation        Evaluation classes for single or multi-dimensional regression.
org.tribuo.sequence                     Provides core classes for working with sequences of Examples.
Uses of EvaluationMetric in org.tribuo.anomaly.evaluation

Classes in org.tribuo.anomaly.evaluation that implement EvaluationMetric:
- class AnomalyMetric: A metric for evaluating anomaly detection problems.
Uses of EvaluationMetric in org.tribuo.classification.evaluation

Classes in org.tribuo.classification.evaluation that implement EvaluationMetric:
- class LabelMetric: A metric for evaluating multi-class classification problems.
Uses of EvaluationMetric in org.tribuo.clustering.evaluation

Classes in org.tribuo.clustering.evaluation that implement EvaluationMetric:
- class ClusteringMetric: A metric for evaluating clustering problems.
Uses of EvaluationMetric in org.tribuo.evaluation

Classes in org.tribuo.evaluation with type parameters of type EvaluationMetric:
- class AbstractEvaluator<T extends Output<T>, C extends MetricContext<T>, E extends Evaluation<T>, M extends EvaluationMetric<T,C>>: Base class for evaluators.

Methods in org.tribuo.evaluation with parameters of type EvaluationMetric:
- static <T extends Output<T>, C extends MetricContext<T>> com.oracle.labs.mlrg.olcut.util.Pair<Integer,Double> EvaluationAggregator.argmax(EvaluationMetric<T,C> metric, List<? extends Model<T>> models, Dataset<T> dataset): Calculates the argmax of a metric across the supplied models (i.e., the index of the model which performed best).
- static <T extends Output<T>, C extends MetricContext<T>> com.oracle.labs.mlrg.olcut.util.Pair<Integer,Double> EvaluationAggregator.argmax(EvaluationMetric<T,C> metric, Model<T> model, List<? extends Dataset<T>> datasets): Calculates the argmax of a metric across the supplied datasets.
- static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats EvaluationAggregator.summarize(EvaluationMetric<T,C> metric, List<? extends Model<T>> models, Dataset<T> dataset): Summarizes performance w.r.t. the metric across several models on a single dataset.
- static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats EvaluationAggregator.summarize(EvaluationMetric<T,C> metric, Model<T> model, List<? extends Dataset<T>> datasets): Summarizes a model's performance w.r.t. the metric across several datasets.

Method parameters in org.tribuo.evaluation with type arguments of type EvaluationMetric:
- AbstractEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics): Computes each metric given the context.
- static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats EvaluationAggregator.summarize(List<? extends EvaluationMetric<T,C>> metrics, Model<T> model, List<Prediction<T>> predictions): Summarizes model performance on the given predictions across several metrics.
- static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats EvaluationAggregator.summarize(List<? extends EvaluationMetric<T,C>> metrics, Model<T> model, Dataset<T> dataset): Summarizes model performance on a dataset across several metrics.
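The EvaluationAggregator.argmax overloads reduce a list of per-model (or per-dataset) metric values to the index and value of the best one. Below is a minimal sketch of that aggregation step in plain Java, with Tribuo's EvaluationMetric/Model/Dataset machinery replaced by a precomputed List<Double> of scores and OLCUT's Pair replaced by Map.Entry; all names here are illustrative stand-ins, not Tribuo's API.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;

public class ArgmaxSketch {
    // Illustrative stand-in for argmax aggregation: given one score per model
    // (as Tribuo would get by applying a metric to each model's evaluation on
    // a fixed dataset), return the index and value of the best score.
    static Map.Entry<Integer, Double> argmax(List<Double> scores) {
        int bestIndex = 0;
        double bestScore = scores.get(0);
        for (int i = 1; i < scores.size(); i++) {
            if (scores.get(i) > bestScore) {
                bestScore = scores.get(i);
                bestIndex = i;
            }
        }
        return new SimpleEntry<>(bestIndex, bestScore);
    }

    public static void main(String[] args) {
        // Accuracy of three hypothetical models on the same test set.
        List<Double> accuracies = List.of(0.82, 0.91, 0.88);
        Map.Entry<Integer, Double> best = argmax(accuracies);
        System.out.println(best.getKey() + " " + best.getValue()); // prints: 1 0.91
    }
}
```

The two Tribuo overloads differ only in which axis is varied: one metric applied to many models on one dataset, or to one model on many datasets.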
Uses of EvaluationMetric in org.tribuo.multilabel.evaluation

Classes in org.tribuo.multilabel.evaluation that implement EvaluationMetric:
- class MultiLabelMetric: A metric for evaluating multi-label classification problems.
Uses of EvaluationMetric in org.tribuo.regression.evaluation

Classes in org.tribuo.regression.evaluation that implement EvaluationMetric:
- class RegressionMetric: An EvaluationMetric for Regressors which calculates the metric based on the true values and the predicted values.
Uses of EvaluationMetric in org.tribuo.sequence

Classes in org.tribuo.sequence with type parameters of type EvaluationMetric:
- class AbstractSequenceEvaluator<T extends Output<T>, C extends MetricContext<T>, E extends SequenceEvaluation<T>, M extends EvaluationMetric<T,C>>: Base class for sequence evaluators.

Method parameters in org.tribuo.sequence with type arguments of type EvaluationMetric:
- AbstractSequenceEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics): Computes each metric given the context.
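The computeResults methods (here and in AbstractEvaluator) apply every metric in the supplied set to one shared context. Below is a sketch of that loop in plain Java, with the metric set modeled as name-to-function pairs and the context as a small record; these are illustrative stand-ins under the assumption that each metric maps a shared context to a double, not Tribuo's actual types.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class ComputeResultsSketch {
    // Stand-in for a MetricContext: just the counts these example metrics
    // need (illustrative, not Tribuo's MetricContext).
    public record Context(int correct, int total) { }

    // Apply each "metric" (a function from context to double) to the shared
    // context, collecting one result per metric name.
    public static Map<String, Double> computeResults(
            Context ctx, Map<String, ToDoubleFunction<Context>> metrics) {
        Map<String, Double> results = new HashMap<>();
        for (Map.Entry<String, ToDoubleFunction<Context>> e : metrics.entrySet()) {
            results.put(e.getKey(), e.getValue().applyAsDouble(ctx));
        }
        return results;
    }

    public static void main(String[] args) {
        Context ctx = new Context(90, 100);
        Map<String, ToDoubleFunction<Context>> metrics = Map.of(
                "accuracy", c -> c.correct() / (double) c.total(),
                "error-rate", c -> 1.0 - c.correct() / (double) c.total());
        Map<String, Double> results = computeResults(ctx, metrics);
        System.out.println(results.get("accuracy")); // prints: 0.9
    }
}
```

Building the context once and passing it to every metric is what lets the evaluators share expensive work (such as confusion-matrix construction) across the whole metric set.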