Uses of Interface
org.tribuo.evaluation.metrics.EvaluationMetric
Packages that use EvaluationMetric

  Package                                 Description
  org.tribuo.anomaly.evaluation           Evaluation classes for anomaly detection.
  org.tribuo.classification.evaluation    Evaluation classes for multi-class classification.
  org.tribuo.clustering.evaluation        Evaluation classes for clustering.
  org.tribuo.evaluation                   Evaluation base classes, along with code for train/test splits and cross validation.
  org.tribuo.multilabel.evaluation        Evaluation classes for multi-label classification using MultiLabel.
  org.tribuo.regression.evaluation        Evaluation classes for single or multi-dimensional regression.
  org.tribuo.sequence                     Provides core classes for working with sequences of Examples.
Uses of EvaluationMetric in org.tribuo.anomaly.evaluation
Classes in org.tribuo.anomaly.evaluation that implement EvaluationMetric

  class AnomalyMetric
      A metric for evaluating anomaly detection problems.
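A minimal usage sketch (not part of the original Javadoc): AnomalyMetric values are computed through AnomalyEvaluator. The trained model and test dataset here are hypothetical, and getF1() is assumed to be the F1 accessor on AnomalyEvaluation.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.anomaly.Event;
    import org.tribuo.anomaly.evaluation.AnomalyEvaluation;
    import org.tribuo.anomaly.evaluation.AnomalyEvaluator;

    public class EvaluateAnomalyDetector {
        public static void report(Model<Event> model, Dataset<Event> testData) {
            // The evaluator scores the model's predictions with its AnomalyMetrics.
            AnomalyEvaluation evaluation = new AnomalyEvaluator().evaluate(model, testData);
            // getF1() assumed: F1 of the anomalous class.
            System.out.println("F1 = " + evaluation.getF1());
        }
    }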
Uses of EvaluationMetric in org.tribuo.classification.evaluation
Classes in org.tribuo.classification.evaluation that implement EvaluationMetric

  class LabelMetric
      A metric for evaluating multi-class classification problems.
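A minimal usage sketch (not part of the original Javadoc): LabelMetric values are usually obtained through LabelEvaluator rather than computed directly. The trained model and test dataset are hypothetical.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.classification.Label;
    import org.tribuo.classification.evaluation.LabelEvaluation;
    import org.tribuo.classification.evaluation.LabelEvaluator;

    public class EvaluateClassifier {
        public static void report(Model<Label> model, Dataset<Label> testData) {
            // The evaluator scores the model's predictions with its LabelMetrics.
            LabelEvaluation evaluation = new LabelEvaluator().evaluate(model, testData);
            System.out.println("accuracy = " + evaluation.accuracy());
        }
    }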
Uses of EvaluationMetric in org.tribuo.clustering.evaluation
Classes in org.tribuo.clustering.evaluation that implement EvaluationMetric

  class ClusteringMetric
      A metric for evaluating clustering problems.
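A minimal usage sketch (not part of the original Javadoc): ClusteringMetric values flow through ClusteringEvaluator in the same way. The model and dataset are hypothetical, and normalizedMI() is assumed to be the normalized mutual information accessor.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.clustering.ClusterID;
    import org.tribuo.clustering.evaluation.ClusteringEvaluation;
    import org.tribuo.clustering.evaluation.ClusteringEvaluator;

    public class EvaluateClustering {
        public static void report(Model<ClusterID> model, Dataset<ClusterID> data) {
            // Compares predicted cluster assignments against the ground-truth ids.
            ClusteringEvaluation evaluation = new ClusteringEvaluator().evaluate(model, data);
            // normalizedMI() assumed: normalized mutual information.
            System.out.println("normalized MI = " + evaluation.normalizedMI());
        }
    }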
Uses of EvaluationMetric in org.tribuo.evaluation
Classes in org.tribuo.evaluation with type parameters of type EvaluationMetric

  class AbstractEvaluator<T extends Output<T>, C extends MetricContext<T>,
                          E extends Evaluation<T>, M extends EvaluationMetric<T, C>>
      Base class for evaluators.

Methods in org.tribuo.evaluation with parameters of type EvaluationMetric

  static <T extends Output<T>, C extends MetricContext<T>> com.oracle.labs.mlrg.olcut.util.Pair<Integer,Double>
  EvaluationAggregator.argmax(EvaluationMetric<T, C> metric, List<? extends Model<T>> models, Dataset<T> dataset)
      Calculates the argmax of a metric across the supplied models (i.e., the index of the model which performed the best).

  static <T extends Output<T>, C extends MetricContext<T>> com.oracle.labs.mlrg.olcut.util.Pair<Integer,Double>
  EvaluationAggregator.argmax(EvaluationMetric<T, C> metric, Model<T> model, List<? extends Dataset<T>> datasets)
      Calculates the argmax of a metric across the supplied datasets.

  static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats
  EvaluationAggregator.summarize(EvaluationMetric<T, C> metric, List<? extends Model<T>> models, Dataset<T> dataset)
      Summarizes performance w.r.t. the metric across several models on a single dataset.

  static <T extends Output<T>, C extends MetricContext<T>> DescriptiveStats
  EvaluationAggregator.summarize(EvaluationMetric<T, C> metric, Model<T> model, List<? extends Dataset<T>> datasets)
      Summarizes a model's performance w.r.t. the metric across several datasets.
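A model-selection sketch (not part of the original Javadoc): argmax scores each candidate model with one metric and returns the best index. The candidate models and test set are hypothetical, and macro-averaged F1 is an arbitrary choice of metric.

    import java.util.List;
    import com.oracle.labs.mlrg.olcut.util.Pair;
    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.classification.Label;
    import org.tribuo.classification.evaluation.LabelMetric;
    import org.tribuo.classification.evaluation.LabelMetrics;
    import org.tribuo.evaluation.EvaluationAggregator;
    import org.tribuo.evaluation.metrics.EvaluationMetric;
    import org.tribuo.evaluation.metrics.MetricTarget;

    public class ModelSelection {
        public static Model<Label> pickBest(List<Model<Label>> candidates, Dataset<Label> testData) {
            // Macro-averaged F1 as the selection criterion (an arbitrary choice).
            LabelMetric f1 = LabelMetrics.F1.forTarget(
                    new MetricTarget<>(EvaluationMetric.Average.MACRO));
            // Pair of (index of the best model, its metric value).
            Pair<Integer,Double> best = EvaluationAggregator.argmax(f1, candidates, testData);
            return candidates.get(best.getA());
        }
    }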
Method parameters in org.tribuo.evaluation with type arguments of type EvaluationMetric

  protected Map<MetricID<T>,Double>
  AbstractEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T, C>> metrics)
      Computes each metric given the context.

  static <T extends Output<T>, C extends MetricContext<T>> Map<MetricID<T>,DescriptiveStats>
  EvaluationAggregator.summarize(List<? extends EvaluationMetric<T, C>> metrics, Model<T> model, List<Prediction<T>> predictions)
      Summarizes model performance on the supplied predictions across several metrics.

  static <T extends Output<T>, C extends MetricContext<T>> Map<MetricID<T>,DescriptiveStats>
  EvaluationAggregator.summarize(List<? extends EvaluationMetric<T, C>> metrics, Model<T> model, Dataset<T> dataset)
      Summarizes model performance on a dataset across several metrics.
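A sketch of the multi-metric overload (not part of the original Javadoc): it returns one DescriptiveStats per metric ID. The model and dataset are hypothetical, the two metrics are arbitrary choices, and getMean() is assumed to be the mean accessor on DescriptiveStats.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.classification.Label;
    import org.tribuo.classification.evaluation.LabelMetric;
    import org.tribuo.classification.evaluation.LabelMetrics;
    import org.tribuo.evaluation.DescriptiveStats;
    import org.tribuo.evaluation.EvaluationAggregator;
    import org.tribuo.evaluation.metrics.EvaluationMetric;
    import org.tribuo.evaluation.metrics.MetricID;
    import org.tribuo.evaluation.metrics.MetricTarget;

    public class MetricSummary {
        public static void report(Model<Label> model, Dataset<Label> testData) {
            MetricTarget<Label> macro = new MetricTarget<>(EvaluationMetric.Average.MACRO);
            List<LabelMetric> metrics = Arrays.asList(
                    LabelMetrics.PRECISION.forTarget(macro),
                    LabelMetrics.RECALL.forTarget(macro));
            // One summary statistics object per metric ID.
            Map<MetricID<Label>,DescriptiveStats> summary =
                    EvaluationAggregator.summarize(metrics, model, testData);
            // getMean() assumed: the mean of the summarized values.
            summary.forEach((id, stats) -> System.out.println(id + " mean = " + stats.getMean()));
        }
    }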
Uses of EvaluationMetric in org.tribuo.multilabel.evaluation
Classes in org.tribuo.multilabel.evaluation that implement EvaluationMetric

  class MultiLabelMetric
      A metric for evaluating multi-label classification problems.
Uses of EvaluationMetric in org.tribuo.regression.evaluation
Classes in org.tribuo.regression.evaluation that implement EvaluationMetric

  class RegressionMetric
      An EvaluationMetric for Regressors which calculates the metric based on the true values and the predicted values.
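A minimal usage sketch (not part of the original Javadoc): RegressionMetric values are surfaced through RegressionEvaluator. The trained model and test dataset are hypothetical, and averageRMSE() is assumed to be the dimension-averaged RMSE accessor.

    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.regression.Regressor;
    import org.tribuo.regression.evaluation.RegressionEvaluation;
    import org.tribuo.regression.evaluation.RegressionEvaluator;

    public class EvaluateRegressor {
        public static void report(Model<Regressor> model, Dataset<Regressor> testData) {
            // Metrics are computed per output dimension and can then be averaged.
            RegressionEvaluation evaluation = new RegressionEvaluator().evaluate(model, testData);
            // averageRMSE() assumed: RMSE averaged across output dimensions.
            System.out.println("average RMSE = " + evaluation.averageRMSE());
        }
    }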
Uses of EvaluationMetric in org.tribuo.sequence
Classes in org.tribuo.sequence with type parameters of type EvaluationMetric

  class AbstractSequenceEvaluator<T extends Output<T>, C extends MetricContext<T>,
                                  E extends SequenceEvaluation<T>, M extends EvaluationMetric<T, C>>
      Base class for sequence evaluators.

Method parameters in org.tribuo.sequence with type arguments of type EvaluationMetric

  protected Map<MetricID<T>,Double>
  AbstractSequenceEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T, C>> metrics)
      Computes each metric given the context.
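A minimal usage sketch (not part of the original Javadoc): concrete subclasses of AbstractSequenceEvaluator drive computeResults internally. This assumes the classification sequence evaluator, a hypothetical trained SequenceModel<Label>, a hypothetical SequenceDataset<Label>, and an accuracy() accessor on LabelSequenceEvaluation.

    import org.tribuo.classification.Label;
    import org.tribuo.classification.sequence.LabelSequenceEvaluation;
    import org.tribuo.classification.sequence.LabelSequenceEvaluator;
    import org.tribuo.sequence.SequenceDataset;
    import org.tribuo.sequence.SequenceModel;

    public class EvaluateSequenceTagger {
        public static void report(SequenceModel<Label> model, SequenceDataset<Label> testData) {
            // evaluate() builds the metric context, then computeResults scores each metric.
            LabelSequenceEvaluation evaluation = new LabelSequenceEvaluator().evaluate(model, testData);
            // accuracy() assumed: token-level accuracy over the sequences.
            System.out.println("accuracy = " + evaluation.accuracy());
        }
    }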