Uses of Class
org.tribuo.evaluation.metrics.MetricID
Packages that use MetricID

Package                                 Description
org.tribuo.anomaly.evaluation           Evaluation classes for anomaly detection.
org.tribuo.classification.evaluation    Evaluation classes for multi-class classification.
org.tribuo.classification.sequence      Provides infrastructure for SequenceModels which emit Labels at each step of the sequence.
org.tribuo.clustering.evaluation        Evaluation classes for clustering.
org.tribuo.evaluation                   Evaluation base classes, along with code for train/test splits and cross validation.
org.tribuo.evaluation.metrics           This package contains the infrastructure classes for building evaluation metrics.
org.tribuo.multilabel.evaluation        Evaluation classes for multi-label classification using MultiLabel.
org.tribuo.regression.evaluation        Evaluation classes for single or multi-dimensional regression.
org.tribuo.sequence                     Provides core classes for working with sequences of Examples.
Uses of MetricID in org.tribuo.anomaly.evaluation

Method parameters in org.tribuo.anomaly.evaluation with type arguments of type MetricID:

protected AnomalyEvaluation AnomalyEvaluator.createEvaluation(org.tribuo.anomaly.evaluation.AnomalyMetric.Context context, Map<MetricID<Event>, Double> results, EvaluationProvenance provenance)
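createEvaluation is a factory hook: the evaluator computes the Map<MetricID<Event>, Double> and passes it here to be wrapped in an AnomalyEvaluation. From the caller's side those results surface through asMap(). A minimal illustrative sketch (the class and method names below are hypothetical), assuming a trained anomaly model and the standard evaluate(model, dataset) entry point inherited from AbstractEvaluator:

import java.util.Map;

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.anomaly.Event;
import org.tribuo.anomaly.evaluation.AnomalyEvaluation;
import org.tribuo.anomaly.evaluation.AnomalyEvaluator;
import org.tribuo.evaluation.metrics.MetricID;

public final class AnomalyMetricListing {
    // Prints every metric computed by the evaluator, keyed by MetricID.
    public static void listMetrics(Model<Event> model, Dataset<Event> test) {
        AnomalyEvaluation evaluation = new AnomalyEvaluator().evaluate(model, test);
        for (Map.Entry<MetricID<Event>, Double> e : evaluation.asMap().entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }
    }
}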
Uses of MetricID in org.tribuo.classification.evaluation

Method parameters in org.tribuo.classification.evaluation with type arguments of type MetricID:

protected LabelEvaluation LabelEvaluator.createEvaluation(LabelMetric.Context ctx, Map<MetricID<Label>, Double> results, EvaluationProvenance provenance)
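Individual values in the resulting LabelEvaluation can be fetched by MetricID via the default get method on Evaluation. An illustrative sketch: the "SPAM" label and "F1" metric name are assumptions for this example, and the two-argument (MetricTarget, name) MetricID constructor is assumed from the MetricID API, so check the keys of asMap() for the exact spellings in your evaluation.

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.classification.Label;
import org.tribuo.classification.evaluation.LabelEvaluation;
import org.tribuo.classification.evaluation.LabelEvaluator;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.evaluation.metrics.MetricTarget;

public final class LabelMetricLookup {
    // Looks up a single metric value by its MetricID rather than through
    // the typed accessors on LabelEvaluation.
    public static double lookup(Model<Label> model, Dataset<Label> test) {
        LabelEvaluation evaluation = new LabelEvaluator().evaluate(model, test);
        // Assumed metric name and label value; inspect evaluation.asMap()
        // to discover the MetricIDs actually present.
        MetricID<Label> id = new MetricID<>(new MetricTarget<>(new Label("SPAM")), "F1");
        return evaluation.get(id);
    }
}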
Uses of MetricID in org.tribuo.classification.sequence

Methods in org.tribuo.classification.sequence that return types with arguments of type MetricID:

Map<MetricID<Label>, Double> LabelSequenceEvaluation.asMap()
    Get a map of all the metrics stored in this evaluation.

Method parameters in org.tribuo.classification.sequence with type arguments of type MetricID:

protected LabelSequenceEvaluation LabelSequenceEvaluator.createEvaluation(LabelMetric.Context ctx, Map<MetricID<Label>, Double> results, EvaluationProvenance provenance)

Constructor parameters in org.tribuo.classification.sequence with type arguments of type MetricID:

protected LabelSequenceEvaluation(Map<MetricID<Label>, Double> results, LabelMetric.Context ctx, EvaluationProvenance provenance)
    Constructs a LabelSequenceEvaluation using the supplied parameters.
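The protected constructor exists so that subclasses can build extended evaluation types from a computed results map. A hypothetical sketch, assuming LabelSequenceEvaluation is subclassable (the protected constructor suggests it is); ExtendedLabelSequenceEvaluation is not part of Tribuo:

import java.util.Map;

import org.tribuo.classification.Label;
import org.tribuo.classification.evaluation.LabelMetric;
import org.tribuo.classification.sequence.LabelSequenceEvaluation;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.provenance.EvaluationProvenance;

// Hypothetical subclass: adds no behavior, just shows how the protected
// constructor threads the MetricID-keyed results through to the superclass.
public class ExtendedLabelSequenceEvaluation extends LabelSequenceEvaluation {
    protected ExtendedLabelSequenceEvaluation(Map<MetricID<Label>, Double> results,
                                              LabelMetric.Context ctx,
                                              EvaluationProvenance provenance) {
        super(results, ctx, provenance);
    }
}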
Uses of MetricID in org.tribuo.clustering.evaluation

Method parameters in org.tribuo.clustering.evaluation with type arguments of type MetricID:

protected ClusteringEvaluation ClusteringEvaluator.createEvaluation(org.tribuo.clustering.evaluation.ClusteringMetric.Context context, Map<MetricID<ClusterID>, Double> results, EvaluationProvenance provenance)
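Because MetricIDs implement value equality, result maps from separate evaluations can be joined on their keys. An illustrative sketch (names hypothetical) comparing train and test metrics for one clustering model:

import java.util.Map;

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.clustering.ClusterID;
import org.tribuo.clustering.evaluation.ClusteringEvaluator;
import org.tribuo.evaluation.metrics.MetricID;

public final class ClusteringComparison {
    // Prints each metric side by side for two datasets; MetricID equality
    // makes the two result maps directly comparable.
    public static void compare(Model<ClusterID> model,
                               Dataset<ClusterID> train,
                               Dataset<ClusterID> test) {
        ClusteringEvaluator evaluator = new ClusteringEvaluator();
        Map<MetricID<ClusterID>, Double> trainResults = evaluator.evaluate(model, train).asMap();
        Map<MetricID<ClusterID>, Double> testResults = evaluator.evaluate(model, test).asMap();
        for (Map.Entry<MetricID<ClusterID>, Double> e : trainResults.entrySet()) {
            System.out.println(e.getKey() + ": train=" + e.getValue()
                + ", test=" + testResults.get(e.getKey()));
        }
    }
}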
Uses of MetricID in org.tribuo.evaluation

Methods in org.tribuo.evaluation that return types with arguments of type MetricID:

Map<MetricID<T>, Double> Evaluation.asMap()
    Get a map of all the metrics stored in this evaluation.

Map<MetricID<T>, Double> AbstractEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T, C>> metrics)
    Computes each metric given the context.

static <T extends Output<T>, R extends Evaluation<T>> Map<MetricID<T>, DescriptiveStats> EvaluationAggregator.summarize(List<R> evaluations)
    Summarize all fields of a list of evaluations.

static <T extends Output<T>, R extends Evaluation<T>> Map<MetricID<T>, DescriptiveStats> EvaluationAggregator.summarize(Evaluator<T, R> evaluator, List<? extends Model<T>> models, Dataset<T> dataset)
    Summarize performance using the supplied evaluator across several models on one dataset.

static <T extends Output<T>, R extends Evaluation<T>> Map<MetricID<T>, DescriptiveStats> EvaluationAggregator.summarize(Evaluator<T, R> evaluator, Model<T> model, List<? extends Dataset<T>> datasets)
    Summarize performance according to the evaluator for a single model across several datasets.

static <T extends Output<T>, R extends Evaluation<T>> Map<MetricID<T>, DescriptiveStats> EvaluationAggregator.summarizeCrossValidation(List<com.oracle.labs.mlrg.olcut.util.Pair<R, Model<T>>> evaluations)
    Summarize all fields of a list of evaluations produced by CrossValidation.

Methods in org.tribuo.evaluation with parameters of type MetricID:

default double Evaluation.get(MetricID<T> key)
    Gets the value associated with the specific metric.

Method parameters in org.tribuo.evaluation with type arguments of type MetricID:

protected abstract E AbstractEvaluator.createEvaluation(C context, Map<MetricID<T>, Double> results, EvaluationProvenance provenance)
    Create an evaluation for the given results.
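EvaluationAggregator condenses many evaluations into one Map<MetricID<T>, DescriptiveStats>, with one statistics object per metric. An illustrative sketch (the class and method names here are hypothetical) comparing several classification models on a single test set:

import java.util.List;
import java.util.Map;

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.classification.Label;
import org.tribuo.classification.evaluation.LabelEvaluator;
import org.tribuo.evaluation.DescriptiveStats;
import org.tribuo.evaluation.EvaluationAggregator;
import org.tribuo.evaluation.metrics.MetricID;

public final class ModelComparison {
    // Summarizes each MetricID across several models on one test set,
    // yielding aggregate statistics per metric.
    public static void summarize(List<Model<Label>> models, Dataset<Label> test) {
        Map<MetricID<Label>, DescriptiveStats> summary =
            EvaluationAggregator.summarize(new LabelEvaluator(), models, test);
        for (Map.Entry<MetricID<Label>, DescriptiveStats> e : summary.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}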
Uses of MetricID in org.tribuo.evaluation.metrics

Methods in org.tribuo.evaluation.metrics that return MetricID:

default MetricID<T> EvaluationMetric.getID()
    The ID of this metric, a combination of the metric target and the metric name.
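A MetricID identifies a metric by its target and its name, and the same ID keys every results map on this page. The sketch below constructs IDs directly; the two-argument MetricID constructor and the MetricTarget.macroAverageTarget() factory are assumptions about the MetricTarget/MetricID API, so treat the exact calls as illustrative.

import org.tribuo.classification.Label;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.evaluation.metrics.MetricTarget;

public final class MetricIDDemo {
    public static void main(String[] args) {
        // An ID scoped to one output value...
        MetricID<Label> perLabel =
            new MetricID<>(new MetricTarget<>(new Label("SPAM")), "PRECISION");
        // ...and one scoped to the macro average across outputs.
        MetricID<Label> macro =
            new MetricID<>(MetricTarget.<Label>macroAverageTarget(), "PRECISION");
        // Same metric name, different targets: the IDs are distinct keys.
        System.out.println(perLabel + " equals " + macro + "? " + perLabel.equals(macro));
    }
}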
Uses of MetricID in org.tribuo.multilabel.evaluation

Methods in org.tribuo.multilabel.evaluation that return types with arguments of type MetricID

Methods in org.tribuo.multilabel.evaluation with parameters of type MetricID

Method parameters in org.tribuo.multilabel.evaluation with type arguments of type MetricID:

protected MultiLabelEvaluation MultiLabelEvaluator.createEvaluation(org.tribuo.multilabel.evaluation.MultiLabelMetric.Context context, Map<MetricID<MultiLabel>, Double> results, EvaluationProvenance provenance)
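Multi-label evaluations hold per-label as well as averaged metrics, so the exact MetricID keys are easiest to discover at runtime. An illustrative sketch (names hypothetical) that prints the available keys before any get(id) lookup:

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.multilabel.MultiLabel;
import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;
import org.tribuo.multilabel.evaluation.MultiLabelEvaluator;

public final class MultiLabelMetricNames {
    // Prints the MetricID keys, useful for discovering the exact per-label
    // and averaged metric names this evaluation contains.
    public static void printKeys(Model<MultiLabel> model, Dataset<MultiLabel> test) {
        MultiLabelEvaluation evaluation = new MultiLabelEvaluator().evaluate(model, test);
        for (MetricID<MultiLabel> id : evaluation.asMap().keySet()) {
            System.out.println(id);
        }
    }
}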
Uses of MetricID in org.tribuo.regression.evaluation

Method parameters in org.tribuo.regression.evaluation with type arguments of type MetricID:

protected RegressionEvaluation RegressionEvaluator.createEvaluation(org.tribuo.regression.evaluation.RegressionMetric.Context context, Map<MetricID<Regressor>, Double> results, EvaluationProvenance provenance)
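Regression metrics are recorded per output dimension, so the results map typically holds several entries per metric name. An illustrative sketch that filters the map by name; the "RMSE" spelling is an assumption for this example, so check the actual keys in asMap():

import java.util.Map;

import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.regression.Regressor;
import org.tribuo.regression.evaluation.RegressionEvaluation;
import org.tribuo.regression.evaluation.RegressionEvaluator;

public final class RegressionMetricFilter {
    // Prints only the entries whose MetricID mentions RMSE, one per
    // regression output dimension.
    public static void printRMSE(Model<Regressor> model, Dataset<Regressor> test) {
        RegressionEvaluation evaluation = new RegressionEvaluator().evaluate(model, test);
        for (Map.Entry<MetricID<Regressor>, Double> e : evaluation.asMap().entrySet()) {
            if (e.getKey().toString().contains("RMSE")) {
                System.out.println(e.getKey() + " = " + e.getValue());
            }
        }
    }
}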
Uses of MetricID in org.tribuo.sequence

Methods in org.tribuo.sequence that return types with arguments of type MetricID:

Map<MetricID<T>, Double> SequenceEvaluation.asMap()
    Get a map of all the metrics stored in this evaluation.

Map<MetricID<T>, Double> AbstractSequenceEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T, C>> metrics)
    Computes each metric given the context.

Methods in org.tribuo.sequence with parameters of type MetricID:

default double SequenceEvaluation.get(MetricID<T> key)
    Gets the value associated with the specific metric.

Method parameters in org.tribuo.sequence with type arguments of type MetricID:

protected abstract E AbstractSequenceEvaluator.createEvaluation(C context, Map<MetricID<T>, Double> results, EvaluationProvenance provenance)
    Create an evaluation for the given results.
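Since a SequenceEvaluation exposes both asMap() and get(MetricID), every key in the map can be passed back to get. A small illustrative sketch (names hypothetical) using the label sequence classes from org.tribuo.classification.sequence:

import org.tribuo.classification.Label;
import org.tribuo.classification.sequence.LabelSequenceEvaluation;
import org.tribuo.classification.sequence.LabelSequenceEvaluator;
import org.tribuo.evaluation.metrics.MetricID;
import org.tribuo.sequence.SequenceDataset;
import org.tribuo.sequence.SequenceModel;

public final class SequenceMetricLookup {
    // Re-fetches each stored metric through the default get method,
    // demonstrating that asMap() keys and get(id) lookups agree.
    public static void relookup(SequenceModel<Label> model, SequenceDataset<Label> test) {
        LabelSequenceEvaluation evaluation = new LabelSequenceEvaluator().evaluate(model, test);
        for (MetricID<Label> id : evaluation.asMap().keySet()) {
            System.out.println(id + " = " + evaluation.get(id));
        }
    }
}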