Package | Description |
---|---|
org.tribuo.anomaly.evaluation | Evaluation classes for anomaly detection. |
org.tribuo.classification.evaluation | Evaluation classes for multi-class classification. |
org.tribuo.clustering.evaluation | Evaluation classes for clustering. |
org.tribuo.evaluation | Evaluation base classes, along with code for train/test splits and cross validation. |
org.tribuo.multilabel.evaluation | Evaluation classes for multi-label classification using `MultiLabel`. |
org.tribuo.regression.evaluation | Evaluation classes for single or multi-dimensional regression. |
org.tribuo.sequence | Provides core classes for working with sequences of `Example`s. |
Modifier and Type | Class and Description |
---|---|
class | `AnomalyMetric`: A metric for evaluating anomaly detection problems. |
Modifier and Type | Class and Description |
---|---|
class | `LabelMetric` |
Modifier and Type | Class and Description |
---|---|
class | `ClusteringMetric`: A metric for evaluating clustering problems. |
Modifier and Type | Class and Description |
---|---|
class | `AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>>`: Base class for evaluators. |
Modifier and Type | Method and Description |
---|---|
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.argmax(EvaluationMetric<T,C> metric, List<? extends Model<T>> models, Dataset<T> dataset)`: Calculates the argmax of a metric across the supplied models (i.e., the index of the model which performed the best). |
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.argmax(EvaluationMetric<T,C> metric, Model<T> model, List<? extends Dataset<T>> datasets)`: Calculates the argmax of a metric across the supplied datasets. |
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.summarize(EvaluationMetric<T,C> metric, List<? extends Model<T>> models, Dataset<T> dataset)`: Summarize performance w.r.t. the given metric across several models. |
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.summarize(EvaluationMetric<T,C> metric, Model<T> model, List<? extends Dataset<T>> datasets)`: Summarize a model's performance w.r.t. a metric across several datasets. |
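The `argmax` helpers above pick out the index of the best-scoring model (or dataset) under a single metric. A minimal standalone sketch of the same pattern, with hypothetical names and no Tribuo dependency (a plain scoring function stands in for `EvaluationMetric`, and a list of candidates stands in for the models):

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

// Hypothetical standalone analogue of EvaluationAggregator.argmax:
// score each candidate with the supplied "metric" and return the
// index of the highest-scoring one.
final class ArgmaxSketch {
    static <T> int argmax(List<T> candidates, ToDoubleFunction<T> metric) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < candidates.size(); i++) {
            double score = metric.applyAsDouble(candidates.get(i));
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three "models" summarized by a single accuracy-like score each.
        List<Double> scores = List.of(0.71, 0.89, 0.64);
        System.out.println(argmax(scores, Double::doubleValue)); // prints 1
    }
}
```

In the real API the score comes from evaluating each `Model<T>` on the `Dataset<T>` with the metric, rather than from a precomputed list.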
Modifier and Type | Method and Description |
---|---|
`protected Map<MetricID<T>,Double>` | `AbstractEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)`: Computes each metric given the context. |
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.summarize(List<? extends EvaluationMetric<T,C>> metrics, Model<T> model, Dataset<T> dataset)`: Summarize model performance on a dataset across several metrics. |
`static <T extends Output<T>,C extends MetricContext<T>>` | `EvaluationAggregator.summarize(List<? extends EvaluationMetric<T,C>> metrics, Model<T> model, List<Prediction<T>> predictions)`: Summarize model performance on a set of predictions across several metrics. |
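The `summarize` overloads above collapse several per-metric scores into summary statistics. A standalone sketch of that aggregation step, with hypothetical names (this is not the Tribuo implementation, which returns its own statistics type; here a simple mean and population standard deviation stand in):

```java
import java.util.List;

// Hypothetical analogue of EvaluationAggregator.summarize: reduce the
// values produced by several metrics to simple descriptive statistics.
final class SummarizeSketch {
    record Stats(double mean, double stdDev) {}

    static Stats summarize(List<Double> metricValues) {
        double mean = metricValues.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        // Population variance of the metric values around their mean.
        double variance = metricValues.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0.0);
        return new Stats(mean, Math.sqrt(variance));
    }

    public static void main(String[] args) {
        // Three metric values for one model, e.g. accuracy on three folds.
        Stats s = summarize(List.of(0.8, 0.9, 1.0));
        System.out.printf("%.2f %.2f%n", s.mean(), s.stdDev());
    }
}
```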
Modifier and Type | Class and Description |
---|---|
class | `MultiLabelMetric`: An `EvaluationMetric` for evaluating `MultiLabel` problems. |
Modifier and Type | Class and Description |
---|---|
class | `RegressionMetric`: An `EvaluationMetric` for `Regressor`s which calculates the metric based on the true values and the predicted values. |
Modifier and Type | Class and Description |
---|---|
class | `AbstractSequenceEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends SequenceEvaluation<T>,M extends EvaluationMetric<T,C>>`: Base class for sequence evaluators. |
Modifier and Type | Method and Description |
---|---|
`protected Map<MetricID<T>,Double>` | `AbstractSequenceEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)`: Computes each metric given the context. |
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.