| Package | Description |
|---|---|
| org.tribuo.anomaly.evaluation | Evaluation classes for anomaly detection. |
| org.tribuo.classification.evaluation | Evaluation classes for multi-class classification. |
| org.tribuo.classification.sequence | Provides infrastructure for SequenceModels which emit Labels at each step of the sequence. |
| org.tribuo.clustering.evaluation | Evaluation classes for clustering. |
| org.tribuo.evaluation | Evaluation base classes, along with code for train/test splits and cross-validation. |
| org.tribuo.evaluation.metrics | Infrastructure classes for building evaluation metrics. |
| org.tribuo.multilabel.evaluation | Evaluation classes for multi-label classification using MultiLabel. |
| org.tribuo.regression.evaluation | Evaluation classes for single or multi-dimensional regression. |
| org.tribuo.sequence | Provides core classes for working with sequences of Examples. |
| Modifier and Type | Method and Description |
|---|---|
| protected AnomalyEvaluation | AnomalyEvaluator.createEvaluation(org.tribuo.anomaly.evaluation.AnomalyMetric.Context context, Map<MetricID<Event>,Double> results, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| protected LabelEvaluation | LabelEvaluator.createEvaluation(LabelMetric.Context ctx, Map<MetricID<Label>,Double> results, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| Map<MetricID<Label>,Double> | LabelSequenceEvaluation.asMap() |
| Modifier and Type | Method and Description |
|---|---|
| protected LabelSequenceEvaluation | LabelSequenceEvaluator.createEvaluation(LabelMetric.Context ctx, Map<MetricID<Label>,Double> results, EvaluationProvenance provenance) |
| Constructor and Description |
|---|
| LabelSequenceEvaluation(Map<MetricID<Label>,Double> results, LabelMetric.Context ctx, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| protected ClusteringEvaluation | ClusteringEvaluator.createEvaluation(org.tribuo.clustering.evaluation.ClusteringMetric.Context context, Map<MetricID<ClusterID>,Double> results, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| Map<MetricID<T>,Double> | Evaluation.asMap() Get a map of all the metrics stored in this evaluation. |
| protected Map<MetricID<T>,Double> | AbstractEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics) Computes each metric given the context. |
| static <T extends Output<T>,R extends Evaluation<T>> | EvaluationAggregator.summarize(Evaluator<T,R> evaluator, List<? extends Model<T>> models, Dataset<T> dataset) Summarize performance using the supplied evaluator across several models on one dataset. |
| static <T extends Output<T>,R extends Evaluation<T>> | EvaluationAggregator.summarize(Evaluator<T,R> evaluator, Model<T> model, List<? extends Dataset<T>> datasets) Summarize performance according to the evaluator for a single model across several datasets. |
| static <T extends Output<T>,R extends Evaluation<T>> | EvaluationAggregator.summarize(List<R> evaluations) Summarize all fields of a list of evaluations. |
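The summarize methods above all reduce several metric maps to one summary view. A minimal, self-contained sketch of that aggregation idea, assuming plain Java stand-ins (SummarizeSketch and the String keys standing in for MetricID<T> are illustrative, not Tribuo types):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: aggregate metric maps (as returned by Evaluation.asMap()) across
// several evaluations, in the spirit of EvaluationAggregator.summarize(List<R>).
// String keys stand in for MetricID<T>; only the aggregation idea is shown.
public class SummarizeSketch {
    /** Mean of each metric across all evaluations that report it. */
    public static Map<String, Double> summarize(List<Map<String, Double>> evaluations) {
        Map<String, Double> sums = new HashMap<>();
        Map<String, Integer> counts = new HashMap<>();
        for (Map<String, Double> eval : evaluations) {
            for (Map.Entry<String, Double> e : eval.entrySet()) {
                sums.merge(e.getKey(), e.getValue(), Double::sum);
                counts.merge(e.getKey(), 1, Integer::sum);
            }
        }
        Map<String, Double> means = new HashMap<>();
        for (Map.Entry<String, Double> e : sums.entrySet()) {
            means.put(e.getKey(), e.getValue() / counts.get(e.getKey()));
        }
        return means;
    }
}
```

The real Tribuo aggregator reports richer descriptive statistics per metric; this sketch keeps only the mean to show the shape of the reduction.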
| Modifier and Type | Method and Description |
|---|---|
| default double | Evaluation.get(MetricID<T> key) Gets the value associated with the specific metric. |
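The asMap()/get() pair above suggests a simple pattern: a default method that looks a key up in the metric map. A sketch of that shape, assuming plain-Java stand-ins (the interface, String keys, and error handling here are illustrative, not the actual Tribuo signatures):

```java
import java.util.Map;

// Sketch of the Evaluation.asMap()/get(MetricID) relationship: get is a
// default method reading from the metric map. String keys stand in for MetricID<T>.
interface EvaluationSketch {
    /** All metrics stored in this evaluation. */
    Map<String, Double> asMap();

    /** Gets the value associated with the specific metric. */
    default double get(String key) {
        Double value = asMap().get(key);
        if (value == null) {
            throw new IllegalArgumentException("Unknown metric: " + key);
        }
        return value;
    }
}

public class EvaluationSketchDemo {
    public static EvaluationSketch of(Map<String, Double> results) {
        return () -> results; // lambda supplies the single abstract method
    }
}
```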
| Modifier and Type | Method and Description |
|---|---|
| protected abstract E | AbstractEvaluator.createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance) Create an evaluation for the given results. |
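The protected abstract createEvaluation hook above is a template-method design: the base evaluator computes the results map, then asks the subclass to wrap it in the concrete evaluation type. A minimal sketch of that shape, under the assumption that all names here are illustrative stand-ins rather than the Tribuo classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Template-method sketch: the base class drives the evaluation loop, and the
// subclass supplies the concrete result type via the createEvaluation hook.
abstract class AbstractEvaluatorSketch<E> {
    /** Computes each metric, then wraps the results via the subclass hook. */
    public final E evaluate(Set<String> metricNames) {
        Map<String, Double> results = new HashMap<>();
        for (String name : metricNames) {
            results.put(name, computeMetric(name));
        }
        return createEvaluation(results);
    }

    /** Compute a single metric value (stub for illustration). */
    protected abstract double computeMetric(String name);

    /** Factory hook: build the concrete evaluation from the results map. */
    protected abstract E createEvaluation(Map<String, Double> results);
}
```

This mirrors why each evaluator in the tables (AnomalyEvaluator, LabelEvaluator, ClusteringEvaluator, and so on) overrides only createEvaluation: the metric computation loop lives in the shared base class.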
| Modifier and Type | Method and Description |
|---|---|
| default MetricID<T> | EvaluationMetric.getID() The metric ID, a combination of the metric target and metric name. |
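As the getID() description notes, a metric ID combines the metric target with the metric name, which is what makes it usable as the key of the Map<MetricID<T>,Double> results seen throughout these tables. A sketch of such a value-semantics key, assuming a hand-rolled class (this is illustrative only, not the actual MetricID implementation):

```java
import java.util.Objects;

// Sketch: a metric ID as the combination of a target and a metric name,
// with equals/hashCode so it works as a map key. Not the Tribuo MetricID class.
final class MetricIdSketch {
    private final String target; // e.g. a class label, or an averaging type
    private final String name;   // e.g. "precision"

    MetricIdSketch(String target, String name) {
        this.target = target;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MetricIdSketch)) { return false; }
        MetricIdSketch other = (MetricIdSketch) o;
        return target.equals(other.target) && name.equals(other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(target, name);
    }

    @Override
    public String toString() {
        return target + "," + name;
    }
}
```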
| Modifier and Type | Method and Description |
|---|---|
| Map<MetricID<MultiLabel>,Double> | MultiLabelEvaluationImpl.asMap() |
| Modifier and Type | Method and Description |
|---|---|
| double | MultiLabelEvaluationImpl.get(MetricID<MultiLabel> key) |
| Modifier and Type | Method and Description |
|---|---|
| protected MultiLabelEvaluation | MultiLabelEvaluator.createEvaluation(org.tribuo.multilabel.evaluation.MultiLabelMetric.Context context, Map<MetricID<MultiLabel>,Double> results, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| protected RegressionEvaluation | RegressionEvaluator.createEvaluation(org.tribuo.regression.evaluation.RegressionMetric.Context context, Map<MetricID<Regressor>,Double> results, EvaluationProvenance provenance) |
| Modifier and Type | Method and Description |
|---|---|
| Map<MetricID<T>,Double> | SequenceEvaluation.asMap() Get a map of all the metrics stored in this evaluation. |
| protected Map<MetricID<T>,Double> | AbstractSequenceEvaluator.computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics) Computes each metric given the context. |
| Modifier and Type | Method and Description |
|---|---|
| default double | SequenceEvaluation.get(MetricID<T> key) Gets the value associated with the specific metric. |
| Modifier and Type | Method and Description |
|---|---|
| protected abstract E | AbstractSequenceEvaluator.createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance) Create an evaluation for the given results. |
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.