Package org.tribuo.evaluation
Class AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>>
java.lang.Object
org.tribuo.evaluation.AbstractEvaluator<T,C,E,M>
- All Implemented Interfaces:
Evaluator<T,E>
- Direct Known Subclasses:
AnomalyEvaluator, ClusteringEvaluator, LabelEvaluator, MultiLabelEvaluator, RegressionEvaluator
public abstract class AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>>
extends Object
implements Evaluator<T,E>
Base class for evaluators.
-
Constructor Summary
-
Method Summary
protected Map<MetricID<T>,Double> computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)
Computes each metric given the context.
protected abstract C createContext(Model<T> model, List<Prediction<T>> predictions)
Create the context needed for evaluation.
protected abstract E createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance)
Create an evaluation for the given results.
protected abstract Set<M> createMetrics(Model<T> model)
Creates the appropriate set of metrics for this model, by querying for its OutputInfo.
final E evaluate(Model<T> model, List<Prediction<T>> predictions, DataProvenance dataProvenance)
Produces an evaluation for the supplied model and predictions by aggregating the appropriate statistics.
final E evaluate(Model<T> model, Dataset<T> dataset)
Produces an evaluation for the supplied model and dataset, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
final E evaluate(Model<T> model, DataSource<T> datasource)
Produces an evaluation for the supplied model and datasource, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.tribuo.evaluation.Evaluator
createOnlineEvaluator, evaluate
-
Constructor Details
-
AbstractEvaluator
public AbstractEvaluator()
-
-
Method Details
-
evaluate
public final E evaluate(Model<T> model, Dataset<T> dataset)
Produces an evaluation for the supplied model and dataset, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
-
evaluate
public final E evaluate(Model<T> model, DataSource<T> datasource)
Produces an evaluation for the supplied model and datasource, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
-
evaluate
public final E evaluate(Model<T> model, List<Prediction<T>> predictions, DataProvenance dataProvenance)
Produces an evaluation for the supplied model and predictions by aggregating the appropriate statistics.
Warning: this method cannot validate that the predictions were returned by the model in question.
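All three evaluate overloads funnel into the same aggregation pipeline. The following is a minimal, self-contained sketch of that flow using toy stand-ins for the Tribuo types (every name here is illustrative, not the real API): build a set of metrics, compute each one over the predictions, and collect the results into a map keyed by metric id.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy stand-ins for the Tribuo types; all names here are illustrative only.
class EvaluateFlow {
    // Simplified analogue of EvaluationMetric: an id plus a computation.
    interface Metric {
        String id();
        double compute(List<Double> predictions);
    }

    // Mirrors the order of operations inside AbstractEvaluator.evaluate:
    // compute each metric, then gather the results keyed by metric id.
    static Map<String, Double> evaluate(List<Double> predictions, Set<Metric> metrics) {
        Map<String, Double> results = new LinkedHashMap<>();
        for (Metric m : metrics) {
            results.put(m.id(), m.compute(predictions));
        }
        return results;
    }

    public static void main(String[] args) {
        Metric mean = new Metric() {
            public String id() { return "mean"; }
            public double compute(List<Double> p) {
                return p.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            }
        };
        System.out.println(evaluate(List.of(1.0, 2.0, 3.0), Set.of(mean))); // prints {mean=2.0}
    }
}
```

In the real class the per-metric loop lives in computeResults, and the result map is wrapped in an Evaluation by createEvaluation rather than returned directly.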
-
computeResults
protected Map<MetricID<T>,Double> computeResults(C ctx, Set<? extends EvaluationMetric<T, C>> metrics)
Computes each metric given the context.
- Parameters:
ctx - The metric context (i.e., the sufficient statistics).
metrics - The metrics to compute.
- Returns:
The value of each requested metric.
-
createMetrics
protected abstract Set<M> createMetrics(Model<T> model)
Creates the appropriate set of metrics for this model, by querying for its OutputInfo.
- Parameters:
model - The model to inspect.
- Returns:
The set of metrics.
-
createContext
protected abstract C createContext(Model<T> model, List<Prediction<T>> predictions)
Create the context needed for evaluation. The context might store global properties or cache computation.
- Parameters:
model - the model that will be evaluated
predictions - the predictions that will be evaluated
- Returns:
the context for this model and its predictions
-
createEvaluation
protected abstract E createEvaluation(C context, Map<MetricID<T>, Double> results, EvaluationProvenance provenance)
Create an evaluation for the given results.
- Parameters:
context - the context that was used to compute these results
results - the results
provenance - the provenance of the results (including information about the model and dataset)
- Returns:
the evaluation
-
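Concrete subclasses such as LabelEvaluator supply the three abstract hooks while the final evaluate methods fix the overall pipeline. Below is a self-contained sketch of that template-method pattern with toy types (all names are illustrative, not the Tribuo classes): the base class owns the pipeline and the default per-metric loop, and a subclass decides what the context, metrics, and evaluation look like.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy reproduction of the AbstractEvaluator template-method pattern.
// All types and names here are illustrative, not the real Tribuo API.
abstract class ToyEvaluator<C, E> {
    // Simplified analogue of EvaluationMetric, parameterized on the context type.
    interface ToyMetric<C> {
        String id();
        double compute(C ctx);
    }

    // The three hooks a concrete evaluator must supply.
    protected abstract C createContext(List<String> predictions);
    protected abstract Set<ToyMetric<C>> createMetrics();
    protected abstract E createEvaluation(C context, Map<String, Double> results);

    // Analogue of computeResults: evaluate each metric against the context.
    protected Map<String, Double> computeResults(C ctx, Set<ToyMetric<C>> metrics) {
        Map<String, Double> results = new LinkedHashMap<>();
        for (ToyMetric<C> m : metrics) {
            results.put(m.id(), m.compute(ctx));
        }
        return results;
    }

    // Analogue of the final evaluate overloads: a fixed pipeline over the hooks.
    public final E evaluate(List<String> predictions) {
        C ctx = createContext(predictions);
        Map<String, Double> results = computeResults(ctx, createMetrics());
        return createEvaluation(ctx, results);
    }
}

// A minimal concrete subclass: the context is the prediction count,
// one metric reports it, and the evaluation is just the result map.
class CountingEvaluator extends ToyEvaluator<Integer, Map<String, Double>> {
    protected Integer createContext(List<String> predictions) { return predictions.size(); }
    protected Set<ToyMetric<Integer>> createMetrics() {
        return Set.of(new ToyMetric<Integer>() {
            public String id() { return "count"; }
            public double compute(Integer ctx) { return ctx; }
        });
    }
    protected Map<String, Double> createEvaluation(Integer ctx, Map<String, Double> results) {
        return results;
    }
}
```

The design keeps the aggregation order fixed and unoverridable (the evaluate methods are final in AbstractEvaluator too), so subclasses only describe what to measure, never when.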