public abstract class AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>> extends Object implements Evaluator<T,E>
| Constructor and Description |
|---|
| `AbstractEvaluator()` |
| Modifier and Type | Method and Description |
|---|---|
| `protected Map<MetricID<T>,Double>` | `computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)`: Computes each metric given the context. |
| `protected abstract C` | `createContext(Model<T> model, List<Prediction<T>> predictions)`: Creates the context needed for evaluation. |
| `protected abstract E` | `createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance)`: Creates an evaluation for the given results. |
| `protected abstract Set<M>` | `createMetrics(Model<T> model)`: Creates the appropriate set of metrics for this model, by querying its `OutputInfo`. |
| `E` | `evaluate(Model<T> model, Dataset<T> dataset)`: Produces an evaluation for the supplied model and dataset, by calling `Model.predict(org.tribuo.Example<T>)` to create the predictions, then aggregating the appropriate statistics. |
| `E` | `evaluate(Model<T> model, DataSource<T> datasource)`: Produces an evaluation for the supplied model and datasource, by calling `Model.predict(org.tribuo.Example<T>)` to create the predictions, then aggregating the appropriate statistics. |
| `E` | `evaluate(Model<T> model, List<Prediction<T>> predictions, DataProvenance dataProvenance)`: Produces an evaluation for the supplied model and predictions by aggregating the appropriate statistics. |
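The class is a template method: the final `evaluate` overloads build a metric context, compute each metric over it, and assemble the results into an evaluation. A minimal, self-contained sketch of that flow, using simplified stand-in types (`Prediction`, `Context`, `Metric` below are placeholders, not the real Tribuo classes):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Simplified mirror of AbstractEvaluator's evaluate() flow, with stand-in types.
public class EvaluatorSketch {

    // Stand-in for a Prediction: gold label vs predicted label.
    public record Prediction(String truth, String predicted) {}

    // Stand-in for a MetricContext: here just the prediction list.
    public record Context(List<Prediction> predictions) {}

    // Stand-in for an EvaluationMetric, keyed by a string id instead of a MetricID.
    public interface Metric {
        String id();
        double compute(Context ctx);
    }

    // Mirrors the template method: build the context (createContext),
    // compute each metric over it (computeResults), and return the
    // results map (which createEvaluation would wrap in an Evaluation).
    public static Map<String, Double> evaluate(List<Prediction> predictions,
                                               Set<Metric> metrics) {
        Context ctx = new Context(predictions);
        Map<String, Double> results = new HashMap<>();
        for (Metric m : metrics) {
            results.put(m.id(), m.compute(ctx));
        }
        return results;
    }

    public static void main(String[] args) {
        // A toy accuracy metric: fraction of predictions matching the truth.
        Metric accuracy = new Metric() {
            public String id() { return "accuracy"; }
            public double compute(Context ctx) {
                long correct = ctx.predictions().stream()
                        .filter(p -> p.truth().equals(p.predicted())).count();
                return (double) correct / ctx.predictions().size();
            }
        };
        List<Prediction> preds = List.of(
                new Prediction("a", "a"),
                new Prediction("b", "a"),
                new Prediction("b", "b"),
                new Prediction("a", "a"));
        System.out.println(evaluate(preds, Set.of(accuracy)).get("accuracy")); // prints 0.75
    }
}
```

In the real class the abstract `createContext`, `createMetrics`, and `createEvaluation` factory methods fill the roles that the inline context construction, metric set, and returned map play here.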
Methods inherited from class java.lang.Object:
`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`

Methods inherited from interface org.tribuo.evaluation.Evaluator:
`createOnlineEvaluator, evaluate`
public final E evaluate(Model<T> model, Dataset<T> dataset)
Produces an evaluation for the supplied model and dataset, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.

public final E evaluate(Model<T> model, DataSource<T> datasource)
Produces an evaluation for the supplied model and datasource, by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.

public final E evaluate(Model<T> model, List<Prediction<T>> predictions, DataProvenance dataProvenance)
Produces an evaluation for the supplied model and predictions by aggregating the appropriate statistics.
Warning: this method cannot validate that the predictions were returned by the model in question.
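A concrete evaluator supplies the three abstract factory methods detailed below, and the inherited final `evaluate` overloads drive them. A schematic skeleton (every `My*` name is a hypothetical placeholder, not a real Tribuo class, so this will not compile as-is):

```java
// Schematic only: MyOutput, MyContext, MyEvaluation, MyMetric and their
// constructors are hypothetical placeholders, not real Tribuo classes.
public final class MyEvaluator
        extends AbstractEvaluator<MyOutput, MyContext, MyEvaluation, MyMetric> {

    @Override
    protected MyContext createContext(Model<MyOutput> model,
                                      List<Prediction<MyOutput>> predictions) {
        // Gather the sufficient statistics the metrics will need.
        return new MyContext(model, predictions);
    }

    @Override
    protected Set<MyMetric> createMetrics(Model<MyOutput> model) {
        // Inspect the model's OutputInfo and choose suitable metrics.
        return Set.of(/* metrics appropriate to this output domain */);
    }

    @Override
    protected MyEvaluation createEvaluation(MyContext context,
                                            Map<MetricID<MyOutput>, Double> results,
                                            EvaluationProvenance provenance) {
        // Wrap the computed metric values in the domain-specific Evaluation.
        return new MyEvaluation(results, provenance);
    }
}
```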
protected Map<MetricID<T>,Double> computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)
Computes each metric given the context.
Parameters:
ctx - The metric context (i.e., the sufficient statistics).
metrics - The metrics to compute.

protected abstract Set<M> createMetrics(Model<T> model)
Creates the appropriate set of metrics for this model, by querying its OutputInfo.
Parameters:
model - The model to inspect.

protected abstract C createContext(Model<T> model, List<Prediction<T>> predictions)
Creates the context needed for evaluation.
Parameters:
model - the model that will be evaluated
predictions - the predictions that will be evaluated

protected abstract E createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance)
Creates an evaluation for the given results.
Parameters:
context - the context that was used to compute these results
results - the results
provenance - the provenance of the results (including information about the model and dataset)

Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.