Class AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>>

java.lang.Object
org.tribuo.evaluation.AbstractEvaluator<T,C,E,M>
All Implemented Interfaces:
Evaluator<T,E>
Direct Known Subclasses:
AnomalyEvaluator, ClusteringEvaluator, LabelEvaluator, MultiLabelEvaluator, RegressionEvaluator

public abstract class AbstractEvaluator<T extends Output<T>,C extends MetricContext<T>,E extends Evaluation<T>,M extends EvaluationMetric<T,C>> extends Object implements Evaluator<T,E>
Base class for evaluators. Concrete subclasses supply the metric context, the set of metrics, and the evaluation type via the abstract methods below; the final evaluate methods wire these together.
  • Constructor Details

    • AbstractEvaluator

      public AbstractEvaluator()
  • Method Details

    • evaluate

      public final E evaluate(Model<T> model, Dataset<T> dataset)
      Produces an evaluation for the supplied model and dataset by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
      Specified by:
      evaluate in interface Evaluator<T extends Output<T>,E extends Evaluation<T>>
      Parameters:
      model - The model to use.
      dataset - The dataset to make predictions for.
      Returns:
      An evaluation of the dataset on the model.
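      A minimal usage sketch, assuming a trained classification Model<Label> and a held-out test Dataset<Label> supplied by the caller; the class EvaluateOnDataset and its parameter names are illustrative, while LabelEvaluator and LabelEvaluation are the concrete classification types listed above:

        import org.tribuo.Dataset;
        import org.tribuo.Model;
        import org.tribuo.classification.Label;
        import org.tribuo.classification.evaluation.LabelEvaluation;
        import org.tribuo.classification.evaluation.LabelEvaluator;

        public final class EvaluateOnDataset {
            // Evaluates a trained classification model on a held-out test set.
            // The evaluator calls Model.predict on each example, then aggregates the statistics.
            public static LabelEvaluation evaluate(Model<Label> model, Dataset<Label> testData) {
                LabelEvaluator evaluator = new LabelEvaluator();
                LabelEvaluation evaluation = evaluator.evaluate(model, testData);
                System.out.println("Accuracy = " + evaluation.accuracy());
                return evaluation;
            }
        }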
    • evaluate

      public final E evaluate(Model<T> model, DataSource<T> datasource)
      Produces an evaluation for the supplied model and datasource by calling Model.predict(org.tribuo.Example<T>) to create the predictions, then aggregating the appropriate statistics.
      Specified by:
      evaluate in interface Evaluator<T extends Output<T>,E extends Evaluation<T>>
      Parameters:
      model - The model to use.
      datasource - The datasource to make predictions for.
      Returns:
      An evaluation of the datasource on the model.
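      A sketch of this overload, assuming the test examples live in a hypothetical CSV file whose response column is named "label"; CSVLoader and LabelFactory are the standard Tribuo classes for loading labelled CSV data, and the class and parameter names are illustrative:

        import java.io.IOException;
        import java.nio.file.Paths;

        import org.tribuo.DataSource;
        import org.tribuo.Model;
        import org.tribuo.classification.Label;
        import org.tribuo.classification.LabelFactory;
        import org.tribuo.classification.evaluation.LabelEvaluation;
        import org.tribuo.classification.evaluation.LabelEvaluator;
        import org.tribuo.data.csv.CSVLoader;

        public final class EvaluateOnDataSource {
            // Loads the test data straight from CSV and evaluates the model on it,
            // without materialising a Dataset first.
            public static LabelEvaluation evaluate(Model<Label> model, String csvPath) throws IOException {
                CSVLoader<Label> loader = new CSVLoader<>(new LabelFactory());
                DataSource<Label> testSource = loader.loadDataSource(Paths.get(csvPath), "label");
                return new LabelEvaluator().evaluate(model, testSource);
            }
        }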
    • evaluate

      public final E evaluate(Model<T> model, List<Prediction<T>> predictions, DataProvenance dataProvenance)
      Produces an evaluation for the supplied model and predictions by aggregating the appropriate statistics.

      Warning: this method cannot validate that the predictions were returned by the model in question.

      Specified by:
      evaluate in interface Evaluator<T extends Output<T>,E extends Evaluation<T>>
      Parameters:
      model - The model to use.
      predictions - The predictions to use.
      dataProvenance - The provenance of the test data.
      Returns:
      An evaluation of the predictions.
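      A sketch of this overload for the case where the predictions have already been computed (for example in a batch scoring pipeline); the dataset's provenance is passed through so the resulting evaluation records where the test data came from. Class and parameter names are illustrative:

        import java.util.List;

        import org.tribuo.Dataset;
        import org.tribuo.Model;
        import org.tribuo.Prediction;
        import org.tribuo.classification.Label;
        import org.tribuo.classification.evaluation.LabelEvaluation;
        import org.tribuo.classification.evaluation.LabelEvaluator;

        public final class EvaluatePrecomputedPredictions {
            // Scores the test set once, then evaluates the resulting predictions.
            // Note: the evaluator cannot check that these predictions really came from this model.
            public static LabelEvaluation evaluate(Model<Label> model, Dataset<Label> testData) {
                List<Prediction<Label>> predictions = model.predict(testData);
                return new LabelEvaluator().evaluate(model, predictions, testData.getProvenance());
            }
        }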
    • computeResults

      protected Map<MetricID<T>,Double> computeResults(C ctx, Set<? extends EvaluationMetric<T,C>> metrics)
      Computes each metric given the context.
      Parameters:
      ctx - The metric context (i.e., the sufficient statistics).
      metrics - The metrics to compute.
      Returns:
      The value of each requested metric.
    • createMetrics

      protected abstract Set<M> createMetrics(Model<T> model)
      Creates the appropriate set of metrics for this model by querying for its OutputInfo.
      Parameters:
      model - The model to inspect.
      Returns:
      The set of metrics.
    • createContext

      protected abstract C createContext(Model<T> model, List<Prediction<T>> predictions)
      Creates the context needed for evaluation. The context might store global properties or cache computation.
      Parameters:
      model - The model that will be evaluated.
      predictions - The predictions that will be evaluated.
      Returns:
      The context for this model and its predictions.
    • createEvaluation

      protected abstract E createEvaluation(C context, Map<MetricID<T>,Double> results, EvaluationProvenance provenance)
      Creates an evaluation for the given results.
      Parameters:
      context - The context that was used to compute these results.
      results - The results.
      provenance - The provenance of the results (including information about the model and dataset).
      Returns:
      The evaluation.
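      The three abstract methods above form a template: createContext builds the shared sufficient statistics, createMetrics selects the metrics that the base class passes to computeResults, and createEvaluation packages the computed values. A skeleton of a hypothetical subclass is sketched below; MyOutput, MyContext, MyMetric and MyEvaluation are placeholder types (not part of Tribuo) standing in for a real Output, MetricContext, EvaluationMetric and Evaluation implementation, and the method bodies are illustrative only:

        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        import org.tribuo.Model;
        import org.tribuo.Prediction;
        import org.tribuo.evaluation.AbstractEvaluator;
        import org.tribuo.evaluation.metrics.MetricID;
        import org.tribuo.provenance.EvaluationProvenance;

        public class MyEvaluator extends AbstractEvaluator<MyOutput, MyContext, MyEvaluation, MyMetric> {

            @Override
            protected Set<MyMetric> createMetrics(Model<MyOutput> model) {
                // Inspect the model's output domain and build one metric per output (plus any aggregates).
                Set<MyMetric> metrics = new HashSet<>();
                for (MyOutput output : model.getOutputIDInfo().getDomain()) {
                    metrics.add(new MyMetric(output));
                }
                return metrics;
            }

            @Override
            protected MyContext createContext(Model<MyOutput> model, List<Prediction<MyOutput>> predictions) {
                // Compute the sufficient statistics (e.g., a confusion matrix) once; every metric reads from it.
                return new MyContext(model, predictions);
            }

            @Override
            protected MyEvaluation createEvaluation(MyContext context,
                                                    Map<MetricID<MyOutput>, Double> results,
                                                    EvaluationProvenance provenance) {
                // Wrap the computed metric values and their provenance in the evaluation object.
                return new MyEvaluation(results, context, provenance);
            }
        }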