Package org.tribuo.multilabel.evaluation

Class MultiLabelEvaluator

java.lang.Object
    org.tribuo.evaluation.AbstractEvaluator<MultiLabel,org.tribuo.multilabel.evaluation.MultiLabelMetric.Context,MultiLabelEvaluation,MultiLabelMetric>
        org.tribuo.multilabel.evaluation.MultiLabelEvaluator

All Implemented Interfaces:
    Evaluator<MultiLabel,MultiLabelEvaluation>

public class MultiLabelEvaluator
extends AbstractEvaluator<MultiLabel,org.tribuo.multilabel.evaluation.MultiLabelMetric.Context,MultiLabelEvaluation,MultiLabelMetric>
An Evaluator for MultiLabel problems. If the dataset contains an unknown MultiLabel (as generated by MultiLabelFactory.getUnknownOutput()) or a valid MultiLabel which is outside of the domain of the Model, then the evaluate methods will throw IllegalArgumentException with an appropriate message.
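
A minimal usage sketch, not taken from the Tribuo documentation: the model and dataset variables are assumed to already exist, and the microAveragedF1()/macroAveragedF1() accessors are those declared on the ClassifierEvaluation interface which MultiLabelEvaluation extends.

    // A minimal sketch, assuming a trained model and a test dataset exist.
    // The evaluate call throws IllegalArgumentException if the dataset
    // contains an unknown MultiLabel or one outside the model's domain.
    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluator;

    public class EvaluateExample {
        public static MultiLabelEvaluation evaluate(Model<MultiLabel> model,
                                                    Dataset<MultiLabel> testData) {
            MultiLabelEvaluator evaluator = new MultiLabelEvaluator();
            MultiLabelEvaluation evaluation = evaluator.evaluate(model, testData);
            System.out.println("Micro-averaged F1 = " + evaluation.microAveragedF1());
            System.out.println("Macro-averaged F1 = " + evaluation.macroAveragedF1());
            return evaluation;
        }
    }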
Constructor Summary

Constructors
    MultiLabelEvaluator()

Method Summary
protected org.tribuo.multilabel.evaluation.MultiLabelMetric.Context
    createContext(Model<MultiLabel> model, List<Prediction<MultiLabel>> predictions)
    Create the context needed for evaluation.

protected MultiLabelEvaluation
    createEvaluation(org.tribuo.multilabel.evaluation.MultiLabelMetric.Context context, Map<MetricID<MultiLabel>,Double> results, EvaluationProvenance provenance)
    Create an evaluation for the given results.

protected Set<MultiLabelMetric>
    createMetrics(Model<MultiLabel> model)
    Creates the appropriate set of metrics for this model, by querying for its OutputInfo.

Methods inherited from class org.tribuo.evaluation.AbstractEvaluator
computeResults, evaluate, evaluate, evaluate
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.tribuo.evaluation.Evaluator
createOnlineEvaluator, evaluate
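
The inherited evaluate overloads also accept precomputed predictions together with a DataProvenance, which avoids predicting twice when the outputs are already in hand. A brief sketch under the same assumptions as above:

    // A minimal sketch of the prediction-list overload inherited from
    // AbstractEvaluator; `model` and `testData` are assumed to exist.
    import java.util.List;
    import org.tribuo.Dataset;
    import org.tribuo.Model;
    import org.tribuo.Prediction;
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluator;

    public class PredictionEvaluateExample {
        public static MultiLabelEvaluation evaluate(Model<MultiLabel> model,
                                                    Dataset<MultiLabel> testData) {
            // Compute the predictions once so they can be reused elsewhere.
            List<Prediction<MultiLabel>> predictions = model.predict(testData);
            return new MultiLabelEvaluator()
                    .evaluate(model, predictions, testData.getProvenance());
        }
    }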
Constructor Details

MultiLabelEvaluator

public MultiLabelEvaluator()
Method Details

createMetrics

protected Set<MultiLabelMetric> createMetrics(Model<MultiLabel> model)

Description copied from class: AbstractEvaluator
Creates the appropriate set of metrics for this model, by querying for its OutputInfo.

Specified by:
    createMetrics in class AbstractEvaluator<MultiLabel,org.tribuo.multilabel.evaluation.MultiLabelMetric.Context,MultiLabelEvaluation,MultiLabelMetric>
Parameters:
    model - The model to inspect.
Returns:
    The set of metrics.
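
The OutputInfo query described here is also available to user code; a short sketch, assuming a trained model variable, of inspecting the output domain the metric set will be built over:

    // A minimal sketch: inspecting the output domain that createMetrics
    // consults. `model` is assumed to be a trained Model<MultiLabel>.
    import org.tribuo.ImmutableOutputInfo;
    import org.tribuo.Model;
    import org.tribuo.multilabel.MultiLabel;

    public class DomainInspection {
        public static void printDomain(Model<MultiLabel> model) {
            ImmutableOutputInfo<MultiLabel> outputInfo = model.getOutputIDInfo();
            for (MultiLabel label : outputInfo.getDomain()) {
                System.out.println("Metric target: " + label);
            }
        }
    }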
createContext

protected org.tribuo.multilabel.evaluation.MultiLabelMetric.Context createContext(Model<MultiLabel> model, List<Prediction<MultiLabel>> predictions)

Description copied from class: AbstractEvaluator
Create the context needed for evaluation. The context might store global properties or cache computation.

Specified by:
    createContext in class AbstractEvaluator<MultiLabel,org.tribuo.multilabel.evaluation.MultiLabelMetric.Context,MultiLabelEvaluation,MultiLabelMetric>
Parameters:
    model - the model that will be evaluated
    predictions - the predictions that will be evaluated
Returns:
    the context for this model and its predictions
createEvaluation

protected MultiLabelEvaluation createEvaluation(org.tribuo.multilabel.evaluation.MultiLabelMetric.Context context, Map<MetricID<MultiLabel>,Double> results, EvaluationProvenance provenance)

Description copied from class: AbstractEvaluator
Create an evaluation for the given results.

Specified by:
    createEvaluation in class AbstractEvaluator<MultiLabel,org.tribuo.multilabel.evaluation.MultiLabelMetric.Context,MultiLabelEvaluation,MultiLabelMetric>
Parameters:
    context - the context that was used to compute these results
    results - the results
    provenance - the provenance of the results (including information about the model and dataset)
Returns:
    the evaluation
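
Once created, the evaluation exposes both per-label and aggregate metrics. A hedged sketch: the label name "MONKEY" is a placeholder, the per-label precision/recall/f1 accessors come from the ClassifierEvaluation interface, and jaccardScore() is the multi-label specific accessor declared on MultiLabelEvaluation.

    // A minimal sketch of reading results back out of a MultiLabelEvaluation.
    // "MONKEY" is a hypothetical label name used purely for illustration.
    import org.tribuo.multilabel.MultiLabel;
    import org.tribuo.multilabel.evaluation.MultiLabelEvaluation;

    public class MetricReport {
        public static void report(MultiLabelEvaluation evaluation) {
            MultiLabel monkey = new MultiLabel("MONKEY"); // hypothetical label
            System.out.println("precision(MONKEY) = " + evaluation.precision(monkey));
            System.out.println("recall(MONKEY)    = " + evaluation.recall(monkey));
            System.out.println("f1(MONKEY)        = " + evaluation.f1(monkey));
            System.out.println("Jaccard score     = " + evaluation.jaccardScore());
        }
    }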