Interface | Description |
---|---|
Evaluation&lt;T extends Output&lt;T&gt;&gt; | An immutable evaluation of a specific model and dataset. |
EvaluationRenderer&lt;T extends Output&lt;T&gt;,E extends Evaluation&lt;T&gt;&gt; | Renders an Evaluation into a String. |
Evaluator&lt;T extends Output&lt;T&gt;,E extends Evaluation&lt;T&gt;&gt; | An evaluation factory which produces immutable Evaluations of a given Dataset using the given Model. |
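The Evaluator/Evaluation pairing above is a factory pattern: an evaluator compares a model's predictions against ground truth and returns an immutable result object. A minimal plain-Java sketch of that idea follows; the class and method names here (EvaluatorSketch, AccuracyEvaluation) are illustrative only, not Tribuo's actual API.

```java
import java.util.List;

public class EvaluatorSketch {

    // Immutable evaluation result: the metric is computed once at
    // construction time and is read-only afterwards.
    public static final class AccuracyEvaluation {
        private final double accuracy;
        AccuracyEvaluation(double accuracy) { this.accuracy = accuracy; }
        public double accuracy() { return accuracy; }
    }

    // The "factory" step: compare predicted labels to ground truth
    // and produce an immutable AccuracyEvaluation.
    public static AccuracyEvaluation evaluate(List<String> truth,
                                              List<String> predicted) {
        int correct = 0;
        for (int i = 0; i < truth.size(); i++) {
            if (truth.get(i).equals(predicted.get(i))) {
                correct++;
            }
        }
        return new AccuracyEvaluation((double) correct / truth.size());
    }

    public static void main(String[] args) {
        AccuracyEvaluation eval = evaluate(
                List.of("a", "b", "a", "b"),
                List.of("a", "b", "b", "b"));
        System.out.println(eval.accuracy()); // 0.75
    }
}
```

In Tribuo itself the factory step is Evaluator.evaluate(model, dataset) and the result carries many metrics rather than one, but the immutability contract is the same.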
Class | Description |
---|---|
AbstractEvaluator&lt;T extends Output&lt;T&gt;,C extends MetricContext&lt;T&gt;,E extends Evaluation&lt;T&gt;,M extends EvaluationMetric&lt;T,C&gt;&gt; | Base class for evaluators. |
CrossValidation&lt;T extends Output&lt;T&gt;,E extends Evaluation&lt;T&gt;&gt; | A class that does k-fold cross-validation. |
DescriptiveStats | Descriptive statistics calculated across a list of doubles. |
EvaluationAggregator | Aggregates metrics from a list of evaluations, or a list of models and datasets. |
KFoldSplitter&lt;T extends Output&lt;T&gt;&gt; | A k-fold splitter to be used in cross-validation. |
KFoldSplitter.TrainTestFold&lt;T extends Output&lt;T&gt;&gt; | Stores a train/test split for a dataset. |
OnlineEvaluator&lt;T extends Output&lt;T&gt;,E extends Evaluation&lt;T&gt;&gt; | An evaluator which aggregates predictions and produces Evaluations covering all the Predictions it has seen or created. |
TrainTestSplitter&lt;T extends Output&lt;T&gt;&gt; | Splits data into training and testing sets. |
TrainTestSplitter.SplitDataSourceProvenance | Provenance for a split data source. |
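KFoldSplitter and CrossValidation are built on a simple idea: partition the example indices into k disjoint folds, then for each fold use it as the test set and the remaining folds as training data. A self-contained plain-Java sketch of the fold assignment follows; it is not Tribuo's implementation (which shuffles with a seeded RNG and returns TrainTestFold objects), just the round-robin partitioning idea.

```java
import java.util.ArrayList;
import java.util.List;

public class KFoldSketch {

    // Assign example indices 0..numExamples-1 to k folds round-robin.
    // Each inner list is one fold's test indices; the training set for
    // that fold is every index not in the list.
    public static List<List<Integer>> testFolds(int numExamples, int k) {
        List<List<Integer>> folds = new ArrayList<>();
        for (int f = 0; f < k; f++) {
            folds.add(new ArrayList<>());
        }
        for (int i = 0; i < numExamples; i++) {
            folds.get(i % k).add(i);
        }
        return folds;
    }

    public static void main(String[] args) {
        // 10 examples, 3 folds: fold sizes are 4, 3, 3 and every
        // index appears in exactly one fold.
        List<List<Integer>> folds = testFolds(10, 3);
        System.out.println(folds.get(0)); // [0, 3, 6, 9]
        System.out.println(folds.get(1)); // [1, 4, 7]
    }
}
```

Because the folds are disjoint and jointly cover all indices, averaging the per-fold evaluations (as CrossValidation does, feeding DescriptiveStats-style summaries) gives an estimate of generalization performance that uses every example for testing exactly once.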
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.