This package provides the necessary infrastructure for transformations. The workflow is first to build a
TransformationMap, which represents the
Transformations and the order in which they should be applied to the specified
Features. This map can be applied to a Dataset to produce a
TransformerMap, which contains a fitted set of
Transformers. These can be used to apply the same transformation to any
other Dataset (e.g., to apply the same transformation to training and test sets), or at prediction
time to stream data through.
It also provides a
TransformTrainer which accepts a
TransformationMap and an inner
Trainer and produces a
TransformedModel which automatically transforms its input data at prediction time.
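The fit-once, apply-many pattern above can be sketched outside of Tribuo. The class below is not the Tribuo API; it is a minimal analogue in which a transformation description is fit on training data to produce a fitted transformer (playing the role of a TransformerMap entry) that can then be applied unchanged to any other data:

```java
import java.util.Arrays;

// Minimal analogue (not the Tribuo API) of the TransformationMap ->
// TransformerMap workflow: a transformation is fit once on training data,
// producing a fitted transformer that can be applied to any other dataset,
// e.g. the test set or streamed prediction-time inputs.
public class FitOnceApplyMany {
    // The fitted "Transformer": the statistics needed to apply the transform.
    record Scaler(double mean, double std) {
        double[] apply(double[] xs) {
            return Arrays.stream(xs).map(x -> (x - mean) / std).toArray();
        }
    }

    // Fitting computes the statistics from the training data only.
    static Scaler fit(double[] train) {
        double mean = Arrays.stream(train).average().orElse(0.0);
        double var = Arrays.stream(train)
                .map(x -> (x - mean) * (x - mean)).average().orElse(0.0);
        return new Scaler(mean, Math.sqrt(var));
    }

    public static void main(String[] args) {
        double[] train = {1.0, 2.0, 3.0, 4.0, 5.0};
        Scaler scaler = fit(train);                  // fit on the training set
        // The same fitted statistics are reused on unseen data, exactly as a
        // TransformerMap reuses its fitted Transformers at prediction time.
        double[] scaledTest = scaler.apply(new double[]{3.0, 6.0});
        System.out.println(Arrays.toString(scaledTest));
    }
}
```

The key property mirrored here is that the test data never influences the fitted statistics, which is what makes applying the same TransformerMap to training and test sets sound.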
Transformations don't produce new
Features - they only modify the values of existing ones.
When doing so they can be instructed to treat Features that are absent due to sparsity as zero or as
not existing at all. Independently, we can explicitly add zero-valued Features by densifying the dataset
before the transformation is fit or before it is applied. Once they exist, these Features can be altered by
Transformers and are visible to any
Transformations which are fit subsequently.
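The following sketch (not Tribuo code) illustrates what densifying does to a sparse example. A sparse example stores only the feature values that are present; densifying adds an explicit 0.0 for every feature name known to the dataset's feature map, so those values then exist and can be seen and altered by later transformations:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Sketch (not Tribuo code) of densification: implicit zeros (features absent
// from the sparse example but present in the dataset's feature map) become
// explicit zero-valued entries.
public class DensifyExample {
    static Map<String, Double> densify(Map<String, Double> sparse, Set<String> featureMap) {
        Map<String, Double> dense = new TreeMap<>();
        for (String name : featureMap) {
            // Implicit zero -> explicit zero; present values are kept as-is.
            dense.put(name, sparse.getOrDefault(name, 0.0));
        }
        return dense;
    }

    public static void main(String[] args) {
        Set<String> featureMap = new TreeSet<>(List.of("A", "B", "C"));
        Map<String, Double> sparse = Map.of("B", 2.5);
        System.out.println(densify(sparse, featureMap)); // {A=0.0, B=2.5, C=0.0}
    }
}
```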
The transformation fitting methods have two parameters which alter their behaviour:
includeImplicitZeroFeatures controls whether the transformation incorporates the implicit zero-valued
features (i.e., the ones not present in the example but present in the dataset's
FeatureMap) when building the transformation statistics. This is
important when working with, e.g.,
IDFTransformation, as it allows correct
computation of the inverse document frequency, but can be detrimental to features which are one-hot encodings of
categoricals (as they have many more implicit zeros).
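A small numeric illustration (not Tribuo code) of why the implicit zeros matter for IDF-style statistics: a term's document frequency is the number of examples where it is explicitly present, and the implicit zeros are the examples where it is absent. If fitting ignores the implicit zeros, the total document count collapses to the document frequency and every IDF degenerates to log(1) = 0:

```java
// Sketch (not Tribuo code) of fitting an IDF-style statistic with and
// without the implicit zero-valued features.
public class IdfSketch {
    // Standard IDF: log(total documents / documents containing the term).
    static double idf(int numDocuments, int documentFrequency) {
        return Math.log((double) numDocuments / documentFrequency);
    }

    public static void main(String[] args) {
        int df = 2;        // the term occurs explicitly in 2 examples
        int totalDocs = 8; // 6 more examples hold it only as an implicit zero
        // Fit including implicit zeros: the statistic sees all 8 examples,
        // giving log(8/2) = log(4).
        System.out.println(idf(totalDocs, df));
        // Fit on explicit values only: the statistic sees just the 2 examples
        // where the term is present, so the IDF degenerates to log(2/2) = 0.
        System.out.println(idf(df, df));
    }
}
```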
densify controls if the example or dataset should have
its implicit zero valued features converted into explicit zero valued features (i.e., it makes a sparse example into
a dense one which contains an explicit value for every feature known to the dataset) before the transformation is
applied, and transformations are only applied to feature values which are present.
These parameters interact to form 4 possibilities:
- Both false: transformations are only fit on explicit feature values, and only applied to explicit feature values.
- Both true: transformations are fit on explicit features and implicit zeros, and implicit zeros are converted into explicit zeros and transformed.
- includeImplicitZeroFeatures is true, densify is false: the implicit zeros are used to fit the transformation, but are not modified when the transformation is applied. This is most useful when working with text data where you want to compute IDF-style statistics.
- includeImplicitZeroFeatures is false, densify is true: the implicit zeros are not used to fit the transformation, but are converted to explicit zeros and transformed. This is less useful than the other three combinations, but could be used to move the minimum value, or when zero is not appropriate for a missing value and needs to be transformed.
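The four combinations can be made concrete with a toy single-feature dataset (not Tribuo code), using mean-centering as the transformation. Here the feature is explicitly present in 2 of 4 examples with values {4.0, 8.0}, and the other 2 examples hold it as an implicit zero; includeImplicitZeroFeatures changes the fitted mean, while densify changes whether the implicit zeros are transformed at all:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch (not Tribuo code) of the includeImplicitZeroFeatures/densify
// combinations for a single feature, using mean-centering.
public class FourCombinations {
    // Fit the mean over explicit values only, or with implicit zeros counted in.
    static double fitMean(double[] explicitValues, int implicitZeros,
                          boolean includeImplicitZeroFeatures) {
        double sum = Arrays.stream(explicitValues).sum();
        int count = explicitValues.length
                + (includeImplicitZeroFeatures ? implicitZeros : 0);
        return sum / count;
    }

    // Apply: explicit values are always transformed; implicit zeros are only
    // transformed if densify first turned them into explicit zeros.
    static List<Double> apply(double[] explicitValues, int implicitZeros,
                              double mean, boolean densify) {
        List<Double> out = new ArrayList<>();
        for (double v : explicitValues) out.add(v - mean);
        if (densify) for (int i = 0; i < implicitZeros; i++) out.add(0.0 - mean);
        return out;
    }

    public static void main(String[] args) {
        double[] explicitValues = {4.0, 8.0};
        int implicitZeros = 2;
        for (boolean include : new boolean[]{false, true}) {
            for (boolean densify : new boolean[]{false, true}) {
                double mean = fitMean(explicitValues, implicitZeros, include);
                System.out.printf("include=%b densify=%b mean=%.1f -> %s%n",
                        include, densify, mean,
                        apply(explicitValues, implicitZeros, mean, densify));
            }
        }
    }
}
```

With includeImplicitZeroFeatures false the fitted mean is 6.0 (over the 2 explicit values); with it true the mean is 3.0 (over all 4). With densify false only the 2 explicit values are shifted; with it true the 2 implicit zeros are materialised and shifted as well.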
Calling MutableDataset.densify() before passing the data to
TransformTrainer.train(org.tribuo.Dataset<T>, java.util.Map<java.lang.String, com.oracle.labs.mlrg.olcut.provenance.Provenance>) is equivalent to setting
includeImplicitZeroFeatures to true and
densify to true. To sum up, in the context of transformations,
includeImplicitZeroFeatures determines whether (implicit) zero-valued features are measured, and
densify determines whether they can be altered.
Class summary:
- Transformation - An interface representing a class of transformations which can be applied to a feature.
- TransformationMap - A carrier type for a set of transformations to be applied to a Dataset.
- TransformationMap.TransformationList - A carrier type as OLCUT does not support nested generics.
- TransformationProvenance - A tag interface for provenances in the transformation system.
- TransformedModel<T extends Output<T>> - Wraps a Model; Examples are transformed appropriately before the model makes predictions.
- Transformer - A fitted Transformation which can apply a transform to the input value.
- TransformerMap - A collection of Transformers which can be applied to a Dataset or Example.
- TransformerMap.TransformerMapProvenance - Provenance for a TransformerMap.
- TransformStatistics - An interface for the statistics that need to be collected for a specific Transformation on a single feature.
- TransformTrainer<T extends Output<T>> - A Trainer which encapsulates another trainer plus a TransformationMap object to apply to each Dataset before training each Model.