Class LibLinearTrainer<T extends Output<T>>

java.lang.Object
org.tribuo.common.liblinear.LibLinearTrainer<T>
All Implemented Interfaces:
com.oracle.labs.mlrg.olcut.config.Configurable, com.oracle.labs.mlrg.olcut.provenance.Provenancable<TrainerProvenance>, Trainer<T>
Direct Known Subclasses:
LibLinearAnomalyTrainer, LibLinearClassificationTrainer, LibLinearRegressionTrainer

public abstract class LibLinearTrainer<T extends Output<T>> extends Object implements Trainer<T>
A Trainer which wraps a liblinear-java trainer.

See:

 Fan RE, Chang KW, Hsieh CJ, Wang XR, Lin CJ.
 "LIBLINEAR: A library for Large Linear Classification"
 Journal of Machine Learning Research, 2008.
 
and for the original algorithm:
 Cortes C, Vapnik V.
 "Support-Vector Networks"
 Machine Learning, 1995.
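
 A minimal usage sketch, assuming the LibLinearClassificationTrainer subclass listed above, its
 no-argument constructor with default parameters, and the org.tribuo.classification.liblinear
 package location (substitute the subclass appropriate to your prediction task):

      import org.tribuo.Dataset;
      import org.tribuo.classification.Label;
      import org.tribuo.classification.liblinear.LibLinearClassificationTrainer;
      import org.tribuo.common.liblinear.LibLinearModel;

      public final class LibLinearUsageSketch {
          // Trains a linear classifier over the supplied dataset using liblinear's defaults.
          public static LibLinearModel<Label> trainDefault(Dataset<Label> trainData) {
              LibLinearClassificationTrainer trainer = new LibLinearClassificationTrainer();
              return trainer.train(trainData);
          }
      }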
 
  • Field Details

    • libLinearParams

      protected de.bwaldvogel.liblinear.Parameter libLinearParams
    • trainerType

      @Config(description="Algorithm to use.") protected LibLinearType<T extends Output<T>> trainerType
    • cost

      @Config(description="Cost penalty for misclassifications.") protected double cost
    • maxIterations

      @Config(description="Maximum number of iterations before terminating.") protected int maxIterations
    • terminationCriterion

      @Config(description="Stop iterating when the loss score decreases by less than this value.") protected double terminationCriterion
    • epsilon

      @Config(description="Epsilon insensitivity in the regression cost function.") protected double epsilon
    • seed

      @Config(description="RNG seed.") protected long seed
  • Constructor Details

    • LibLinearTrainer

      protected LibLinearTrainer()
      For OLCUT
    • LibLinearTrainer

      protected LibLinearTrainer(LibLinearType<T> trainerType, double cost, int maxIterations, double terminationCriterion)
      Creates a trainer for a LibLinear model

      Uses Trainer.DEFAULT_SEED as the RNG seed, and 0.1 as epsilon.

      Parameters:
      trainerType - Loss function and optimisation method combination.
      cost - Cost penalty for each incorrectly classified training point.
      maxIterations - The maximum number of dataset iterations.
      terminationCriterion - How close does the optimisation function need to be before terminating that subproblem (usually set to 0.1).
    • LibLinearTrainer

      protected LibLinearTrainer(LibLinearType<T> trainerType, double cost, int maxIterations, double terminationCriterion, long seed)
      Creates a trainer for a LibLinear model
      Parameters:
      trainerType - Loss function and optimisation method combination.
      cost - Cost penalty for each incorrectly classified training point.
      maxIterations - The maximum number of dataset iterations.
      terminationCriterion - How close does the optimisation function need to be before terminating that subproblem (usually set to 0.1).
      seed - The RNG seed.
    • LibLinearTrainer

      protected LibLinearTrainer(LibLinearType<T> trainerType, double cost, int maxIterations, double terminationCriterion, double epsilon)
      Creates a trainer for a LibLinear model

      Uses Trainer.DEFAULT_SEED as the RNG seed.

      Parameters:
      trainerType - Loss function and optimisation method combination.
      cost - Cost penalty for each incorrectly classified training point.
      maxIterations - The maximum number of dataset iterations.
      terminationCriterion - How close does the optimisation function need to be before terminating that subproblem (usually set to 0.1).
      epsilon - The insensitivity of the regression loss to small differences.
    • LibLinearTrainer

      protected LibLinearTrainer(LibLinearType<T> trainerType, double cost, int maxIterations, double terminationCriterion, double epsilon, long seed)
      Creates a trainer for a LibLinear model
      Parameters:
      trainerType - Loss function and optimisation method combination.
      cost - Cost penalty for each incorrectly classified training point.
      maxIterations - The maximum number of dataset iterations.
      terminationCriterion - How close does the optimisation function need to be before terminating that subproblem (usually set to 0.1).
      epsilon - The insensitivity of the regression loss to small differences.
      seed - The RNG seed.
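
      Because these constructors are protected, concrete subclasses reach them through super. A
      hypothetical subclass constructor (the class name below is illustrative, not part of Tribuo)
      might delegate as follows:

      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearTrainer;
      import org.tribuo.common.liblinear.LibLinearType;

      public abstract class SketchTrainer extends LibLinearTrainer<Label> {
          protected SketchTrainer(LibLinearType<Label> trainerType, double cost,
                                  int maxIterations, double terminationCriterion,
                                  double epsilon, long seed) {
              // Hands the loss/optimiser choice, cost, iteration budget, tolerance,
              // regression epsilon and RNG seed to LibLinearTrainer.
              super(trainerType, cost, maxIterations, terminationCriterion, epsilon, seed);
          }
      }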
  • Method Details

    • postConfig

      public void postConfig()
      Used by the OLCUT configuration system, and should not be called by external code.
      Specified by:
      postConfig in interface com.oracle.labs.mlrg.olcut.config.Configurable
    • train

      public LibLinearModel<T> train(Dataset<T> examples)
      Description copied from interface: Trainer
      Trains a predictive model using the examples in the given data set.
      Specified by:
      train in interface Trainer<T extends Output<T>>
      Parameters:
      examples - the data set containing the examples.
      Returns:
      a predictive model that can be used to generate predictions for new examples.
    • train

      public LibLinearModel<T> train(Dataset<T> examples, Map<String,com.oracle.labs.mlrg.olcut.provenance.Provenance> runProvenance)
      Description copied from interface: Trainer
      Trains a predictive model using the examples in the given data set.
      Specified by:
      train in interface Trainer<T extends Output<T>>
      Parameters:
      examples - the data set containing the examples.
      runProvenance - Training run specific provenance (e.g., fold number).
      Returns:
      a predictive model that can be used to generate predictions for new examples.
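
      For example, fold-specific information can be attached to the model's provenance at training
      time. The sketch below assumes OLCUT's IntProvenance primitive and an already constructed
      trainer for the Label task; adapt both to your setup:

      import java.util.HashMap;
      import java.util.Map;
      import com.oracle.labs.mlrg.olcut.provenance.Provenance;
      import com.oracle.labs.mlrg.olcut.provenance.primitives.IntProvenance;
      import org.tribuo.Dataset;
      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearModel;
      import org.tribuo.common.liblinear.LibLinearTrainer;

      // Records which cross-validation fold produced this model in its provenance.
      public static LibLinearModel<Label> trainFold(LibLinearTrainer<Label> trainer,
                                                    Dataset<Label> foldData, int foldNumber) {
          Map<String, Provenance> runProvenance = new HashMap<>();
          runProvenance.put("fold-number", new IntProvenance("fold-number", foldNumber));
          return trainer.train(foldData, runProvenance);
      }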
    • train

      public LibLinearModel<T> train(Dataset<T> examples, Map<String,com.oracle.labs.mlrg.olcut.provenance.Provenance> runProvenance, int invocationCount)
      Description copied from interface: Trainer
      Trains a predictive model using the examples in the given data set.
      Specified by:
      train in interface Trainer<T extends Output<T>>
      Parameters:
      examples - the data set containing the examples.
      runProvenance - Training run specific provenance (e.g., fold number).
      invocationCount - The invocation counter that the trainer should be set to before training, which in most cases alters the state of the RNG inside this trainer. If the value is set to Trainer.INCREMENT_INVOCATION_COUNT then the invocation count is not changed.
      Returns:
      a predictive model that can be used to generate predictions for new examples.
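
      A sketch of the three-argument form: setting the invocation count to 5 before training puts the
      trainer's RNG in the state it would have after five prior train calls, which is useful when
      replaying part of a previously recorded training sequence (the helper method is illustrative):

      import java.util.Collections;
      import org.tribuo.Dataset;
      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearModel;
      import org.tribuo.common.liblinear.LibLinearTrainer;

      // Trains as if this were the sixth invocation of a previously used trainer instance.
      public static LibLinearModel<Label> replaySixth(LibLinearTrainer<Label> trainer,
                                                      Dataset<Label> data) {
          return trainer.train(data, Collections.emptyMap(), 5);
      }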
    • getInvocationCount

      public int getInvocationCount()
      Description copied from interface: Trainer
      The number of times this trainer instance has had its train method invoked.

      This is used to determine how many times the trainer's RNG has been accessed to ensure replicability in the random number stream.

      Specified by:
      getInvocationCount in interface Trainer<T extends Output<T>>
      Returns:
      The number of train invocations.
    • setInvocationCount

      public void setInvocationCount(int invocationCount)
      Description copied from interface: Trainer
      Set the internal state of the trainer to the provided number of invocations of the train method.

      This is used when reproducing a Tribuo-trained model: by simulating invocations of the train method, the RNG is restored to the state it was in when the original model was trained. This method should ALWAYS be overridden; the default implementation exists purely for compatibility.

      In a future major release this default implementation will be removed.

      Specified by:
      setInvocationCount in interface Trainer<T extends Output<T>>
      Parameters:
      invocationCount - the number of invocations of the train method to simulate
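
      For example, to put a freshly constructed trainer into the same RNG state as an existing trainer
      that has already trained three models (a sketch; the surrounding method is illustrative):

      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearTrainer;

      // Advances a fresh trainer's RNG state as though train had been called three times.
      public static void syncToThreeInvocations(LibLinearTrainer<Label> freshTrainer) {
          freshTrainer.setInvocationCount(3);
          // Subsequent train calls now draw from the same point in the random number
          // stream as the fourth training run of the original trainer instance.
      }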
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • trainModels

      protected abstract List<de.bwaldvogel.liblinear.Model> trainModels(de.bwaldvogel.liblinear.Parameter curParams, int numFeatures, de.bwaldvogel.liblinear.FeatureNode[][] features, double[][] outputs)
      Train all the liblinear instances necessary for this dataset.
      Parameters:
      curParams - The LibLinear parameters.
      numFeatures - The number of features in this dataset.
      features - The features themselves.
      outputs - The outputs.
      Returns:
      A list of liblinear models.
    • createModel

      protected abstract LibLinearModel<T> createModel(ModelProvenance provenance, ImmutableFeatureMap featureIDMap, ImmutableOutputInfo<T> outputIDInfo, List<de.bwaldvogel.liblinear.Model> models)
      Construct the appropriate subtype of LibLinearModel for the prediction task.
      Parameters:
      provenance - The model provenance.
      featureIDMap - The feature id map.
      outputIDInfo - The output id info.
      models - The list of linear models.
      Returns:
      An implementation of LibLinearModel.
    • extractData

      protected abstract com.oracle.labs.mlrg.olcut.util.Pair<de.bwaldvogel.liblinear.FeatureNode[][],double[][]> extractData(Dataset<T> data, ImmutableOutputInfo<T> outputInfo, ImmutableFeatureMap featureMap)
      Extracts the features and Outputs in LibLinear's format.
      Parameters:
      data - The input data.
      outputInfo - The output info.
      featureMap - The feature info.
      Returns:
      The features and outputs.
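
      Together with trainModels and createModel above, extractData forms the contract a new
      prediction task must implement. A skeletal, non-functional subclass is sketched below; the
      class name and stub bodies are illustrative only:

      import java.util.List;
      import com.oracle.labs.mlrg.olcut.util.Pair;
      import org.tribuo.Dataset;
      import org.tribuo.ImmutableFeatureMap;
      import org.tribuo.ImmutableOutputInfo;
      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearModel;
      import org.tribuo.common.liblinear.LibLinearTrainer;
      import org.tribuo.provenance.ModelProvenance;

      public class SketchLibLinearTrainer extends LibLinearTrainer<Label> {

          @Override
          protected List<de.bwaldvogel.liblinear.Model> trainModels(
                  de.bwaldvogel.liblinear.Parameter curParams, int numFeatures,
                  de.bwaldvogel.liblinear.FeatureNode[][] features, double[][] outputs) {
              // Invoke liblinear's training routine once per required model.
              throw new UnsupportedOperationException("sketch only");
          }

          @Override
          protected LibLinearModel<Label> createModel(ModelProvenance provenance,
                  ImmutableFeatureMap featureIDMap, ImmutableOutputInfo<Label> outputIDInfo,
                  List<de.bwaldvogel.liblinear.Model> models) {
              // Wrap the trained liblinear models in the task-specific LibLinearModel subtype.
              throw new UnsupportedOperationException("sketch only");
          }

          @Override
          protected Pair<de.bwaldvogel.liblinear.FeatureNode[][], double[][]> extractData(
                  Dataset<Label> data, ImmutableOutputInfo<Label> outputInfo,
                  ImmutableFeatureMap featureMap) {
              // Convert each Example into a FeatureNode[] (see exampleToNodes below) and each
              // output into its numeric liblinear representation.
              throw new UnsupportedOperationException("sketch only");
          }
      }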
    • setupParameters

      protected de.bwaldvogel.liblinear.Parameter setupParameters(ImmutableOutputInfo<T> info)
      Constructs the parameters. Most of the time this just clones the existing ones, but classification overrides it to incorporate label weights if they exist.
      Parameters:
      info - The output info.
      Returns:
      The Parameters to use for training.
    • exampleToNodes

      public static <T extends Output<T>> de.bwaldvogel.liblinear.FeatureNode[] exampleToNodes(Example<T> example, ImmutableFeatureMap featureIDMap, List<de.bwaldvogel.liblinear.FeatureNode> features)
      Converts a Tribuo Example into a liblinear FeatureNode array, including a bias feature.

      If there is a collision between feature ids (i.e., if there is feature hashing or some other mechanism changing the feature ids) then the feature values are summed.

      Type Parameters:
      T - The output type.
      Parameters:
      example - The input example.
      featureIDMap - The feature id map which contains the example's indices.
      features - A buffer. If null then an array list is created and used internally.
      Returns:
      The features suitable for use in liblinear.
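
      A sketch of converting a batch of examples, reusing one buffer to cut per-example allocation
      (the method and variable names are illustrative):

      import java.util.ArrayList;
      import java.util.List;
      import de.bwaldvogel.liblinear.FeatureNode;
      import org.tribuo.Example;
      import org.tribuo.ImmutableFeatureMap;
      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearTrainer;

      public static FeatureNode[][] convert(List<Example<Label>> examples,
                                            ImmutableFeatureMap featureIDMap) {
          FeatureNode[][] out = new FeatureNode[examples.size()][];
          List<FeatureNode> buffer = new ArrayList<>();
          for (int i = 0; i < examples.size(); i++) {
              buffer.clear(); // defensively empty the shared buffer before each conversion
              out[i] = LibLinearTrainer.exampleToNodes(examples.get(i), featureIDMap, buffer);
          }
          return out;
      }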
    • getProvenance

      public TrainerProvenance getProvenance()
      Specified by:
      getProvenance in interface com.oracle.labs.mlrg.olcut.provenance.Provenancable<TrainerProvenance>
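
      The returned TrainerProvenance records the trainer's configuration, so it can be logged or
      stored alongside the trained model. A small sketch, assuming a concrete trainer instance:

      import org.tribuo.classification.Label;
      import org.tribuo.common.liblinear.LibLinearTrainer;
      import org.tribuo.provenance.TrainerProvenance;

      // Prints the trainer's captured configuration.
      public static void logProvenance(LibLinearTrainer<Label> trainer) {
          TrainerProvenance provenance = trainer.getProvenance();
          System.out.println(provenance.toString());
      }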