Package org.tribuo.math.optimisers

Provides implementations of StochasticGradientOptimiser.

Has implementations of SGD using a variety of simple learning rate tempering systems, along with AdaGrad, Adam, AdaDelta, RMSProp and Pegasos.
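For illustration, here is a minimal sketch of constructing a few of these optimisers directly. The hyperparameter values are arbitrary, and the constructors and factory methods shown are assumed overloads (e.g. that the two-argument Adam constructor takes an initial learning rate and epsilon); consult the individual class documentation for the full set.

```java
import org.tribuo.math.StochasticGradientOptimiser;
import org.tribuo.math.optimisers.AdaGrad;
import org.tribuo.math.optimisers.Adam;
import org.tribuo.math.optimisers.SGD;

public class OptimiserConstructionSketch {
    public static void main(String[] args) {
        // AdaGrad with an initial learning rate of 0.1.
        StochasticGradientOptimiser adagrad = new AdaGrad(0.1);

        // Adam, assumed (initialLearningRate, epsilon) overload.
        StochasticGradientOptimiser adam = new Adam(0.001, 1e-6);

        // Plain SGD with a linearly decaying learning rate starting at 0.05.
        StochasticGradientOptimiser sgd = SGD.getLinearDecaySGD(0.05);

        // Any of these can then be supplied to an SGD-based Trainer.
        System.out.println(adagrad + "\n" + adam + "\n" + sgd);
    }
}
```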

Also provides ParameterAveraging, which wraps another StochasticGradientOptimiser and averages the learned parameters across the gradient descent run. This is usually used for convex problems; for non-convex problems your mileage may vary.
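A minimal sketch of this wrapping pattern, assuming ParameterAveraging exposes a single-argument constructor that takes the inner optimiser:

```java
import org.tribuo.math.StochasticGradientOptimiser;
import org.tribuo.math.optimisers.AdaGrad;
import org.tribuo.math.optimisers.ParameterAveraging;

public class ParameterAveragingSketch {
    public static void main(String[] args) {
        // The inner optimiser performs the actual gradient steps.
        StochasticGradientOptimiser inner = new AdaGrad(0.1);

        // ParameterAveraging delegates each step to the inner optimiser
        // and reports the parameter average accumulated across the run.
        StochasticGradientOptimiser averaged = new ParameterAveraging(inner);

        System.out.println(averaged);
    }
}
```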

  • AdaDelta: An implementation of the AdaDelta gradient optimiser.
  • AdaGrad: An implementation of the AdaGrad gradient optimiser.
  • AdaGradRDA: An implementation of the AdaGrad gradient optimiser with regularized dual averaging.
  • Adam: An implementation of the Adam gradient optimiser.
  • GradientOptimiserOptions: CLI options for configuring a gradient optimiser.
  • GradientOptimiserOptions.StochasticGradientOptimiserType: Type of the gradient optimisers available in CLIs.
  • ParameterAveraging: Averages the parameters across a gradient run.
  • Pegasos: An implementation of the Pegasos gradient optimiser used primarily for solving the SVM problem.
  • RMSProp: An implementation of the RMSProp gradient optimiser.
  • SGD: An implementation of single learning rate SGD and optionally momentum (a construction sketch follows this list).
  • SGD.Momentum: Momentum types.
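As referenced in the SGD entry above, here is a sketch of building SGD with momentum via the static factory on SGD, assuming a three-argument getSimpleSGD overload that takes the learning rate, the momentum parameter rho, and a Momentum type (e.g. STANDARD or NESTEROV):

```java
import org.tribuo.math.StochasticGradientOptimiser;
import org.tribuo.math.optimisers.SGD;

public class MomentumSketch {
    public static void main(String[] args) {
        // Fixed learning rate of 0.01 with standard (heavy-ball) momentum, rho = 0.9.
        StochasticGradientOptimiser momentumSGD =
                SGD.getSimpleSGD(0.01, 0.9, SGD.Momentum.STANDARD);

        // Nesterov momentum variant with the same hyperparameters.
        StochasticGradientOptimiser nesterovSGD =
                SGD.getSimpleSGD(0.01, 0.9, SGD.Momentum.NESTEROV);

        System.out.println(momentumSGD + "\n" + nesterovSGD);
    }
}
```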