Package org.tribuo.math.optimisers

Provides implementations of StochasticGradientOptimiser.

Contains implementations of SGD using a variety of simple learning rate decay schedules, along with AdaGrad, Adam, AdaDelta, RMSProp and Pegasos.
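
As a rough illustration, the sketch below constructs a few of these optimisers. It assumes the SGD factory methods and the single-argument AdaGrad and no-argument Adam constructors; consult the current Javadoc for the exact signatures available in your Tribuo version.

    import org.tribuo.math.StochasticGradientOptimiser;
    import org.tribuo.math.optimisers.AdaGrad;
    import org.tribuo.math.optimisers.Adam;
    import org.tribuo.math.optimisers.SGD;

    public class OptimiserExamples {
        public static void main(String[] args) {
            // Plain SGD with a fixed learning rate of 0.01.
            StochasticGradientOptimiser simple = SGD.getSimpleSGD(0.01);

            // SGD whose learning rate decays linearly with the iteration count.
            StochasticGradientOptimiser linearDecay = SGD.getLinearDecaySGD(0.01);

            // AdaGrad with an initial learning rate of 0.1.
            StochasticGradientOptimiser adagrad = new AdaGrad(0.1);

            // Adam with its default hyperparameters.
            StochasticGradientOptimiser adam = new Adam();

            System.out.println(simple + ", " + linearDecay + ", " + adagrad + ", " + adam);
        }
    }

Any of these can be supplied to an SGD-based Tribuo trainer wherever a StochasticGradientOptimiser is expected.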

Also provides ParameterAveraging, which wraps another StochasticGradientOptimiser and averages the learned parameters across the gradient descent run. This is usually applied to convex problems; for non-convex ones your mileage may vary.
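
A minimal sketch of wrapping an inner optimiser follows, assuming ParameterAveraging exposes a single-argument constructor taking the wrapped StochasticGradientOptimiser:

    import org.tribuo.math.StochasticGradientOptimiser;
    import org.tribuo.math.optimisers.AdaGrad;
    import org.tribuo.math.optimisers.ParameterAveraging;

    public class AveragingExample {
        public static void main(String[] args) {
            // The inner optimiser that performs the actual gradient updates.
            StochasticGradientOptimiser inner = new AdaGrad(0.1);

            // Wrap it so the parameters returned at the end of training are the
            // average of the values seen across the run, not just the final ones.
            StochasticGradientOptimiser averaged = new ParameterAveraging(inner);

            // 'averaged' can be passed to an SGD-based trainer in place of 'inner'.
            System.out.println(averaged);
        }
    }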