Provides implementations of StochasticGradientOptimiser.
Class | Description |
---|---|
AdaDelta | An implementation of the AdaDelta gradient optimiser. |
AdaGrad | An implementation of the AdaGrad gradient optimiser. |
AdaGradRDA | An implementation of the AdaGrad gradient optimiser with regularized dual averaging. |
Adam | An implementation of the Adam gradient optimiser. |
GradientOptimiserOptions | CLI options for configuring a gradient optimiser. |
ParameterAveraging | Averages the parameters across a gradient run. |
Pegasos | An implementation of the Pegasos gradient optimiser, used primarily for solving the SVM problem. |
RMSProp | An implementation of the RMSProp gradient optimiser. |
SGD | An implementation of single learning rate SGD, optionally with momentum. |
Enum | Description |
---|---|
GradientOptimiserOptions.StochasticGradientOptimiserType | Type of the gradient optimisers available in CLIs. |
SGD.Momentum | Momentum types. |
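A minimal sketch of constructing a few of these optimisers. It assumes the `org.tribuo.math` package layout and the constructors and static factories shown here (`SGD.getSimpleSGD`, `new AdaGrad(double)`, `new Adam()`); check each class's Javadoc for the exact signatures and defaults.

```java
import org.tribuo.math.StochasticGradientOptimiser;
import org.tribuo.math.optimisers.AdaGrad;
import org.tribuo.math.optimisers.Adam;
import org.tribuo.math.optimisers.SGD;

public class OptimiserExamples {
    public static void main(String[] args) {
        // Plain SGD with a single fixed learning rate.
        StochasticGradientOptimiser sgd = SGD.getSimpleSGD(0.1);

        // SGD with Nesterov momentum; rho is the momentum strength
        // (assumes SGD.Momentum exposes a NESTEROV constant).
        StochasticGradientOptimiser nesterov =
                SGD.getSimpleSGD(0.1, 0.9, SGD.Momentum.NESTEROV);

        // Adaptive optimisers: AdaGrad takes an initial learning rate,
        // Adam's no-argument constructor uses the library defaults.
        StochasticGradientOptimiser adagrad = new AdaGrad(0.1);
        StochasticGradientOptimiser adam = new Adam();

        // Any of these can be handed to an SGD-based trainer.
        System.out.println(sgd + ", " + nesterov + ", " + adagrad + ", " + adam);
    }
}
```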
Provides implementations of StochasticGradientOptimiser. Has implementations of SGD using a variety of simple learning rate tempering systems, along with AdaGrad, Adam, AdaDelta, RMSProp and Pegasos. Also provides ParameterAveraging, which wraps another StochasticGradientOptimiser and averages the learned parameters across the gradient descent run. This is usually used for convex problems; for non-convex ones your mileage may vary.
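A sketch of the wrapping pattern described above, assuming ParameterAveraging's constructor takes the inner StochasticGradientOptimiser as its only argument (see the class Javadoc). The averaged optimiser reports the mean of the per-step parameter iterates, w̄ = (1/T) Σ_{t=1..T} w_t, rather than the final iterate.

```java
import org.tribuo.math.StochasticGradientOptimiser;
import org.tribuo.math.optimisers.ParameterAveraging;
import org.tribuo.math.optimisers.SGD;

public class AveragingExample {
    public static void main(String[] args) {
        // The inner optimiser performs the actual gradient steps;
        // here, SGD whose learning rate decays linearly over the run.
        StochasticGradientOptimiser inner = SGD.getLinearDecaySGD(1.0);

        // ParameterAveraging decorates the inner optimiser so the model
        // keeps the average of the parameter iterates across the run
        // instead of the last one.
        StochasticGradientOptimiser averaged = new ParameterAveraging(inner);

        // 'averaged' can be used anywhere a StochasticGradientOptimiser
        // is accepted, e.g. passed to an SGD-based trainer.
        System.out.println(averaged);
    }
}
```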
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.