Package org.tribuo.interop.tensorflow
Enum Class GradientOptimiser
- All Implemented Interfaces:
Serializable, Comparable<GradientOptimiser>, Constable
An enum for the gradient optimisers exposed by TensorFlow-Java.
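Each constant is typically paired with a Map<String, Float> of hyperparameter values whose keys must match the parameter names documented below. A minimal sketch (assuming java.util.Map is imported; the choice of GRADIENT_DESCENT and the learning rate value are purely illustrative):

    // Illustrative only: choose an optimiser and build a matching hyperparameter map.
    // The map's keys must be exactly the parameter names documented for that constant.
    GradientOptimiser optimiser = GradientOptimiser.GRADIENT_DESCENT;
    Map<String, Float> optimiserParams = Map.of("learningRate", 0.01f); // example value
    // The constant and the map are then typically passed to one of Tribuo's TensorFlow trainers.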
-
Nested Class Summary
Nested classes/interfaces inherited from class java.lang.Enum
Enum.EnumDesc<E extends Enum<E>>
-
Enum Constant Summary
- ADADELTA - The AdaDelta optimiser.
- ADAGRAD - The AdaGrad optimiser.
- ADAGRADDA - The AdaGrad Dual Averaging optimiser.
- ADAM - The Adam optimiser.
- ADAMAX - The Adamax optimiser.
- FTRL - The FTRL optimiser.
- GRADIENT_DESCENT - A standard gradient descent optimiser with a fixed learning rate.
- MOMENTUM - Gradient descent with momentum.
- NADAM - The Nadam optimiser.
- NESTEROV - Gradient descent with Nesterov momentum.
- RMSPROP - The RMSprop optimiser.
-
Method Summary
- <T extends org.tensorflow.types.family.TNumber> org.tensorflow.op.Op applyOptimiser(org.tensorflow.Graph graph, org.tensorflow.Operand<T> loss, Map<String, Float> optimiserParams) - Applies the optimiser to the graph and returns the optimiser step operation.
- Set<String> getParameterNames() - An unmodifiable view of the parameter names used by this gradient optimiser.
- boolean validateParamNames(Set<String> paramNames) - Checks that the parameter names in the supplied set are an exact match for the parameter names that this gradient optimiser expects.
- static GradientOptimiser valueOf(String name) - Returns the enum constant of this class with the specified name.
- static GradientOptimiser[] values() - Returns an array containing the constants of this enum class, in the order they are declared.
-
Enum Constant Details
-
ADADELTA
The AdaDelta optimiser. Parameters are:
- learningRate - the overall learning rate.
- rho - the decay factor.
- epsilon - for numerical stability.
-
ADAGRAD
The AdaGrad optimiser. Parameters are:
- learningRate - the overall learning rate.
- initialAccumulatorValue - the initialisation value for the gradient accumulator.
-
ADAGRADDA
The AdaGrad Dual Averaging optimiser. Parameters are:
- learningRate - the overall learning rate.
- initialAccumulatorValue - the initialisation value for the gradient accumulator.
- l1Strength - the strength of l1 regularisation.
- l2Strength - the strength of l2 regularisation.
-
ADAM
The Adam optimiser. Parameters are:
- learningRate - the learning rate.
- betaOne - the exponential decay rate for the 1st moment estimates.
- betaTwo - the exponential decay rate for the exponentially weighted infinity norm.
- epsilon - a small constant for numerical stability.
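For example, an Adam parameter map keyed by the names above might be built as follows (a sketch; the values are illustrative, not library defaults):

    Map<String, Float> adamParams = Map.of(
        "learningRate", 0.001f,   // illustrative value
        "betaOne", 0.9f,
        "betaTwo", 0.999f,
        "epsilon", 1e-7f);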
-
ADAMAX
The Adamax optimiser. Parameters are:
- learningRate - the learning rate.
- betaOne - the exponential decay rate for the 1st moment estimates.
- betaTwo - the exponential decay rate for the exponentially weighted infinity norm.
- epsilon - a small constant for numerical stability.
-
FTRL
The FTRL optimiser. Parameters are:
- learningRate - the learning rate.
- learningRatePower - controls how the learning rate decreases during training. Use zero for a fixed learning rate.
- initialAccumulatorValue - the starting value for accumulators. Only zero or positive values are allowed.
- l1Strength - the L1 regularisation strength, must be greater than or equal to zero.
- l2Strength - the L2 regularisation strength, must be greater than or equal to zero.
- l2ShrinkageRegularizationStrength - this differs from the L2 strength above in that the L2 above is a stabilisation penalty, whereas this L2 shrinkage is a magnitude penalty. Must be greater than or equal to zero.
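A sketch of an FTRL parameter map that respects the constraints above (zero learningRatePower for a fixed learning rate, non-negative accumulator and regularisation values); all values are illustrative:

    Map<String, Float> ftrlParams = Map.of(
        "learningRate", 0.1f,
        "learningRatePower", 0.0f,                  // zero gives a fixed learning rate
        "initialAccumulatorValue", 0.1f,            // must be >= 0
        "l1Strength", 0.0f,                         // must be >= 0
        "l2Strength", 0.0f,                         // must be >= 0
        "l2ShrinkageRegularizationStrength", 0.0f); // must be >= 0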
-
GRADIENT_DESCENT
A standard gradient descent optimiser with a fixed learning rate. Parameters are:
- learningRate - the learning rate.
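As the simplest case, its parameter map holds a single entry (the value shown is illustrative):

    Map<String, Float> sgdParams = Map.of("learningRate", 0.01f); // example learning rate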
-
MOMENTUM
Gradient descent with momentum. Parameters are:
- learningRate - the learning rate.
- momentum - the momentum scalar.
-
NESTEROV
Gradient descent with Nesterov momentum. Parameters are:
- learningRate - the learning rate.
- momentum - the momentum scalar.
-
NADAM
The Nadam optimiser. Parameters are:
- learningRate - the learning rate.
- betaOne - the exponential decay rate for the 1st moment estimates.
- betaTwo - the exponential decay rate for the exponentially weighted infinity norm.
- epsilon - a small constant for numerical stability.
-
RMSPROP
The RMSprop optimiser. Parameters are:
- learningRate - the overall learning rate.
- decay - the decay factor.
- momentum - the momentum scalar.
- epsilon - for numerical stability.
-
-
Method Details
-
values
Returns an array containing the constants of this enum class, in the order they are declared.
- Returns:
- an array containing the constants of this enum class, in the order they are declared
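For instance, values() can be combined with getParameterNames() to list every optimiser and the hyperparameter names it expects (a small illustrative loop):

    for (GradientOptimiser opt : GradientOptimiser.values()) {
        System.out.println(opt.name() + " expects " + opt.getParameterNames());
    }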
-
valueOf
Returns the enum constant of this class with the specified name. The string must match exactly an identifier used to declare an enum constant in this class. (Extraneous whitespace characters are not permitted.)
- Parameters:
- name - the name of the enum constant to be returned.
- Returns:
- the enum constant with the specified name
- Throws:
- IllegalArgumentException - if this enum class has no constant with the specified name
- NullPointerException - if the argument is null
-
getParameterNames
An unmodifiable view of the parameter names used by this gradient optimiser.
- Returns:
- The parameter names.
-
validateParamNames
Checks that the parameter names in the supplied set are an exact match for the parameter names that this gradient optimiser expects.
- Parameters:
- paramNames - The gradient optimiser parameter names.
- Returns:
- True if the two sets are equal (i.e. their intersection equals their union).
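A sketch of a typical pre-flight check before handing the parameters to a trainer (the map contents are illustrative):

    Map<String, Float> params = Map.of("learningRate", 0.01f, "momentum", 0.9f); // illustrative
    if (!GradientOptimiser.MOMENTUM.validateParamNames(params.keySet())) {
        throw new IllegalArgumentException("Expected " + GradientOptimiser.MOMENTUM.getParameterNames()
            + ", found " + params.keySet());
    }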
-
applyOptimiser
public <T extends org.tensorflow.types.family.TNumber> org.tensorflow.op.Op applyOptimiser(org.tensorflow.Graph graph, org.tensorflow.Operand<T> loss, Map<String, Float> optimiserParams)
Applies the optimiser to the graph and returns the optimiser step operation.
- Type Parameters:
- T - The loss type (most of the time this will be TFloat32).
- Parameters:
- graph - The graph to optimise.
- loss - The loss to minimise.
- optimiserParams - The optimiser parameters.
- Returns:
- The optimiser step operation.
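A hedged sketch of the call: it assumes graph is an org.tensorflow.Graph and loss is a scalar Operand<TFloat32> already defined in that graph (both are placeholders not shown here), and the parameter values are illustrative.

    Map<String, Float> adamParams = Map.of(
        "learningRate", 0.001f, "betaOne", 0.9f, "betaTwo", 0.999f, "epsilon", 1e-7f);
    // Applies Adam to the graph; the returned op performs one update step each time it is run.
    org.tensorflow.op.Op trainStep =
        GradientOptimiser.ADAM.applyOptimiser(graph, loss, adamParams);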
-