Enum Class GradientOptimiser

java.lang.Object
java.lang.Enum<GradientOptimiser>
org.tribuo.interop.tensorflow.GradientOptimiser
All Implemented Interfaces:
Serializable, Comparable<GradientOptimiser>, Constable

public enum GradientOptimiser extends Enum<GradientOptimiser>
An enum for the gradient optimisers exposed by TensorFlow-Java.
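Below is a minimal sketch of enumerating the optimisers and discovering the parameter names each one expects, using only the values() and getParameterNames() methods documented on this page:

    import java.util.Set;
    import org.tribuo.interop.tensorflow.GradientOptimiser;

    public final class ListOptimisers {
        public static void main(String[] args) {
            // Print each optimiser alongside the parameter names its
            // parameter map must supply.
            for (GradientOptimiser opt : GradientOptimiser.values()) {
                Set<String> names = opt.getParameterNames();
                System.out.println(opt + " -> " + names);
            }
        }
    }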
  • Enum Constant Details

    • ADADELTA

      public static final GradientOptimiser ADADELTA
      The AdaDelta optimiser.

      Parameters are:

      • learningRate - the overall learning rate.
      • rho - the decay factor.
      • epsilon - for numerical stability.
    • ADAGRAD

      public static final GradientOptimiser ADAGRAD
      The AdaGrad optimiser.

      Parameters are:

      • learningRate - the overall learning rate.
      • initialAccumulatorValue - the initialisation value for the gradient accumulator.
    • ADAGRADDA

      public static final GradientOptimiser ADAGRADDA
      The AdaGrad Dual Averaging optimiser.

      Parameters are:

      • learningRate - the overall learning rate.
      • initialAccumulatorValue - the initialisation value for the gradient accumulator.
      • l1Strength - the strength of l1 regularisation.
      • l2Strength - the strength of l2 regularisation.
    • ADAM

      public static final GradientOptimiser ADAM
      The Adam optimiser.

      Parameters are:

      • learningRate - the learning rate.
      • betaOne - the exponential decay rate for the 1st moment estimates.
      • betaTwo - the exponential decay rate for the 2nd moment estimates.
      • epsilon - a small constant for numerical stability.
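      A sketch of a matching parameter map (the keys are the names listed above; the values are illustrative, not recommendations):

          Map<String,Float> adamParams = Map.of(
                  "learningRate", 1e-3f,  // illustrative values only
                  "betaOne", 0.9f,
                  "betaTwo", 0.999f,
                  "epsilon", 1e-7f);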
    • ADAMAX

      public static final GradientOptimiser ADAMAX
      The Adamax optimiser.

      Parameters are:

      • learningRate - the learning rate.
      • betaOne - the exponential decay rate for the 1st moment estimates.
      • betaTwo - the exponential decay rate for the exponentially weighted infinity norm.
      • epsilon - a small constant for numerical stability.
    • FTRL

      public static final GradientOptimiser FTRL
      The FTRL optimiser.

      Parameters are:

      • learningRate - the learning rate.
      • learningRatePower - controls how the learning rate decreases during training. Use zero for a fixed learning rate.
      • initialAccumulatorValue - the starting value for accumulators. Only zero or positive values are allowed.
      • l1Strength - the L1 regularisation strength; must be greater than or equal to zero.
      • l2Strength - the L2 regularisation strength; must be greater than or equal to zero.
      • l2ShrinkageRegularizationStrength - this differs from the L2 strength above: that L2 is a stabilisation penalty, whereas this shrinkage L2 is a magnitude penalty. Must be greater than or equal to zero.
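      A sketch of a complete FTRL parameter map (keys from the list above; the illustrative values respect the non-negativity constraints):

          Map<String,Float> ftrlParams = Map.of(
                  "learningRate", 0.1f,
                  "learningRatePower", 0.0f,  // zero keeps the learning rate fixed
                  "initialAccumulatorValue", 0.1f,
                  "l1Strength", 0.0f,
                  "l2Strength", 0.0f,
                  "l2ShrinkageRegularizationStrength", 0.0f);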
    • GRADIENT_DESCENT

      public static final GradientOptimiser GRADIENT_DESCENT
      A standard gradient descent optimiser with a fixed learning rate.

      Parameters are:

      • learningRate - the learning rate.
    • MOMENTUM

      public static final GradientOptimiser MOMENTUM
      Gradient descent with momentum.

      Parameters are:

      • learningRate - the learning rate.
      • momentum - the momentum scalar.
    • NESTEROV

      public static final GradientOptimiser NESTEROV
      Gradient descent with Nesterov momentum.

      Parameters are:

      • learningRate - the learning rate.
      • momentum - the momentum scalar.
    • NADAM

      public static final GradientOptimiser NADAM
      The Nadam optimiser.

      Parameters are:

      • learningRate - the learning rate.
      • betaOne - the exponential decay rate for the 1st moment estimates.
      • betaTwo - the exponential decay rate for the 2nd moment estimates.
      • epsilon - a small constant for numerical stability.
    • RMSPROP

      public static final GradientOptimiser RMSPROP
      The RMSprop optimiser.

      Parameters are:

      • learningRate - the overall learning rate.
      • decay - the decay factor.
      • momentum - the momentum scalar.
      • epsilon - for numerical stability.
      This optimiser is currently uncentered.
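      A sketch of a matching parameter map (keys from the list above; values are illustrative):

          Map<String,Float> rmsPropParams = Map.of(
                  "learningRate", 1e-3f,
                  "decay", 0.9f,
                  "momentum", 0.0f,  // zero momentum gives plain RMSprop
                  "epsilon", 1e-7f);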
  • Method Details

    • values

      public static GradientOptimiser[] values()
      Returns an array containing the constants of this enum class, in the order they are declared.
      Returns:
      an array containing the constants of this enum class, in the order they are declared
    • valueOf

      public static GradientOptimiser valueOf(String name)
      Returns the enum constant of this class with the specified name. The string must match exactly an identifier used to declare an enum constant in this class. (Extraneous whitespace characters are not permitted.)
      Parameters:
      name - the name of the enum constant to be returned.
      Returns:
      the enum constant with the specified name
      Throws:
      IllegalArgumentException - if this enum class has no constant with the specified name
      NullPointerException - if the argument is null
    • getParameterNames

      public Set<String> getParameterNames()
      An unmodifiable view of the parameter names used by this gradient optimiser.
      Returns:
      The parameter names.
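      For example (a minimal sketch; the expected contents follow the parameter lists above):

          Set<String> names = GradientOptimiser.ADADELTA.getParameterNames();
          // Per the ADADELTA entry above: learningRate, rho, epsilon.
          System.out.println(names);
          // The returned set is an unmodifiable view, so mutating it
          // throws UnsupportedOperationException.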
    • validateParamNames

      public boolean validateParamNames(Set<String> paramNames)
      Checks that the parameter names in the supplied set are an exact match for the parameter names that this gradient optimiser expects.
      Parameters:
      paramNames - The gradient optimiser parameter names.
      Returns:
      True if the two sets are equal, i.e. their intersection and union coincide.
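      A minimal sketch of the expected behaviour, using MOMENTUM's documented parameter names (assumes java.util.Map and java.util.HashMap are imported):

          Map<String,Float> params = new HashMap<>();
          params.put("learningRate", 0.01f);
          params.put("momentum", 0.9f);
          // true: the key set is exactly {learningRate, momentum}.
          boolean valid = GradientOptimiser.MOMENTUM.validateParamNames(params.keySet());

          params.put("epsilon", 1e-6f);
          // false: "epsilon" is not a MOMENTUM parameter.
          valid = GradientOptimiser.MOMENTUM.validateParamNames(params.keySet());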
    • applyOptimiser

      public <T extends org.tensorflow.types.family.TNumber> org.tensorflow.op.Op applyOptimiser(org.tensorflow.Graph graph, org.tensorflow.Operand<T> loss, Map<String,Float> optimiserParams)
      Applies the optimiser to the graph and returns the optimiser step operation.
      Type Parameters:
      T - The loss type (most of the time this will be TFloat32).
      Parameters:
      graph - The graph to optimise.
      loss - The loss to minimise.
      optimiserParams - The optimiser parameters.
      Returns:
      The optimiser step operation.
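      A minimal sketch of attaching an optimiser to a TensorFlow-Java graph. Here buildModel is a hypothetical helper standing in for whatever code constructs the trainable variables and scalar loss; the applyOptimiser call and the parameter names come from this class:

          import java.util.Map;
          import org.tensorflow.Graph;
          import org.tensorflow.Operand;
          import org.tensorflow.op.Op;
          import org.tensorflow.types.TFloat32;
          import org.tribuo.interop.tensorflow.GradientOptimiser;

          try (Graph graph = new Graph()) {
              // Hypothetical helper: builds the model ops in this graph and
              // returns its scalar loss operand.
              Operand<TFloat32> loss = buildModel(graph);

              // Attach Nesterov momentum; the returned Op is the training step
              // to target in a Session (names from the NESTEROV constant above).
              Op trainStep = GradientOptimiser.NESTEROV.applyOptimiser(graph, loss,
                      Map.of("learningRate", 0.01f,
                             "momentum", 0.9f));
          }

      Running trainStep repeatedly in a Session (after initialising the variables) performs the gradient updates.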