Should we do learning rate decay for the Adam optimizer

Neural Network, TensorFlow

Neural Network Problem Overview


I'm training a network for image localization with the Adam optimizer, and someone suggested that I use exponential decay. I don't want to try that because the Adam optimizer itself decays the learning rate. But he insists, and he says he has done it before. So should I do that, and is there any theory behind the suggestion?

Neural Network Solutions


Solution 1 - Neural Network

It depends. Adam updates each parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated with it.

But the single learning rate for each parameter is computed using lambda (the initial learning rate) as an upper limit. This means that every single learning rate can vary from 0 (no update) to lambda (maximum update).

It's true that the learning rates adapt themselves during training, but if you want to be sure that no update step exceeds lambda, you can then lower lambda using exponential decay or something similar. It can help to reduce the loss during the last stage of training, when the loss computed with the previously associated lambda has stopped decreasing.
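
To make the "upper limit" point concrete, here is a minimal NumPy sketch of a single Adam update for one parameter (illustrative only, not part of the original answer). The per-parameter step is roughly capped in magnitude by the base rate lambda (alpha below), so decaying lambda lowers that cap:

import numpy as np

def adam_step(grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction for step t (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    step = alpha * m_hat / (np.sqrt(v_hat) + eps)  # |step| is roughly bounded by alpha
    return step, m, v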

Solution 2 - Neural Network

In my experience it is usually not necessary to do learning rate decay with the Adam optimizer.

The theory is that Adam already handles learning rate optimization (see the Adam paper, quoted below):

> "We propose Adam, a method for efficient stochastic optimization that > only requires first-order gradients with little memory requirement. > The method computes individual adaptive learning rates for different > parameters from estimates of first and second moments of the > gradients; the name Adam is derived from adaptive moment estimation."

As with any deep learning problem, YMMV: one size does not fit all, so try different approaches and see what works for your data and model.

Solution 3 - Neural Network

Yes, absolutely. From my own experience, it's very useful to use Adam with learning rate decay. Without decay, you have to set a very small learning rate so that the loss doesn't start to diverge after decreasing to a point. Here is the code to use Adam with learning rate decay in TensorFlow. Hope it is helpful to someone.

# learning_rate, global_step and adam_epsilon are defined elsewhere in the training script.
# Multiply the base learning rate by 0.95 every 10,000 global steps (TF 1.x API).
decayed_lr = tf.train.exponential_decay(learning_rate,
                                        global_step, 10000,
                                        0.95, staircase=True)
opt = tf.train.AdamOptimizer(decayed_lr, epsilon=adam_epsilon)
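
If you are on TensorFlow 2.x, tf.train.exponential_decay and tf.train.AdamOptimizer are only available under tf.compat.v1; a roughly equivalent setup (the decay hyperparameters mirror the snippet above, the initial rate is just an example) looks like this:

import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,   # pick your own starting rate
    decay_steps=10000,
    decay_rate=0.95,
    staircase=True)
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule, epsilon=1e-8)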

Solution 4 - Neural Network

Adam has a single learning rate, but it is a maximum rate that is adapted per parameter, so I don't think many people use learning rate scheduling with it.

Due to the adaptive nature, the default rate is fairly robust, but there may be times when you want to optimize it. What you can do is find an optimal default rate beforehand by starting with a very small rate and increasing it until the loss stops decreasing, then look at the slope of the loss curve and pick the learning rate associated with the fastest decrease in loss (not the point where the loss is actually lowest). Jeremy Howard mentions this in the fast.ai deep learning course, and it's from the Cyclical Learning Rates paper.
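
For illustration, here is a rough sketch of that range test as a Keras callback (all names and defaults below are my own, not from the original answer): run it for one short epoch while the learning rate ramps up exponentially, record the loss per batch, and then plot history to pick the rate where the loss falls fastest.

import numpy as np
import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    def __init__(self, min_lr=1e-7, max_lr=1.0, num_steps=200):
        super().__init__()
        self.lrs = np.geomspace(min_lr, max_lr, num_steps)  # exponential ramp
        self.history = []                                    # (lr, loss) pairs to plot later

    def on_train_batch_begin(self, batch, logs=None):
        if batch < len(self.lrs):
            tf.keras.backend.set_value(self.model.optimizer.learning_rate, self.lrs[batch])

    def on_train_batch_end(self, batch, logs=None):
        if batch < len(self.lrs):
            self.history.append((float(self.lrs[batch]), logs["loss"]))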

Edit: People have fairly recently started using one-cycle learning rate policies in conjunction with Adam with great results.
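
As a hypothetical sketch of what such a policy can look like in Keras (everything below is an assumption for illustration; real one-cycle implementations such as fast.ai's also cycle momentum and typically use cosine annealing): warm the rate up linearly to a maximum, then anneal it back down over the rest of training.

import tensorflow as tf

class OneCycleLR(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warm-up to max_lr, then linear decay back down to final_lr."""
    def __init__(self, max_lr, total_steps, warmup_frac=0.3, final_lr=1e-6):
        self.max_lr = float(max_lr)
        self.total_steps = float(total_steps)
        self.warmup_steps = self.total_steps * warmup_frac
        self.final_lr = float(final_lr)

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        up = self.final_lr + (self.max_lr - self.final_lr) * (step / self.warmup_steps)
        frac = (step - self.warmup_steps) / (self.total_steps - self.warmup_steps)
        down = self.max_lr - (self.max_lr - self.final_lr) * frac
        return tf.where(step < self.warmup_steps, up, tf.maximum(down, self.final_lr))

opt = tf.keras.optimizers.Adam(learning_rate=OneCycleLR(max_lr=3e-3, total_steps=10000))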

Solution 5 - Neural Network

A simple alternative is to increase the batch size. A larger number of samples per update will force the optimizer to be more cautious with its updates. If GPU memory limits the number of samples that can be processed per update, you may have to resort to the CPU and conventional RAM for training, which will obviously slow training down further.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | meng lin | View Question on Stackoverflow
Solution 1 - Neural Network | nessuno | View Answer on Stackoverflow
Solution 2 - Neural Network | j314erre | View Answer on Stackoverflow
Solution 3 - Neural Network | Wenmin Wu | View Answer on Stackoverflow
Solution 4 - Neural Network | Austin | View Answer on Stackoverflow
Solution 5 - Neural Network | ScaledLizard | View Answer on Stackoverflow