callbacks.lr_finder

The learning rate finder plots the learning rate vs. loss relationship for a Learner. The idea is to reduce the amount of guesswork in picking a good starting learning rate.

Overview:

  1. First, run the learning rate finder: learn.lr_find()
  2. Plot the learning rate vs. loss: learn.recorder.plot()
  3. Pick a learning rate before the loss diverges, then start training (see the sketch below)
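Here is a minimal end-to-end sketch of that workflow. It assumes fastai v1 and uses the small MNIST_SAMPLE dataset that ships with the library; swap in your own data pipeline as needed.

```python
from fastai.vision import (URLs, untar_data, ImageDataBunch, cnn_learner,
                           models, accuracy)

# Build a small image classifier to experiment on (MNIST_SAMPLE is a tiny
# 3-vs-7 subset of MNIST bundled with fastai).
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=accuracy)

learn.lr_find()               # 1. mock training while the lr grows
learn.recorder.plot()         # 2. plot loss vs. learning rate
learn.fit_one_cycle(2, 1e-2)  # 3. train with an lr picked before divergence
```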

Technical Details: (first described by Leslie Smith)

The finder runs a short mock training while growing the learning rate exponentially from a very small value to a very large one, recording the loss at each iteration; the resulting curve shows which learning rates make the loss fall fastest and where it starts to diverge.

For a more intuitive explanation, please check out Sylvain Gugger's post on how to find a good learning rate.
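Concretely, the learning rate follows an exponential schedule between start_lr and end_lr. The snippet below sketches that schedule; it illustrates the behaviour rather than reproducing fastai's exact implementation.

```python
def exp_lr_schedule(start_lr=1e-7, end_lr=10, num_it=100):
    """Learning rate at each of num_it iterations, growing exponentially."""
    return [start_lr * (end_lr / start_lr) ** (i / (num_it - 1))
            for i in range(num_it)]

lrs = exp_lr_schedule()
print(lrs[0], lrs[-1])  # ~1e-07 ... ~10.0: each step multiplies the lr by a constant factor
```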

First we run this command to launch the search:

lr_find[source][test]

lr_find(learn:Learner, start_lr:Floats=1e-07, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, wd:float=None)

Tests found for lr_find:

  • pytest -sv tests/test_train.py::test_lr_find
  • pytest -sv tests/test_vision_train.py::test_lrfind

To run tests please refer to this guide.

Explore lr from start_lr to end_lr over num_it iterations in learn. If stop_div, stops when the loss diverges.

```python
learn.lr_find(stop_div=False, num_it=200)
```

```
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
```

```python
learn.recorder.plot()
```

Then, we choose a value that is approximately in the middle of the sharpest downward slope. This is given as an indication by the LR Finder tool, so let’s try 1e-2.

Don’t just pick the minimum value from the plot!

```python
learn = simple_learner()
simple_learner().fit(2, 1e-2)
```

Picking a value before the downward slope results in slow training:

```python
learn = simple_learner()
simple_learner().fit(2, 1e-3)
```

Suggested LR

If you pass suggestion=True to learn.recorder.plot, you will see the point where the gradient is steepest marked with a red dot on the graph. We can use that point as a first guess for an LR.

```python
learn.lr_find(stop_div=False, num_it=200)
```

```python
learn.recorder.plot(suggestion=True)
```

```
Min numerical gradient: 5.25E-03
```

[Figure: loss vs. learning rate curve with the suggested point marked by a red dot]

You can access the corresponding learning rate like this:

```python
min_grad_lr = learn.recorder.min_grad_lr
min_grad_lr
```

```python
learn = simple_learner()
simple_learner().fit(2, min_grad_lr)
```

class LRFinder[source][test]

Causes learn to go on a mock training from start_lr to end_lr for num_it iterations.
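learn.lr_find() is essentially a thin wrapper around this callback: it attaches an LRFinder and runs a short fit. A simplified sketch of the idea (not the library's exact source):

```python
import numpy as np
from fastai.callbacks.lr_finder import LRFinder

def lr_find_sketch(learn, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
    # Attach the LRFinder callback, then train for just enough epochs to
    # cover num_it iterations; the callback handles stopping and cleanup.
    cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
    epochs = int(np.ceil(num_it / len(learn.data.train_dl)))
    learn.fit(epochs, start_lr, callbacks=[cb])
```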

on_train_begin[source][test]

on_train_begin(pbar, **kwargs:Any)

No tests found for on_train_begin. To contribute a test please refer to this guide and this discussion.

Initialize optimizer and learner hyperparameters.

on_batch_end[source][test]

Determine if the loss has run away and we should stop.
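The check is essentially a runaway test against the best smoothed loss seen so far. A sketch of that logic (the 4x threshold matches fastai's usual choice, but treat the exact constant as an assumption):

```python
import math

def should_stop(smooth_loss: float, best_loss: float, stop_div: bool = True) -> bool:
    # Stop the sweep once the smoothed loss becomes NaN or explodes well
    # past the best value seen so far.
    return stop_div and (math.isnan(smooth_loss) or smooth_loss > 4 * best_loss)
```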

on_epoch_end[source][test]

on_epoch_end(**kwargs:Any)

No tests found for on_epoch_end. To contribute a test please refer to this guide and this discussion.

Called at the end of an epoch.

on_train_end[source][test]

Clean up the learn model weights disturbed during LRFinder exploration.
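Because the sweep really does train the model, the callback snapshots the weights before exploring and restores them afterwards. Conceptually (a sketch; 'tmp' is an assumed checkpoint name):

```python
learn.save('tmp')   # on_train_begin: snapshot the current weights
# ...the exponential lr sweep trains and disturbs the model here...
learn.load('tmp')   # on_train_end: restore the snapshot
```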