callbacks

    fastai’s training loop is highly extensible, with a rich callback system. See the callback docs if you’re interested in writing your own callback. See below for a list of callbacks that are provided with fastai, grouped by the module they’re defined in.

Every callback that is passed to the Learner with the callback_fns parameter will be automatically stored as an attribute. The attribute name is the snake-cased class name, so for instance ActivationStats will appear as learn.activation_stats (assuming your object is named learn).
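For instance, a minimal sketch of this mechanism (the simple_cnn architecture and the data object stand in for your own model and DataBunch):

```python
from fastai.vision import *

# Classes passed via callback_fns are instantiated with the Learner and
# registered under their snake-cased class name.
learn = Learner(data, simple_cnn((3, 16, 16, 2)), callback_fns=ActivationStats)
learn.fit(1)
learn.activation_stats  # the ActivationStats instance, stored as an attribute
```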

This sub-package contains more sophisticated callbacks, each defined in its own module; see each module’s docs for more details.

Use Leslie Smith’s learning rate finder to find a good learning rate for training your model. Let’s see an example of use on the MNIST dataset with a simple CNN.
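A minimal setup sketch for the example that follows, assuming fastai v1’s vision API (untar_data, ImageDataBunch, simple_cnn):

```python
from fastai.vision import *

# Download the MNIST sample and build a DataBunch from its image folders.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# simple_cnn((3, 16, 16, 2)): a tiny CNN from 3 input channels to 2 classes.
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy])
```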

The fastai library already has a Learner method called lr_find that uses LRFinder to plot the loss as a function of the learning rate:

```python
learn.lr_find()
```

```
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
```

```python
learn.recorder.plot()
```

    In this example, a learning rate around 2e-2 seems like the right fit.

```python
lr = 2e-2
```

    Train with Leslie Smith’s 1cycle annealing method. Let’s train our simple learner using the one cycle policy.
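The training call itself is presumably fit_one_cycle with the learning rate picked above (the epoch count here is illustrative):

```python
# 1cycle policy: the learning rate warms up then anneals back down,
# while momentum follows the inverse schedule.
learn.fit_one_cycle(3, lr)
```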

    Total time: 00:07

The learning rate and the momentum were changed during the epochs as follows (more info in the OneCycleScheduler docs).

```python
learn.recorder.plot_lr(show_moms=True)
```

[Figure: learning rate and momentum schedules produced by plot_lr(show_moms=True)]

Apply mixup data augmentation by calling mixup on the Learner, which adds the MixUpCallback:

```python
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy]).mixup()
```

Log the results of training in a CSV file. Simply pass the CSVLogger callback to the Learner:

```python
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy, error_rate],
                callback_fns=[CSVLogger])
```

      Total time: 00:07

You can then read the CSV back:
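In fastai v1 the CSVLogger exposes a helper for this; per the snake-casing rule above it is reachable as learn.csv_logger:

```python
# Returns the logged epochs/metrics as a pandas DataFrame.
learn.csv_logger.read_logged_file()
```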

      Create your own multi-stage annealing schemes with a convenient API. To illustrate, let’s implement a 2 phase schedule.

```python
def fit_odd_shedule(learn, lr):
    n = len(learn.data.train_dl)
    # Phase 1: one epoch of cosine annealing; phase 2: two epochs of
    # polynomial (degree 2) annealing.
    phases = [TrainingPhase(n).schedule_hp('lr', lr, anneal=annealing_cos),
              TrainingPhase(n*2).schedule_hp('lr', lr, anneal=annealing_poly(2))]
    sched = GeneralScheduler(learn, phases)
    learn.callbacks.append(sched)
    total_epochs = 3
    learn.fit(total_epochs)  # run the full three-epoch schedule
```

```python
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=accuracy)
fit_odd_shedule(learn, 1e-3)
```

      Total time: 00:07

```python
learn.recorder.plot_lr()
```

      Use fp16 to take advantage of tensor cores on recent NVIDIA GPUs for a 200% or more speedup.
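Enabling it is a one-liner in fastai v1 (reusing the Learner from the examples above):

```python
# Switch the model to mixed precision; master weights are kept in fp32
# and the loss is scaled to avoid underflow in fp16 gradients.
learn = learn.to_fp16()
learn.fit_one_cycle(1, lr)
```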

Convenient wrapper for registering and automatically deregistering PyTorch hooks. Also contains the pre-defined hook callback ActivationStats.

      RNNTrainer

      Callback taking care of all the tweaks to train an RNN.

      TerminateOnNaNCallback

      Stop training if the loss reaches NaN.
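A usage sketch, passing the callback directly to fit:

```python
# Abort the run as soon as the loss becomes NaN instead of finishing the epochs.
learn.fit(2, callbacks=[TerminateOnNaNCallback()])
```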

      EarlyStoppingCallback

      Stop training if a given metric/validation loss doesn’t improve.
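Since it needs a reference to the Learner, it is typically supplied through callback_fns with functools.partial; the monitor, min_delta, and patience values below are illustrative:

```python
from functools import partial

learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy],
                callback_fns=[partial(EarlyStoppingCallback, monitor='accuracy',
                                      min_delta=0.01, patience=3)])
learn.fit(50)  # stops early once accuracy stops improving
```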

SaveModelCallback

Save the model at each epoch, or the best model for a given metric/validation loss.

```python
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=accuracy)
learn.fit_one_cycle(3, 1e-4, callbacks=[SaveModelCallback(learn, every='epoch', monitor='accuracy')])
```

      Total time: 00:07

```
best.pth         bestmodel_2.pth  model_1.pth  model_4.pth    stage-1.pth
bestmodel_1.pth  model_0.pth      model_3.pth  one_epoch.pth  trained_model.pth
```

      ReduceLROnPlateauCallback

Reduce the learning rate by a given factor each time a given metric/validation loss stops improving.
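It follows the same TrackerCallback pattern as EarlyStoppingCallback; a sketch with illustrative values (check the signature for the exact defaults):

```python
from functools import partial

learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy],
                callback_fns=[partial(ReduceLROnPlateauCallback, monitor='valid_loss',
                                      patience=2, factor=0.5)])
learn.fit(20)
```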

PeakMemMetric

GPU and general RAM profiling callback.

      StopAfterNBatches

      Stop training after n batches of the first epoch.
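Handy for quickly checking that batch size and image size fit in memory without running a full epoch; a sketch assuming the fastai v1 module layout:

```python
from fastai.callbacks.misc import StopAfterNBatches

# Stop after 2 batches of the first epoch, e.g. to smoke-test memory usage.
learn.callbacks.append(StopAfterNBatches(n_batches=2))
learn.fit(1)
```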

      LearnerTensorboardWriter

Broadly useful callback for Learners that writes to TensorBoard: model histograms, losses/metrics, the embedding projector, and gradient stats.
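A plausible hookup via callback_fns (the base_dir and name arguments here are assumptions; check the tensorboard module docs for the exact signature):

```python
from functools import partial
from pathlib import Path

learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy],
                callback_fns=[partial(LearnerTensorboardWriter,
                                      base_dir=Path('data/tensorboard'), name='run1')])
learn.fit(3)  # then point tensorboard --logdir at data/tensorboard
```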

      Recorder

      Track per-batch and per-epoch smoothed losses and metrics.

      ShowGraph

      Dynamically display a learning chart during training.
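Attach it through callback_fns so the chart updates live in a notebook:

```python
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy],
                callback_fns=ShowGraph)
learn.fit(3)  # redraws the train/validation loss plot after each epoch
```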

      BnFreeze

Freeze the moving-average statistics of batchnorm layers in non-trainable (frozen) layer groups.
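Useful when fine-tuning a model with a frozen pretrained backbone; a sketch (the resnet18 choice is illustrative):

```python
from fastai.vision import *

# With BnFreeze, batchnorm running stats in frozen layer groups stay fixed.
learn = cnn_learner(data, models.resnet18, metrics=[accuracy], callback_fns=BnFreeze)
learn.fit(1)
```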

GradientClipping

Clips gradients during training.
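Typically configured through callback_fns with the clip value as a partial argument (the value 0.1 is illustrative):

```python
from functools import partial

learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy],
                callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)  # gradients are clipped to norm 0.1 before each optimizer step
```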

