Wandb
Integration with Weights & Biases
First things first, you need to install wandb with `pip install wandb`. Create a free account, then run `wandb login` in your terminal. Follow the link to get an API token that you will need to paste, and then you're all set!
class WandbCallback [source]

Saves model topology, losses & metrics
Optionally logs weights and/or gradients depending on log (can be "gradients", "parameters", "all" or None), and sample predictions if log_preds=True. The sample predictions come from valid_dl, or from a random sample of the validation set (determined by seed); n_preds predictions are logged in this case.
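For instance, here is a minimal sketch of these options in use, assuming fastai's vision API and the MNIST_SAMPLE dataset just to have something runnable; the project name is a placeholder:

```python
from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback
import wandb

# Start a W&B run before training; "fastai-wandb-demo" is a placeholder project name.
wandb.init(project="fastai-wandb-demo")

# A small standard dataset so the sketch runs end to end.
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)

# Log gradients, and 9 sample predictions taken from a random sample of the
# validation set (the seed keeps the same items across runs).
cb = WandbCallback(log="gradients", log_preds=True, n_preds=9, seed=42)

learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(1, cbs=cb)
```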
Datasets can also be tracked:
- log_dataset can explicitly be set to the folder to track
- the name of the dataset can explicitly be given through dataset_name, otherwise it is set to the folder name
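For example, a sketch of tracking a dataset folder through the callback; the path and dataset name below are placeholders for your own data:

```python
from fastai.callback.wandb import WandbCallback

# Track an explicit dataset folder under an explicit name.
cb = WandbCallback(log_dataset="data/my_dataset", dataset_name="my_dataset")
```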
For custom scenarios, you can also manually use the log_dataset and log_model functions to respectively log your own datasets and models (see the sketch after the function entries below).
Learner.gather_args [source]
Gather config parameters accessible to the learner
log_dataset [source]
Log dataset folder
log_model [source]
Log model file
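As referenced above, here is a sketch of manual logging for a custom scenario; the paths and names are placeholders, and the name keyword argument is an assumption based on the entries above, so check them against your installed version:

```python
import wandb
from fastai.callback.wandb import log_dataset, log_model

wandb.init(project="fastai-wandb-demo")  # placeholder project name

# Log a dataset folder and a saved model file by hand.
# Paths and artifact names below are placeholders.
log_dataset("data/my_dataset", name="my_dataset")
log_model("models/model.pth", name="my_model")

wandb.finish()
```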
Example of use:
Once you have defined your Learner, before you call fit or fit_one_cycle, you need to initialize wandb:
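A minimal initialization might look like this; the project name is just a placeholder:

```python
import wandb

# Start (or resume) a W&B run before any training call.
wandb.init(project="fastai-wandb-demo")
```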
To use Weights & Biases without an account, you can call wandb.init(anonymous='allow').
Then you add the callback to your Learner or to your call to fit methods, potentially with SaveModelCallback if you want to save the best model:
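For instance, a sketch assuming learn (and, for the second option, dls) are already defined as in the earlier example:

```python
from fastai.callback.wandb import WandbCallback
from fastai.callback.tracker import SaveModelCallback

# Option 1: log only this training phase, and also save the best model
# (assumes `learn` is already defined).
learn.fit_one_cycle(5, cbs=[WandbCallback(), SaveModelCallback()])

# Option 2: attach the callback to the Learner so every fit call is logged
# (assumes `dls` and fastai's vision API, as in the earlier sketch).
learn = vision_learner(dls, resnet18, cbs=WandbCallback())
```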
Datasets and models can be tracked through the callback or directly through the log_dataset and log_model functions.
For more details, refer to the Weights & Biases documentation.