noether.core.callbacks.online.best_metric¶
Classes¶
A callback that keeps track of the best metric value over a training run for a certain metric (i.e., source_metric_key) while also logging one or more target metrics.
Module Contents¶
- class noether.core.callbacks.online.best_metric.BestMetricCallback(callback_config, **kwargs)¶
Bases: noether.core.callbacks.periodic.PeriodicCallback

A callback that keeps track of the best metric value over a training run for a certain metric (i.e., source_metric_key) while also logging one or more target metrics.
For example, track the test loss at the epoch with the best validation loss to simulate early stopping.
Example config:

```yaml
- kind: noether.core.callbacks.BestMetricCallback
  every_n_epochs: 1
  source_metric_key: loss/val/total
  target_metric_keys:
    - loss/test/total
```
In this example, whenever a new best validation loss is found, the corresponding test loss is logged under the key loss/test/total/at_best/loss/val/total.

- Parameters:
  - callback_config (noether.core.schemas.callbacks.BestMetricCallbackConfig) – Configuration for the callback. See BestMetricCallbackConfig for available options, including source and target metric keys.
  - **kwargs – Additional keyword arguments passed to the parent class.
- source_metric_key¶
- target_metric_keys¶
- optional_target_metric_keys¶
- higher_is_better¶
- best_metric_value¶
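The attributes above drive a simple comparison each time the callback fires. A minimal sketch of that logic, with assumed behavior and hypothetical names (the real implementation lives in BestMetricCallback and reads metrics from the trainer, not from a plain dict):

```python
import math


class BestMetricTracker:
    """Sketch of best-metric tracking; not the actual BestMetricCallback."""

    def __init__(self, source_metric_key, target_metric_keys, higher_is_better=False):
        self.source_metric_key = source_metric_key
        self.target_metric_keys = target_metric_keys
        self.higher_is_better = higher_is_better
        # Start from the worst possible value for the chosen direction.
        self.best_metric_value = -math.inf if higher_is_better else math.inf

    def update(self, metrics):
        """Return target metrics to log if a new best was found, else {}."""
        value = metrics[self.source_metric_key]
        improved = (
            value > self.best_metric_value
            if self.higher_is_better
            else value < self.best_metric_value
        )
        if not improved:
            return {}
        self.best_metric_value = value
        # Log each target metric under "<target>/at_best/<source>".
        return {
            f"{key}/at_best/{self.source_metric_key}": metrics[key]
            for key in self.target_metric_keys
        }


tracker = BestMetricTracker("loss/val/total", ["loss/test/total"])
logged = tracker.update({"loss/val/total": 0.5, "loss/test/total": 0.6})
# logged -> {'loss/test/total/at_best/loss/val/total': 0.6}
```

A subsequent update with a worse validation loss would return an empty dict and leave best_metric_value untouched.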
- periodic_callback(**__)¶
Hook called periodically based on the configured intervals.
This method is the primary entry point for periodic actions in subclasses. It is triggered when any of the configured intervals (every_n_epochs, every_n_updates, or every_n_samples) are reached.

Subclasses should override this method to implement periodic logic such as:
- Calculating and logging expensive validation metrics
- Saving specific model checkpoints or artifacts
- Visualizing training progress (e.g., plotting samples)
- Adjusting training hyperparameters or model state
Note
This method is executed within a torch.no_grad() context.

- Parameters:
  - interval_type – "epoch", "update", "sample" or "eval" indicating which interval triggered this callback.
  - update_counter – UpdateCounter instance providing details about the current training progress (epoch, update, sample counts).
  - **kwargs – Additional keyword arguments passed from the triggering hook (e.g., from after_epoch() or after_update()).
- Return type:
None
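An override of periodic_callback might look like the following sketch. PeriodicCallback is stubbed out here so the example is self-contained; in practice you would subclass noether.core.callbacks.periodic.PeriodicCallback, and the callback class and its logging mechanism below are hypothetical:

```python
class PeriodicCallback:
    """Stub standing in for noether.core.callbacks.periodic.PeriodicCallback."""

    def periodic_callback(self, **kwargs):
        raise NotImplementedError


class LogWeightNormCallback(PeriodicCallback):
    """Hypothetical callback that logs a cheap model statistic per interval."""

    def __init__(self, model):
        self.model = model
        self.logged = []  # stand-in for the framework's metric logger

    def periodic_callback(self, interval_type=None, update_counter=None, **kwargs):
        # Compute a simple statistic; when invoked by the framework this
        # runs under torch.no_grad(), per the note above.
        norm = sum(abs(w) for w in self.model["weights"])
        self.logged.append(("weight_norm", interval_type, norm))


cb = LogWeightNormCallback(model={"weights": [0.5, -1.5, 2.0]})
cb.periodic_callback(interval_type="epoch")
# cb.logged -> [('weight_norm', 'epoch', 4.0)]
```

The framework passes interval_type and update_counter as keyword arguments, so overrides that ignore some of them should still accept **kwargs.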