noether.core.schemas.callbacks¶
Attributes¶
- CallbacksConfig

Classes¶
- CallBackBaseConfig: base class for all callback configs.
- PeriodicDataIteratorCallbackConfig
- BestCheckpointCallbackConfig
- CheckpointCallbackConfig
- EmaCallbackConfig
- OnlineLossCallbackConfig
- BestMetricCallbackConfig
- TrackAdditionalOutputsCallbackConfig
- OfflineLossCallbackConfig
- MetricEarlyStopperConfig
- FixedEarlyStopperConfig
- PyTorchProfilerCallbackConfig: configuration for the PyTorch profiler callback.
Module Contents¶
- class noether.core.schemas.callbacks.CallBackBaseConfig(/, **data)¶
Bases:
noether.core.schemas.lib._RegistryBase
Internal base class for all registry-based configs.
Provides auto-registration via __init_subclass__. Not meant to be used directly - use specific config base classes instead.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- id: str | None = None¶
Optional unique identifier for this callback instance. Required when multiple stateful callbacks of the same type exist (e.g., two BestCheckpointCallbacks tracking different metrics). Used as the key when saving/loading callback state dicts to ensure correct matching on resume.
- every_n_epochs: int | None = None¶
Epoch-based interval. Invokes the callback after every n epochs. Mutually exclusive with other intervals.
- every_n_updates: int | None = None¶
Update-based interval. Invokes the callback after every n updates. Mutually exclusive with other intervals.
- every_n_samples: int | None = None¶
Sample-based interval. Invokes the callback after every n samples. Mutually exclusive with other intervals.
- batch_size: int | None = None¶
Batch size to use for this callback. Default: None (use the same batch_size as for training).
- model_config¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- validate_callback_frequency()¶
Ensures that exactly one frequency (‘every_n_*’) is specified and that ‘batch_size’ is present if ‘every_n_samples’ is used.
- classmethod check_positive_values(v)¶
Ensures that all integer-based frequency and batch size fields are positive.
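For illustration, a minimal sketch of how these validators interact when instantiating a concrete subclass (CheckpointCallbackConfig is used purely as an example here; this assumes its remaining fields have usable defaults):

```python
from noether.core.schemas.callbacks import CheckpointCallbackConfig

# Valid: exactly one 'every_n_*' interval is set.
cb = CheckpointCallbackConfig(every_n_epochs=1)

# Valid: sample-based intervals additionally require 'batch_size'.
cb = CheckpointCallbackConfig(every_n_samples=10_000, batch_size=64)

# Invalid: two intervals at once raises a pydantic ValidationError
# via validate_callback_frequency.
# CheckpointCallbackConfig(every_n_epochs=1, every_n_updates=100)
```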
- class noether.core.schemas.callbacks.PeriodicDataIteratorCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig, abc.ABC
- Parameters:
data (Any)
- class noether.core.schemas.callbacks.BestCheckpointCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['BestCheckpointCallback'] = None¶
- tolerances: list[int] | None = None¶
If provided, this callback will produce multiple best models that differ in the number of intervals the metric is allowed to go without improving. For example, tolerances=[5] with every_n_epochs=1 will store a checkpoint where at most 5 epochs have passed before the metric improved. Additionally, the best checkpoint over the whole training run is always stored (i.e., tolerance=infinite). By setting different tolerances, one can evaluate different early-stopping configurations with a single training run.
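A hedged sketch of combining the id and tolerances fields when two BestCheckpointCallback instances coexist; the field selecting which metric each instance tracks is not documented in this section, so the ids below are purely illustrative:

```python
from noether.core.schemas.callbacks import BestCheckpointCallbackConfig

# Two stateful callbacks of the same type need distinct ids so their
# state dicts are matched correctly on resume.
best_a = BestCheckpointCallbackConfig(
    id="best-checkpoint-a",
    every_n_epochs=1,
    tolerances=[5],  # also keep the best model under a 5-epoch patience
)
best_b = BestCheckpointCallbackConfig(
    id="best-checkpoint-b",
    every_n_epochs=1,
)
```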
- class noether.core.schemas.callbacks.CheckpointCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['CheckpointCallback'] = None¶
- save_weights: bool = None¶
Whether to save the weights of the model every time this callback is invoked. The checkpoint name will contain the training iteration (e.g., epoch/update/sample) at which the checkpoint was saved.
- save_optim: bool = None¶
Whether to save the optimizer state every time this callback is invoked. The checkpoint name will contain the training iteration (e.g., epoch/update/sample) at which the checkpoint was saved.
- save_latest_weights: bool = None¶
Whether to save the latest weights of the model every time this callback is invoked. Note that the latest weights are always overwritten on the next invocation of this callback.
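A minimal sketch of a periodic snapshot configuration using only the fields documented above (values are illustrative):

```python
from noether.core.schemas.callbacks import CheckpointCallbackConfig

# Snapshot weights and optimizer state every 1000 updates; also keep a
# rolling "latest" copy that is overwritten on each invocation.
ckpt = CheckpointCallbackConfig(
    every_n_updates=1000,
    save_weights=True,
    save_optim=True,
    save_latest_weights=True,
)
```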
- class noether.core.schemas.callbacks.EmaCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['EmaCallback'] = None¶
- model_paths: list[str | None] | None = None¶
The paths to the models to apply the EMA to (e.g., composite_model.encoder/composite_model.decoder, the paths of the PyTorch nn.Modules in the checkpoint). If None, the EMA is applied to the whole model. When training with a CompositeModel, the paths of the submodules (i.e., ‘encoder’, ‘decoder’, etc.) should be provided via this field; otherwise the EMA is applied to the CompositeModel as a whole, which cannot be restored later on.
- save_last_weights: bool = None¶
Save the weights of the model when training is over (i.e., at the end of training, save the EMA weights).
- save_latest_weights: bool = None¶
Save the latest EMA weights. Note that the latest weights are always overwritten on the next invocation of this callback.
- eval_callbacks: list[Annotated[Any, Discriminated(CallBackBaseConfig)]] | None = None¶
Optional nested periodic callbacks to run against EMA weights. Each child retains its own schedule (every_n_epochs etc.); the EMA callback swaps its stored EMA parameters into the live model around eval-time hooks (after_epoch, after_update, at_eval) and restores the live weights on exit. Children are dispatched once per target_factor and their metric keys are automatically prefixed with ema=<factor>/ to avoid collisions with live-model metrics. Note: before_training and after_training are forwarded without swapping, so EMA initialization and the final save see live weights.
- class noether.core.schemas.callbacks.OnlineLossCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['OnlineLossCallback'] = None¶
- class noether.core.schemas.callbacks.BestMetricCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['BestMetricCallback'] = None¶
The metric used to determine whether the current model obtained a new best (e.g., loss/valid/total).
- class noether.core.schemas.callbacks.TrackAdditionalOutputsCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['TrackAdditionalOutputsCallback'] = None¶
- keys: list[str] | None = None¶
List of keys to track in the additional_outputs of the TrainerResult returned by the trainer’s update step.
- patterns: list[str] | None = None¶
List of patterns to track in the additional_outputs of the TrainerResult returned by the trainer’s update step. A pattern matches if it is contained in one of the update_outputs keys.
- reduce: Literal['mean', 'last'] = None¶
The reduction method applied to the tracked values to reduce them to a scalar. Currently supports ‘mean’ and ‘last’.
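A minimal sketch showing keys, patterns, and reduce together (the key and pattern strings are hypothetical; actual keys depend on what the trainer’s update step puts into additional_outputs):

```python
from noether.core.schemas.callbacks import TrackAdditionalOutputsCallbackConfig

# Track an exact key plus any update_outputs key containing "loss/",
# averaging the collected values into a scalar per interval.
track = TrackAdditionalOutputsCallbackConfig(
    every_n_updates=50,
    keys=["grad_norm"],   # hypothetical exact key
    patterns=["loss/"],   # matches keys that contain "loss/"
    reduce="mean",
)
```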
- class noether.core.schemas.callbacks.OfflineLossCallbackConfig(/, **data)¶
Bases:
PeriodicDataIteratorCallbackConfig
- Parameters:
data (Any)
- name: Literal['OfflineLossCallback'] = None¶
- class noether.core.schemas.callbacks.MetricEarlyStopperConfig(/, **data)¶
Bases:
CallBackBaseConfig
- Parameters:
data (Any)
- name: Literal['MetricEarlyStopper'] = None¶
- class noether.core.schemas.callbacks.FixedEarlyStopperConfig(/, **data)¶
Bases:
pydantic.BaseModel
- Parameters:
data (Any)
- name: Literal['FixedEarlyStopper'] = None¶
- validate_callback_frequency()¶
Ensures that exactly one stop condition (‘stop_at_*’) is specified.
- class noether.core.schemas.callbacks.PyTorchProfilerCallbackConfig(/, **data)¶
Bases:
CallBackBaseConfig
Configuration for the PyTorch profiler callback.
The profiler uses torch.profiler.profile with a scheduled trace. Profiling is driven by track_after_update_step hooks, i.e. the profiler is stepped once per optimizer update. The resulting traces are written to <run_output_path>/profiler and can be opened in TensorBoard or chrome://tracing.
Recommended usage: limit training with trainer.max_updates to a value slightly larger than wait + warmup + active (times repeat if > 1).
- Parameters:
data (Any)
- repeat: int = None¶
Number of times the (wait, warmup, active) cycle is repeated. 0 means repeat indefinitely.
- with_stack: bool = None¶
Whether to record Python call stacks for each op (can add significant overhead).
- profile_cuda: bool = None¶
Whether to profile CUDA operations. If False, only CPU operations are profiled.
- profile_cpu: bool = None¶
Whether to profile CPU operations. If False, only CUDA operations are profiled.
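To make the recommended trainer.max_updates sizing concrete, a small arithmetic sketch (the wait, warmup, and active values follow torch.profiler.schedule semantics; the numbers are illustrative, and the defaults of those three fields are not shown in this section):

```python
# Schedule parameters, per torch.profiler.schedule semantics.
wait, warmup, active, repeat = 5, 5, 10, 2  # illustrative values

# Each cycle spans wait + warmup + active optimizer updates; with
# repeat > 1 the cycle runs `repeat` times before profiling stops.
updates_per_cycle = wait + warmup + active           # 20
updates_needed = updates_per_cycle * max(repeat, 1)  # 40

# Recommended: set trainer.max_updates slightly larger than this.
max_updates = updates_needed + 5                     # 45
```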
- noether.core.schemas.callbacks.CallbacksConfig¶