noether.core.distributed
Submodules
Classes
Functions
Package Contents
- noether.core.distributed.barrier()
- noether.core.distributed.get_local_rank()
- noether.core.distributed.get_managed_rank()
- noether.core.distributed.get_managed_world_size()
- noether.core.distributed.get_num_nodes()
- noether.core.distributed.get_rank()
- noether.core.distributed.get_world_size()
- noether.core.distributed.is_data_rank0()
- noether.core.distributed.is_distributed()
- noether.core.distributed.is_local_rank0()
- noether.core.distributed.is_managed()
- noether.core.distributed.is_rank0()
- noether.core.distributed.set_config(new_config)
- Parameters:
new_config (DistributedConfig)
- noether.core.distributed.all_gather_grad(x, batch_dim=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.all_gather_nograd(x, batch_dim=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.all_gather_nograd_clipped(x, max_length=None, batch_dim=0)
- Parameters:
x (torch.Tensor)
max_length (int | None)
- Return type:
- noether.core.distributed.all_reduce_mean_grad(x)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.all_reduce_mean_nograd(x)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.all_reduce_sum_grad(x)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.all_reduce_sum_nograd(x)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.reduce_max_grad(x, dest_rank=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.reduce_max_nograd(x, dest_rank=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.reduce_mean_grad(x, dest_rank=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.reduce_mean_nograd(x, dest_rank=0)
- Parameters:
x (torch.Tensor)
- Return type:
- noether.core.distributed.run(main, devices=None, accelerator='gpu', master_port=None)
- noether.core.distributed.run_managed(main, accelerator='gpu', devices=None)
- noether.core.distributed.run_unmanaged(main, devices, accelerator='gpu', master_port=None)
- noether.core.distributed.accelerator_to_device(accelerator)
- noether.core.distributed.check_single_device_visible(accelerator)
- noether.core.distributed.log_device_info(accelerator, device_ids)
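The rank and world-size helpers listed above follow the convention used by torchrun-style launchers, which communicate process topology through the RANK, LOCAL_RANK, and WORLD_SIZE environment variables. The sketch below illustrates that convention only; it is an assumption for illustration, not noether's actual implementation:

```python
import os

def get_rank() -> int:
    # Global rank across all processes; defaults to 0 when not
    # launched under a distributed launcher.
    return int(os.environ.get("RANK", "0"))

def get_local_rank() -> int:
    # Rank within the current node, typically used to select a device.
    return int(os.environ.get("LOCAL_RANK", "0"))

def get_world_size() -> int:
    # Total number of processes; 1 when running standalone.
    return int(os.environ.get("WORLD_SIZE", "1"))

def is_rank0() -> bool:
    # Common guard so that logging or checkpointing happens exactly once.
    return get_rank() == 0

def is_distributed() -> bool:
    return get_world_size() > 1
```

With this convention, a single unlaunched process degrades gracefully: `get_rank()` returns 0 and `is_rank0()` is true, so code guarded by it still runs.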
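Judging by its signature, all_gather_nograd_clipped gathers each rank's tensor, concatenates along batch_dim, and truncates the result when max_length is set. The list-based sketch below (a hypothetical helper, not the library's code, which operates on torch.Tensor within a process group) illustrates just that gather-and-clip semantics:

```python
def all_gather_clipped_sketch(per_rank_batches, max_length=None):
    # Simulate gathering every rank's batch and concatenating along
    # the batch dimension (here: plain list concatenation).
    gathered = [item for batch in per_rank_batches for item in batch]
    # Clip the concatenated result to max_length, if given.
    if max_length is not None:
        gathered = gathered[:max_length]
    return gathered
```

For example, gathering `[[1, 2], [3], [4, 5, 6]]` with `max_length=4` yields `[1, 2, 3, 4]`; without `max_length` the full concatenation `[1, 2, 3, 4, 5, 6]` is returned.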
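The _grad/_nograd suffixes on the reduction collectives distinguish differentiable variants, where the reduction participates in autograd so gradients flow back to every contributing rank, from detached ones. The arithmetic itself is elementwise sum or mean across ranks; a scalar sketch of those semantics (hypothetical helpers, assuming each list element stands for one rank's value):

```python
def all_reduce_sum_sketch(per_rank_values):
    # After a sum all-reduce, every rank holds the sum of all ranks' values.
    return sum(per_rank_values)

def all_reduce_mean_sketch(per_rank_values):
    # Mean reduction: the sum divided by the world size. In a typical
    # differentiable implementation, the backward pass scales each
    # rank's gradient by 1 / world_size.
    return sum(per_rank_values) / len(per_rank_values)
```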