noether.core.distributed¶
Package Contents¶
- noether.core.distributed.barrier()¶
- noether.core.distributed.get_local_rank()¶
- noether.core.distributed.get_managed_rank()¶
- noether.core.distributed.get_managed_world_size()¶
- noether.core.distributed.get_num_nodes()¶
- noether.core.distributed.get_rank()¶
- noether.core.distributed.get_world_size()¶
- noether.core.distributed.is_data_rank0()¶
- noether.core.distributed.is_distributed()¶
- noether.core.distributed.is_local_rank0()¶
- noether.core.distributed.is_managed()¶
- noether.core.distributed.is_rank0()¶
- noether.core.distributed.set_config(new_config)¶
- Parameters:
new_config (DistributedConfig)
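The rank and topology helpers above are typically used to guard side effects (logging, checkpointing) so they run on one process only. The sketch below is illustrative, not the library's implementation: it uses hypothetical single-process stand-ins that mirror the documented names, where real code would import them from `noether.core.distributed` instead.

```python
# Hypothetical single-process stand-ins mirroring the helpers above;
# real code would import these from noether.core.distributed.
def get_rank() -> int:
    # Global rank of this process; 0 when not running distributed.
    return 0

def get_world_size() -> int:
    # Total number of processes; 1 when not running distributed.
    return 1

def is_rank0() -> bool:
    return get_rank() == 0

def is_distributed() -> bool:
    return get_world_size() > 1

def log_once(message: str) -> None:
    # Guard logging so it happens once instead of once per worker.
    if is_rank0():
        print(message)

log_once(f"distributed={is_distributed()} world_size={get_world_size()}")
```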
- noether.core.distributed.all_gather_grad(x, batch_dim=0)¶
- noether.core.distributed.all_gather_nograd(x)¶
- noether.core.distributed.all_gather_nograd_clipped(x, max_length)¶
- noether.core.distributed.all_reduce_mean_grad(x)¶
- noether.core.distributed.all_reduce_mean_nograd(x)¶
- noether.core.distributed.all_reduce_sum_grad(x)¶
- noether.core.distributed.all_reduce_sum_nograd(x)¶
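Judging by the names, the `_grad` variants participate in autograd while the `_nograd` variants do not, but both families share the same collective semantics. Those semantics can be sketched in plain Python, with each list entry standing in for one rank's tensor; the real functions operate on tensors inside an initialized process group, so this only illustrates the math.

```python
# Plain-Python emulation of the collective semantics; each inner list
# stands in for one rank's tensor. Purely illustrative.
per_rank_values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # three "ranks"
world_size = len(per_rank_values)

def all_gather(values):
    # Every rank ends up with the concatenation along the batch dim (dim 0).
    return [v for rank in values for v in rank]

def all_reduce_sum(values):
    # Every rank ends up with the elementwise sum across ranks.
    return [sum(col) for col in zip(*values)]

def all_reduce_mean(values):
    # Elementwise sum across ranks divided by the world size.
    return [s / world_size for s in all_reduce_sum(values)]

gathered = all_gather(per_rank_values)    # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
summed = all_reduce_sum(per_rank_values)  # [9.0, 12.0]
mean = all_reduce_mean(per_rank_values)   # [3.0, 4.0]
```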
- noether.core.distributed.run(main, devices=None, accelerator='gpu', master_port=None)¶
- noether.core.distributed.run_managed(main, accelerator='gpu', devices=None)¶
- noether.core.distributed.run_unmanaged(main, devices, accelerator='gpu', master_port=None)¶
- noether.core.distributed.accelerator_to_device(accelerator)¶
- noether.core.distributed.check_single_device_visible(accelerator)¶
- noether.core.distributed.log_device_info(accelerator, device_ids)¶
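The launcher entrypoints take a `main` callable and fan it out across devices; `run_managed` and `run_unmanaged` presumably correspond to scheduler-managed and locally spawned jobs. The sketch below assumes only the documented `run(main, devices, ...)` signature and uses a hypothetical sequential fallback in place of real process spawning.

```python
# Hypothetical stand-in for the launcher: a real run() would spawn one
# process per device; this fallback calls main sequentially instead.
def run(main, devices=None, accelerator="gpu", master_port=None):
    for device in devices or [0]:
        main(device)

launched = []

def train_main(device):
    # Training entrypoint; receives the device index it should use.
    launched.append(device)

run(train_main, devices=[0, 1])
```

In the real API, `main` would run once per spawned worker process, with the distributed environment (rank, world size, master port) already configured before it is called.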