noether.modeling.models

Submodules

Classes

AnchoredBranchedUPT

Implementation of the Anchored Branched UPT model. It includes input embedding and output projection, so it can be used off the shelf by providing the appropriate input tensors.

Transformer

Implementation of a Transformer model.

UPT

Implementation of the UPT (Universal Physics Transformer) model.

Package Contents

class noether.modeling.models.AnchoredBranchedUPT(config)

Bases: torch.nn.Module

Implementation of the Anchored Branched UPT model. It includes input embedding and output projection, so it can be used off the shelf by providing the appropriate input tensors.

Parameters:

config (noether.core.schemas.models.AnchorBranchedUPTConfig) – Configuration for the AB-UPT model. See AnchorBranchedUPTConfig for details.

data_specs
rope
pos_embed
encoder
geometry_blocks
surface_bias
volume_bias
num_perceivers = 0
physics_blocks
use_geometry_branch = False
surface_decoder_blocks
volume_decoder_blocks
surface_decoder
volume_decoder
geometry_branch_forward(geometry_position, geometry_supernode_idx, geometry_batch_idx, condition, geometry_attn_kwargs)

Forward pass through the geometry branch of the model.

Parameters:
Return type:

torch.Tensor

physics_blocks_forward(surface_position_all, volume_position_all, geometry_encoding, physics_token_specs, physics_attn_kwargs, physics_perceiver_attn_kwargs, condition)

Forward pass through the physics blocks of the model. Although the AB-UPT paper uses a perceiver block only as the first block, the physics blocks may contain additional perceiver blocks that attend to the geometry encoding.

Parameters:
Return type:

torch.Tensor

decoder_blocks_forward(x_physics, physics_token_specs, surface_token_specs, volume_token_specs, surface_position_all, volume_position_all, surface_decoder_attn_kwargs, volume_decoder_attn_kwargs, condition)

Forward pass through the decoder blocks of the model. Surface and volume tokens are processed by separate decoders.

Parameters:
Return type:

tuple[torch.Tensor, torch.Tensor]

create_rope_frequencies(geometry_position, geometry_supernode_idx, surface_position_all, volume_position_all)

Create RoPE frequencies for all relevant positions.

Parameters:
forward(geometry_position, geometry_supernode_idx, geometry_batch_idx, surface_anchor_position, volume_anchor_position, geometry_design_parameters=None, inflow_design_parameters=None, query_surface_position=None, query_volume_position=None)

Forward pass of the AB-UPT model.

Parameters:
  • geometry_position (torch.Tensor) – Coordinates of the geometry mesh. Tensor of shape (B * N_geometry, D_pos), sparse tensor

  • geometry_supernode_idx (torch.Tensor) – Indices of the supernodes for the geometry points. Tensor of shape (B * N_supernodes,)

  • geometry_batch_idx (torch.Tensor | None) – Batch indices for the geometry points. Tensor of shape (B * N_geometry,). If None, assumes all points belong to the same batch.

  • surface_anchor_position (torch.Tensor) – Coordinates of the surface anchor points. Tensor of shape (B, N_surface_anchor, D_pos)

  • volume_anchor_position (torch.Tensor) – Coordinates of the volume anchor points. Tensor of shape (B, N_volume_anchor, D_pos)

  • geometry_design_parameters (torch.Tensor | None) – Design parameters related to the geometry to condition on. Tensor of shape (B, D_geom)

  • inflow_design_parameters (torch.Tensor | None) – Design parameters related to the inflow to condition on. Tensor of shape (B, D_inflow).

  • query_surface_position (torch.Tensor | None) – Coordinates of the query surface points.

  • query_volume_position (torch.Tensor | None) – Coordinates of the query volume points.

Returns:

A dictionary containing the predictions for surface and volume fields, sliced according to the data specifications.

Return type:

dict[str, torch.Tensor]
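As a minimal sketch of how the forward inputs fit together, the snippet below assembles tensors with the documented shapes. All sizes are illustrative, and the model instantiation is commented out because it requires a full AnchorBranchedUPTConfig; the point is the flattened ("sparse") layout of the geometry tensors versus the dense, batched anchor tensors.

```python
import torch

# Illustrative sizes; real values come from your dataset and config.
B = 2                  # batch size
N_geometry = 100       # geometry points per sample
N_supernodes = 16      # supernodes per sample
N_surface_anchor = 32
N_volume_anchor = 48
D_pos = 3              # spatial dimension

# Geometry inputs are flattened across the batch ("sparse" layout):
# samples are concatenated along dim 0, and geometry_batch_idx maps
# each point back to its sample.
geometry_position = torch.rand(B * N_geometry, D_pos)
geometry_batch_idx = torch.arange(B).repeat_interleave(N_geometry)

# Supernode indices select B * N_supernodes rows out of the flattened points.
geometry_supernode_idx = torch.cat(
    [torch.randperm(N_geometry)[:N_supernodes] + b * N_geometry for b in range(B)]
)

# Anchor positions are dense, batched tensors.
surface_anchor_position = torch.rand(B, N_surface_anchor, D_pos)
volume_anchor_position = torch.rand(B, N_volume_anchor, D_pos)

# With a configured model:
# model = AnchoredBranchedUPT(config)
# predictions = model(
#     geometry_position=geometry_position,
#     geometry_supernode_idx=geometry_supernode_idx,
#     geometry_batch_idx=geometry_batch_idx,
#     surface_anchor_position=surface_anchor_position,
#     volume_anchor_position=volume_anchor_position,
# )
# predictions is a dict[str, torch.Tensor] keyed by field name.
```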

class noether.modeling.models.Transformer(config)

Bases: torch.nn.Module

Implementation of a Transformer model.

Parameters:

config (noether.core.schemas.models.TransformerConfig) – Configuration of the Transformer model.

blocks
forward(x, attn_kwargs)

Forward pass of the Transformer model.

Parameters:
  • x (torch.Tensor) – Input tensor of shape (batch_size, seq_len, hidden_dim).

  • attn_kwargs (dict[str, torch.Tensor]) – Additional arguments for the attention mechanism.

Returns:

Output tensor after processing through the Transformer model.

Return type:

torch.Tensor
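A quick sketch of the expected input shapes, with illustrative sizes; hidden_dim must match the TransformerConfig, and the exact contents of attn_kwargs depend on the attention implementation (e.g. RoPE frequencies), so an empty dict stands in here.

```python
import torch

# Illustrative sizes only; hidden_dim must match the TransformerConfig.
batch_size, seq_len, hidden_dim = 2, 64, 128
x = torch.rand(batch_size, seq_len, hidden_dim)

# Extra inputs for the attention mechanism; contents are
# implementation-specific, so an empty dict is used as a placeholder.
attn_kwargs: dict[str, torch.Tensor] = {}

# With a configured model:
# model = Transformer(config)
# y = model(x, attn_kwargs)  # output has the same shape as x
```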

class noether.modeling.models.UPT(config)

Bases: torch.nn.Module

Implementation of the UPT (Universal Physics Transformer) model.

Parameters:

config (noether.core.schemas.models.UPTConfig) – Configuration for the UPT model. See UPTConfig for details.

encoder
use_rope
pos_embed
approximator_blocks
decoder
norm
prediction_layer
compute_rope_args(surface_position_batch_idx, surface_position, surface_position_supernode_idx, query_position)

Compute the RoPE frequency arguments for the surface_position and query_position. If we don’t use RoPE, return empty dicts.

Parameters:
Return type:

tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]

forward(surface_position_batch_idx, surface_position_supernode_idx, surface_position, query_position)

Forward pass of the UPT model.

Parameters:
  • surface_position_batch_idx (torch.Tensor) – Batch indices for the surface positions, since the surface positions are a sparse tensor for the supernode pooling.

  • surface_position_supernode_idx (torch.Tensor) – Supernode indices for the surface positions.

  • surface_position (torch.Tensor) – Input coordinates of the surface points.

  • query_position (torch.Tensor) – Input coordinates of the query points.

Returns:

Output tensor containing the predictions for the surface and volume fields, sliced according to the data specifications.

Return type:

torch.Tensor
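The sparse surface-position layout for UPT can be sketched the same way: surface points are concatenated across the batch, with batch and supernode indices driving the supernode pooling. Sizes are illustrative, and the query-position shape is an assumption (a dense batched tensor); the model call is commented out since it requires a full UPTConfig.

```python
import torch

# Illustrative sizes; real values come from the data pipeline.
B, N_surface, N_supernodes, N_query, D_pos = 2, 100, 16, 50, 3

# Surface positions are flattened across the batch ("sparse" layout),
# so each point carries a batch index for the supernode pooling.
surface_position = torch.rand(B * N_surface, D_pos)
surface_position_batch_idx = torch.arange(B).repeat_interleave(N_surface)

# Supernode indices pick B * N_supernodes rows from the flattened points.
surface_position_supernode_idx = torch.cat(
    [torch.randperm(N_surface)[:N_supernodes] + b * N_surface for b in range(B)]
)

# Query positions at which the model predicts the output fields
# (assumed dense and batched here).
query_position = torch.rand(B, N_query, D_pos)

# With a configured model:
# model = UPT(config)
# y = model(surface_position_batch_idx, surface_position_supernode_idx,
#           surface_position, query_position)
```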