noether.core.schemas.models.ab_upt

Classes

AnchorBranchedUPTConfig

Configuration for the Anchor-Branched UPT (AB-UPT) model.

Module Contents

class noether.core.schemas.models.ab_upt.AnchorBranchedUPTConfig(/, **data)

Bases: noether.core.schemas.models.base.ModelBaseConfig, noether.core.schemas.mixins.InjectSharedFieldFromParentMixin

Configuration for the Anchor-Branched UPT (AB-UPT) model.

Auto-registered in the config registry via __init_subclass__ on its ModelBaseConfig base.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

model_config

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
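
Because the config is a Pydantic model, construction and validation follow standard Pydantic semantics. A minimal sketch of keyword-argument construction; the field values below are illustrative placeholders, not the library's actual nested config contents:

    from pydantic import ValidationError

    from noether.core.schemas.models.ab_upt import AnchorBranchedUPTConfig

    try:
        config = AnchorBranchedUPTConfig(
            transformer_block_config=...,  # a TransformerBlockConfig (shared)
            physics_blocks=["self", "cross", "joint"],
            num_domain_decoder_blocks={"surface": 2, "volume": 2},
            data_specs=...,  # a ModelDataSpecs instance
            hidden_dim=192,
            geometry_depth=4,
        )
    except ValidationError as exc:
        # Raised when the keyword arguments cannot be validated into a model.
        print(exc)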

supernode_pooling_config: Annotated[noether.core.schemas.modules.encoders.SupernodePoolingConfig, noether.core.schemas.mixins.Shared] | None = None
transformer_block_config: Annotated[noether.core.schemas.modules.blocks.TransformerBlockConfig, noether.core.schemas.mixins.Shared]
geometry_depth: int = None

Number of transformer blocks in the geometry encoder.

hidden_dim: int = None

Hidden dimension of the model.

physics_blocks: list[Literal['self', 'shared', 'cross', 'joint', 'perceiver']]

Types of physics blocks to use in the model. Options are "self", "cross", "joint", and "perceiver":

self: Self-attention within a branch; attention weights are shared between all domains.
cross: Cross-attention between domains; each domain attends to all other domains' anchors.
joint: Joint attention over all domain points, i.e. full self-attention over all points.
perceiver: Perceiver-style cross-attention to the geometry encoding.

Note: "shared" is a deprecated alias for "self" and will be removed in a future release.
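
For example, a schedule that alternates per-branch self-attention with cross-domain attention and ends with a joint block (illustrative only; any combination of the literals above is valid):

    physics_blocks = ["self", "cross", "self", "cross", "joint"]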

num_domain_decoder_blocks: dict[str, int]

Number of final domain-specific decoder blocks with self-attention and no weight sharing, e.g. {"surface": 2, "volume": 2}.

init_weights: noether.core.types.InitWeightsMode = None

Weight initialization of linear layers. Defaults to "truncnormal002".

drop_path_rate: float = None

Drop path rate for stochastic depth. Defaults to 0.0 (no drop path).

data_specs: noether.core.schemas.dataset.ModelDataSpecs

Data specifications for the model.

migrate_shared_to_self()

Migrate the deprecated 'shared' block type to 'self'.

Return type:

AnchorBranchedUPTConfig
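
A plausible sketch of what this validator does, written as a standalone Pydantic v2 model; the actual implementation in noether may differ:

    import warnings
    from typing import Literal

    from pydantic import BaseModel, model_validator

    class PhysicsBlocksSketch(BaseModel):
        physics_blocks: list[Literal["self", "shared", "cross", "joint", "perceiver"]]

        @model_validator(mode="after")
        def migrate_shared_to_self(self) -> "PhysicsBlocksSketch":
            # Rewrite the deprecated 'shared' alias to 'self'.
            if "shared" in self.physics_blocks:
                warnings.warn(
                    "'shared' is deprecated; use 'self' instead.",
                    DeprecationWarning,
                    stacklevel=2,
                )
                self.physics_blocks = [
                    "self" if block == "shared" else block
                    for block in self.physics_blocks
                ]
            return self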

set_condition_dim()

Set condition_dim in transformer_block_config based on data_specs.

Return type:

AnchorBranchedUPTConfig
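
A sketch of the idea with stand-in models; the condition_dim attribute names on the data specs and block config are hypothetical, not confirmed by the library:

    from pydantic import BaseModel, model_validator

    class BlockConfigSketch(BaseModel):
        condition_dim: int | None = None

    class DataSpecsSketch(BaseModel):
        condition_dim: int  # hypothetical: where the conditioning width lives

    class ConfigSketch(BaseModel):
        transformer_block_config: BlockConfigSketch
        data_specs: DataSpecsSketch

        @model_validator(mode="after")
        def set_condition_dim(self) -> "ConfigSketch":
            # Propagate the conditioning dimension from the data specs into
            # the shared transformer block config.
            self.transformer_block_config.condition_dim = self.data_specs.condition_dim
            return self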

rope_frequency_config()

Config for the rotary position embedding (RoPE) frequencies.

Return type:

noether.core.schemas.modules.layers.RopeFrequencyConfig

pos_embed_config()

Config for the continuous sin-cos position embedding.

Return type:

noether.core.schemas.modules.layers.ContinuousSincosEmbeddingConfig

bias_mlp_config()

Return type:

noether.core.schemas.modules.mlp.MLPConfig

perceiver_block_config()

Config for the Perceiver-style cross-attention block used by the "perceiver" physics block.

Return type:

noether.core.schemas.modules.blocks.PerceiverBlockConfig

domain_decoder_configs()

Per-domain decoder projection configs, keyed by domain name.

Return type:

dict[str, noether.core.schemas.modules.layers.LinearProjectionConfig]

validate_parameters()

Validate consistency of parameters across the model and its submodules.

Ensures that hidden_dim is consistent across parent and all submodules. Note: transformer_block_config validates hidden_dim % num_heads == 0 in its own validator.

Return type:

AnchorBranchedUPTConfig
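
A minimal sketch of the consistency check described above, using stand-in models in place of the real nested configs:

    from pydantic import BaseModel, model_validator

    class SubmoduleSketch(BaseModel):
        hidden_dim: int

    class ModelConfigSketch(BaseModel):
        hidden_dim: int
        transformer_block_config: SubmoduleSketch

        @model_validator(mode="after")
        def validate_parameters(self) -> "ModelConfigSketch":
            # hidden_dim must agree between the parent config and its submodules.
            if self.transformer_block_config.hidden_dim != self.hidden_dim:
                raise ValueError(
                    f"hidden_dim mismatch: parent={self.hidden_dim}, "
                    f"transformer_block={self.transformer_block_config.hidden_dim}"
                )
            return self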