
Config

Provides configuration classes.

MLPConfig

MLPConfig dataclass

MLPConfig(hidden_dim, n_layers, output_activation, linear_cfg)

Multi-layer perceptron configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `hidden_dim` | `int` | Number of hidden units. |
| `n_layers` | `int` | Number of layers. |
| `output_activation` | `str` | Activation function for the output layer. |
| `linear_cfg` | `LinearConfig` | Linear layer configuration. |

Attributes

hidden_dim instance-attribute

hidden_dim

linear_cfg instance-attribute

linear_cfg

n_layers instance-attribute

n_layers

output_activation instance-attribute

output_activation

Functions

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for MLPConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `MLPConfig`."""
    self.linear_cfg.dictcfg2dict()
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))
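Constructing an `MLPConfig` is plain dataclass usage. The sketch below uses local stand-ins that mirror the documented signatures (the real classes live in `src/ml_networks/config.py`, so this is illustrative only):

```python
from dataclasses import dataclass, field

# Local stand-ins mirroring the documented signatures; the real classes
# live in src/ml_networks/config.py.
@dataclass
class LinearConfig:
    activation: str
    norm: str = "none"
    norm_cfg: dict = field(default_factory=dict)
    dropout: float = 0.0
    norm_first: bool = False
    bias: bool = True

@dataclass
class MLPConfig:
    hidden_dim: int
    n_layers: int
    output_activation: str
    linear_cfg: LinearConfig

cfg = MLPConfig(
    hidden_dim=256,
    n_layers=3,
    output_activation="Tanh",
    linear_cfg=LinearConfig(activation="ReLU", dropout=0.1),
)
assert cfg.hidden_dim == 256
assert cfg.linear_cfg.activation == "ReLU"
```

Note that `MLPConfig` nests a `LinearConfig`, which is why `dictcfg2dict` recurses into `self.linear_cfg` first.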

LinearConfig

LinearConfig dataclass

LinearConfig(activation, norm='none', norm_cfg=dict(), dropout=0.0, norm_first=False, bias=True)

A linear layer configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `activation` | `str` | Activation function. |
| `norm` | `Literal['layer', 'rms', 'none']` | Normalization layer. If set to `"none"`, normalization is not applied. Default is `"none"`. |
| `norm_cfg` | `dict` | Normalization layer configuration. Default is `{}`. |
| `dropout` | `float` | Dropout rate. If set to `0.0`, dropout is not applied. Default is `0.0`. |
| `norm_first` | `bool` | Whether to apply normalization before the linear layer. Default is `False`. |
| `bias` | `bool` | Whether to use a bias term. Default is `True`. |

Attributes

activation instance-attribute

activation

bias class-attribute instance-attribute

bias = True

dropout class-attribute instance-attribute

dropout = 0.0

norm class-attribute instance-attribute

norm = 'none'

norm_cfg class-attribute instance-attribute

norm_cfg = field(default_factory=dict)

norm_first class-attribute instance-attribute

norm_first = False

Functions

__post_init__

__post_init__()

Set norm_cfg.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Set `norm_cfg`."""
    if self.norm == "none":
        self.norm_cfg = {}
    else:
        self.norm_cfg = dict(self.norm_cfg)
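The effect of `__post_init__` can be seen with a minimal local replica of the class (the real one lives in `src/ml_networks/config.py`): when `norm` is `"none"` any supplied `norm_cfg` is discarded, otherwise it is copied into a plain dict.

```python
from dataclasses import dataclass, field

# Minimal replica of LinearConfig for illustration; the real class lives
# in src/ml_networks/config.py.
@dataclass
class LinearConfig:
    activation: str
    norm: str = "none"
    norm_cfg: dict = field(default_factory=dict)
    dropout: float = 0.0
    norm_first: bool = False
    bias: bool = True

    def __post_init__(self) -> None:
        if self.norm == "none":
            self.norm_cfg = {}                   # norm disabled: drop any stale cfg
        else:
            self.norm_cfg = dict(self.norm_cfg)  # copy into a plain dict

kept = LinearConfig(activation="ReLU", norm="layer", norm_cfg={"eps": 1e-5})
assert kept.norm_cfg == {"eps": 1e-5}

cleared = LinearConfig(activation="ReLU", norm="none", norm_cfg={"eps": 1e-5})
assert cleared.norm_cfg == {}
```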

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for LinearConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `LinearConfig`."""
    self.norm_cfg = dict(self.norm_cfg)
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))

ConvConfig

ConvConfig dataclass

ConvConfig(activation, kernel_size, stride, padding, output_padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', dropout=0.0, norm='none', norm_cfg=dict(), norm_first=False, scale_factor=0)

A convolutional layer configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `activation` | `str` | Activation function. |
| `kernel_size` | `int` | Kernel size. |
| `stride` | `int` | Stride. |
| `padding` | `int` | Padding. |
| `output_padding` | `int` | Output padding, used mainly for transposed convolution. Default is `0`. |
| `dilation` | `int` | Dilation. Default is `1`. |
| `groups` | `int` | Number of groups. Default is `1`. See https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html. |
| `bias` | `bool` | Whether to use a bias term. Default is `True`. |
| `padding_mode` | `str` | Padding mode for the convolution. Default is `"zeros"`. |
| `dropout` | `float` | Dropout rate. If set to `0.0`, dropout is not applied. Default is `0.0`. |
| `norm` | `Literal['batch', 'group', 'none']` | Normalization layer. If set to `"none"`, normalization is not applied. Default is `"none"`. |
| `norm_cfg` | `dict` | Normalization layer configuration. For Instance, Layer, or Group normalization, set `norm` to `"group"` and set `norm_cfg["num_groups"]` to the number of input channels, `1`, or any other group count, respectively. Default is `{}`. |
| `norm_first` | `bool` | Whether to apply normalization before the convolution layer. Default is `False`. |
| `scale_factor` | `int` | Scale factor for up/downsampling, used mainly for PixelShuffle or PixelUnshuffle. If `> 0`, upsampling is applied; if `< 0`, downsampling is applied; otherwise neither is applied. Default is `0`. |

Attributes

activation instance-attribute

activation

bias class-attribute instance-attribute

bias = True

dilation class-attribute instance-attribute

dilation = 1

dropout class-attribute instance-attribute

dropout = 0.0

groups class-attribute instance-attribute

groups = 1

kernel_size instance-attribute

kernel_size

norm class-attribute instance-attribute

norm = 'none'

norm_cfg class-attribute instance-attribute

norm_cfg = field(default_factory=dict)

norm_first class-attribute instance-attribute

norm_first = False

output_padding class-attribute instance-attribute

output_padding = 0

padding instance-attribute

padding

padding_mode class-attribute instance-attribute

padding_mode = 'zeros'

scale_factor class-attribute instance-attribute

scale_factor = 0

stride instance-attribute

stride

Functions

__post_init__

__post_init__()

Set norm_cfg.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Set `.norm_cfg`."""
    if self.norm == "none":
        self.norm_cfg = {}
    else:
        self.norm_cfg = dict(self.norm_cfg)

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for ConvConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `ConvConfig`."""
    self.norm_cfg = dict(self.norm_cfg)

    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))
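The sign convention for `scale_factor` can be summarized with a small helper. Note that `interpret_scale` is hypothetical, written here only to illustrate the documented convention; it is not part of the library:

```python
def interpret_scale(scale_factor: int) -> str:
    """Hypothetical helper illustrating ConvConfig.scale_factor's convention:
    > 0 -> upsample (e.g. PixelShuffle), < 0 -> downsample (PixelUnshuffle),
    0 -> neither."""
    if scale_factor > 0:
        return "upsample"
    if scale_factor < 0:
        return "downsample"
    return "none"

assert interpret_scale(2) == "upsample"
assert interpret_scale(-2) == "downsample"
assert interpret_scale(0) == "none"
```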

ConvNetConfig

ConvNetConfig dataclass

ConvNetConfig(channels, conv_cfgs, attention=None, init_channel=16)

Convolutional neural network layers configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `channels` | `Tuple[int, ...]` | Number of channels for each layer. |
| `conv_cfgs` | `Tuple[ConvConfig, ...]` | Convolutional layer configurations. The length of `conv_cfgs` should be the same as the length of `channels`. |
| `init_channel` | `int` | Initial number of channels, used mainly for transposed convolution. |

Attributes

attention class-attribute instance-attribute

attention = None

channels instance-attribute

channels

conv_cfgs instance-attribute

conv_cfgs

init_channel class-attribute instance-attribute

init_channel = 16

Functions

__post_init__

__post_init__()

Set channels and conv_cfgs as tuple.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Set `channels` and `conv_cfgs` as tuple."""
    self.conv_cfgs = tuple(self.conv_cfgs)
    self.channels = tuple(self.channels)

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for ConvNetConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `ConvNetConfig`."""
    self.channels = tuple(self.channels)
    conv_cfgs = []
    for cfg_item in self.conv_cfgs:
        conv_cfg = cfg_item
        if isinstance(conv_cfg, DictConfig):
            conv_cfg_dict = convert_dictconfig_to_dict(conv_cfg)
            conv_cfg = ConvConfig(**conv_cfg_dict)
        conv_cfg.dictcfg2dict()
        conv_cfgs.append(conv_cfg)
    self.conv_cfgs = tuple(conv_cfgs)
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))
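A sketch of how `__post_init__` coerces `channels` and `conv_cfgs` to tuples, using a local replica of the dataclass (the real class lives in `src/ml_networks/config.py`; the strings stand in for `ConvConfig` instances):

```python
from dataclasses import dataclass

# Replica of ConvNetConfig's __post_init__ (tuple coercion only), for
# illustration; the real class lives in src/ml_networks/config.py.
@dataclass
class ConvNetConfig:
    channels: tuple
    conv_cfgs: tuple
    attention: object = None
    init_channel: int = 16

    def __post_init__(self) -> None:
        self.conv_cfgs = tuple(self.conv_cfgs)
        self.channels = tuple(self.channels)

# Lists (e.g. parsed from YAML) become tuples; conv_cfgs and channels
# are expected to have matching lengths per the documentation.
cfg = ConvNetConfig(channels=[32, 64, 128], conv_cfgs=["c1", "c2", "c3"])
assert isinstance(cfg.channels, tuple)
assert isinstance(cfg.conv_cfgs, tuple)
assert len(cfg.channels) == len(cfg.conv_cfgs)
```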

ResNetConfig

ResNetConfig dataclass

ResNetConfig(conv_channel, conv_kernel, f_kernel, conv_activation, out_activation, n_res_blocks, scale_factor=2, n_scaling=2, norm='none', norm_cfg=dict(), dropout=0.0, init_channel=16, padding_mode='zeros', attention=None)

Residual neural network layers configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `conv_channel` | `int` | Number of channels for the convolutional layers. In ResNet, a common number of channels is used for all layers. |
| `conv_kernel` | `int` | Kernel size for the convolutional layers. In ResNet, a common kernel size is used for all layers. |
| `f_kernel` | `int` | Kernel size for the final or first convolutional layer. This depends on whether PixelShuffle or PixelUnshuffle is used. |
| `conv_activation` | `str` | Activation function for the convolutional layers. |
| `out_activation` | `str` | Activation function for the output layer. |
| `n_res_blocks` | `int` | Number of residual blocks. |
| `scale_factor` | `int` | Scale factor for upsampling, used mainly for PixelShuffle or PixelUnshuffle. Default is `2`. |
| `n_scaling` | `int` | Number of upsample or downsample layers. The image size is scaled by `scale_factor ** n_scaling`. Default is `2`. |
| `norm` | `Literal['batch', 'group', 'none']` | Normalization layer. If set to `"none"`, normalization is not applied. Default is `"none"`. |
| `norm_cfg` | `dict` | Normalization layer configuration. For Instance, Layer, or Group normalization, set `norm` to `"group"` and set `norm_cfg["num_groups"]` to the number of input channels, `1`, or any other group count, respectively. Default is `{}`. |
| `dropout` | `float` | Dropout rate. If set to `0.0`, dropout is not applied. Default is `0.0`. |
| `init_channel` | `int` | Initial number of channels, used mainly for the decoder. Default is `16`. |

Attributes

attention class-attribute instance-attribute

attention = None

conv_activation instance-attribute

conv_activation

conv_channel instance-attribute

conv_channel

conv_kernel instance-attribute

conv_kernel

dropout class-attribute instance-attribute

dropout = 0.0

f_kernel instance-attribute

f_kernel

init_channel class-attribute instance-attribute

init_channel = 16

n_res_blocks instance-attribute

n_res_blocks

n_scaling class-attribute instance-attribute

n_scaling = 2

norm class-attribute instance-attribute

norm = 'none'

norm_cfg class-attribute instance-attribute

norm_cfg = field(default_factory=dict)

out_activation instance-attribute

out_activation

padding_mode class-attribute instance-attribute

padding_mode = 'zeros'

scale_factor class-attribute instance-attribute

scale_factor = 2

Functions

__post_init__

__post_init__()

Set norm_cfg.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Set `norm_cfg`."""
    if self.norm == "none":
        self.norm_cfg = {}
    else:
        self.norm_cfg = dict(self.norm_cfg)

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for ResNetConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `ResNetConfig`."""
    self.norm_cfg = dict(self.norm_cfg)
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))

EncoderConfig

EncoderConfig dataclass

EncoderConfig(backbone, full_connection)

Encoder configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `backbone` | `Union[ConvNetConfig, ResNetConfig]` | Backbone configuration. |
| `full_connection` | `Union[MLPConfig, LinearConfig, SpatialSoftmaxConfig]` | Full connection configuration. |

Attributes

backbone instance-attribute

backbone

full_connection instance-attribute

full_connection

Functions

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for EncoderConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `EncoderConfig`."""
    if hasattr(self.backbone, "dictcfg2dict"):
        self.backbone.dictcfg2dict()
    if hasattr(self.full_connection, "dictcfg2dict"):
        self.full_connection.dictcfg2dict()
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))

DecoderConfig

DecoderConfig dataclass

DecoderConfig(backbone, full_connection)

Decoder configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `backbone` | `Union[ConvNetConfig, ResNetConfig]` | Backbone configuration. |
| `full_connection` | `Union[MLPConfig, LinearConfig, SpatialSoftmaxConfig]` | Full connection configuration. |

Attributes

backbone instance-attribute

backbone

full_connection instance-attribute

full_connection

Functions

dictcfg2dict

dictcfg2dict()

Convert DictConfig to dict for DecoderConfig.

Source code in src/ml_networks/config.py
def dictcfg2dict(self) -> None:
    """Convert dictConfig to dict for `DecoderConfig`."""
    self.backbone.dictcfg2dict()
    if hasattr(self.full_connection, "dictcfg2dict"):
        self.full_connection.dictcfg2dict()
    for key, value in self.__dict__.items():
        if isinstance(value, DictConfig | ListConfig | list | tuple | dict):
            setattr(self, key, convert_dictconfig_to_dict(value))

ViTConfig

ViTConfig dataclass

ViTConfig(patch_size, transformer_cfg, cls_token=True, init_channel=16, unpatchify=False)

Vision Transformer configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `patch_size` | `int` | Patch size. |
| `transformer_cfg` | `TransformerConfig` | Transformer configuration. |
| `cls_token` | `bool` | Whether to use a class token. Default is `True`. |
| `init_channel` | `int` | Initial number of channels. Default is `16`. |
| `unpatchify` | `bool` | Whether to unpatchify the output. Default is `False`. |

Attributes

cls_token class-attribute instance-attribute

cls_token = True

init_channel class-attribute instance-attribute

init_channel = 16

patch_size instance-attribute

patch_size

transformer_cfg instance-attribute

transformer_cfg

unpatchify class-attribute instance-attribute

unpatchify = False

Functions

UNetConfig

UNetConfig dataclass

UNetConfig(channels, conv_cfg, cond_pred_scale=False, nhead=None, has_attn=False, use_shuffle=False, use_hypernet=False, hyper_mlp_cfg=None)

UNet configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `channels` | `Tuple[int, ...]` | Number of channels for each layer. |
| `conv_cfg` | `ConvConfig` | Convolutional layer configuration. |
| `cond_pred_scale` | `bool` | Whether to scale the conditional prediction. Default is `False`. |
| `nhead` | `Optional[int]` | Number of heads for the attention mechanism. If set to `None`, attention is not applied. Default is `None`. |
| `has_attn` | `bool` | Whether to use the attention mechanism. Default is `False`. |
| `use_shuffle` | `bool` | Whether to use PixelShuffle or PixelUnshuffle. Default is `False`. |
| `use_hypernet` | `bool` | Whether to use a hypernetwork. Default is `False`. |
| `hyper_mlp_cfg` | `Optional[MLPConfig]` | Hypernetwork configuration. If set to `None`, the hypernetwork is not used. Default is `None`. |

Attributes

channels instance-attribute

channels

cond_pred_scale class-attribute instance-attribute

cond_pred_scale = False

conv_cfg instance-attribute

conv_cfg

has_attn class-attribute instance-attribute

has_attn = False

hyper_mlp_cfg class-attribute instance-attribute

hyper_mlp_cfg = None

nhead class-attribute instance-attribute

nhead = None

use_hypernet class-attribute instance-attribute

use_hypernet = False

use_shuffle class-attribute instance-attribute

use_shuffle = False

Functions

__post_init__

__post_init__()

Post-initialization processing.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Post-initialization processing."""
    if self.has_attn:
        assert self.nhead is not None, "nhead must be specified when has_attn is True."
    if isinstance(self.channels, list | ListConfig):
        self.channels = tuple(self.channels)

SpatialSoftmaxConfig

SpatialSoftmaxConfig dataclass

SpatialSoftmaxConfig(temperature=1.0, eps=1e-06, is_argmax=False, is_straight_through=False, additional_layer=None)

Spatial softmax configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `temperature` | `float` | Softmax temperature. If set to `0.0`, the layer outputs the coordinates of the maximum value; otherwise, it outputs the expectation of the coordinates under the softmax. Default is `1.0`. |
| `eps` | `float` | Epsilon value for numerical stability in softmax. Default is `1e-6`. |
| `is_argmax` | `bool` | Whether to use argmax instead of softmax. Default is `False`. |
| `is_straight_through` | `bool` | Whether to use a straight-through estimator for backpropagation. Default is `False`. |
| `additional_layer` | `Optional[Union[MLPConfig, LinearConfig]]` | Additional layer configuration. If set to `None`, no additional layer is applied. Default is `None`. |
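As a sketch of the operation this config parameterizes: a spatial softmax typically normalizes a 2D activation map with a temperature-scaled softmax and returns the expected pixel coordinates. The pure-Python function below illustrates that idea only; it is not the library's implementation.

```python
import math

def spatial_soft_argmax(grid, temperature=1.0):
    """Illustrative spatial softmax over a 2D activation map (pure Python).

    A sketch of the usual "expected coordinates under a softmax" operation
    configured by SpatialSoftmaxConfig, not the library's code. Returns the
    softmax-weighted expectation of (row, col) indices.
    """
    flat = [(r, c, v) for r, row in enumerate(grid) for c, v in enumerate(row)]
    m = max(v for _, _, v in flat)                      # subtract max for stability
    weights = [math.exp((v - m) / temperature) for _, _, v in flat]
    z = sum(weights)
    row = sum(w * r for (r, _, _), w in zip(flat, weights)) / z
    col = sum(w * c for (_, c, _), w in zip(flat, weights)) / z
    return row, col

grid = [[0.0, 0.0, 0.0],
        [0.0, 0.0, 5.0],
        [0.0, 0.0, 0.0]]
# A low temperature sharpens the softmax toward the argmax location (1, 2).
row, col = spatial_soft_argmax(grid, temperature=0.05)
assert abs(row - 1) < 0.01 and abs(col - 2) < 0.01
```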

Attributes

additional_layer class-attribute instance-attribute

additional_layer = None

eps class-attribute instance-attribute

eps = 1e-06

is_argmax class-attribute instance-attribute

is_argmax = False

is_straight_through class-attribute instance-attribute

is_straight_through = False

temperature class-attribute instance-attribute

temperature = 1.0

Functions

AttentionConfig

AttentionConfig dataclass

AttentionConfig(nhead, patch_size)

Attention configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `nhead` | `int` | Number of heads. |
| `patch_size` | `int` | Patch size. |

Attributes

nhead instance-attribute

nhead

patch_size instance-attribute

patch_size

Functions

TransformerConfig

TransformerConfig dataclass

TransformerConfig(d_model, nhead, dim_ff, n_layers, dropout=0.1, hidden_activation='GELU', output_activation='GeLU')

Transformer configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `d_model` | `int` | Dimension of the model. |
| `nhead` | `int` | Number of heads. |
| `dim_ff` | `int` | Dimension of the feedforward network. |
| `n_layers` | `int` | Number of layers. |
| `dropout` | `float` | Dropout rate. Default is `0.1`. |
| `hidden_activation` | `Literal['ReLU', 'GELU']` | Activation function for the hidden layer. Default is `"GELU"`. |
| `output_activation` | `str` | Activation function for the output layer. Default is `"GeLU"`. |

Attributes

d_model instance-attribute

d_model

dim_ff instance-attribute

dim_ff

dropout class-attribute instance-attribute

dropout = 0.1

hidden_activation class-attribute instance-attribute

hidden_activation = 'GELU'

n_layers instance-attribute

n_layers

nhead instance-attribute

nhead

output_activation class-attribute instance-attribute

output_activation = 'GeLU'

Functions

AdaptiveAveragePoolingConfig

AdaptiveAveragePoolingConfig dataclass

AdaptiveAveragePoolingConfig(output_size=(1, 1), additional_layer=None)

Adaptive average pooling configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `output_size` | `Union[int, Tuple[int, ...]]` | Output size of the pooling layer. If an integer, it is used for both height and width. If a tuple, it should contain two integers for height and width. |
| `additional_layer` | `Optional[Union[MLPConfig, LinearConfig]]` | Additional layer configuration. If set to `None`, no additional layer is applied. Default is `None`. |

Attributes

additional_layer class-attribute instance-attribute

additional_layer = None

output_size class-attribute instance-attribute

output_size = (1, 1)

Functions

__post_init__

__post_init__()

Ensure output_size is a tuple.

Source code in src/ml_networks/config.py
def __post_init__(self) -> None:
    """Ensure output_size is a tuple."""
    if isinstance(self.output_size, int):
        self.output_size = (self.output_size, self.output_size)
    elif isinstance(self.output_size, list | ListConfig):
        self.output_size = tuple(self.output_size)

SoftmaxTransConfig

SoftmaxTransConfig dataclass

SoftmaxTransConfig(vector, sigma, n_ignore=0, max=1.0, min=-1.0)

Softmax transformation configuration.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `vector` | `int` | Vector size. |
| `sigma` | `float` | Sigma value. |
| `n_ignore` | `int` | Number of ignored elements. Default is `0`. |
| `max` | `float` | Maximum value. Default is `1.0`. |
| `min` | `float` | Minimum value. Default is `-1.0`. |

Attributes

max class-attribute instance-attribute

max = 1.0

min class-attribute instance-attribute

min = -1.0

n_ignore class-attribute instance-attribute

n_ignore = 0

sigma instance-attribute

sigma

vector instance-attribute

vector

Functions

ContrastiveLearningConfig

ContrastiveLearningConfig dataclass

ContrastiveLearningConfig(dim_feature, eval_func, dim_input2=None, cross_entropy_like=False)

Contrastive learning configuration.

Attributes

cross_entropy_like class-attribute instance-attribute

cross_entropy_like = False

dim_feature instance-attribute

dim_feature

dim_input2 class-attribute instance-attribute

dim_input2 = None

eval_func instance-attribute

eval_func

Functions