Others¶
Provides miscellaneous classes and functions.
BaseModule¶
A base class extending PyTorch Lightning's LightningModule.
BaseModule ¶
HyperNet¶
A meta-learning module in which one network (the HyperNet) dynamically generates the weights of another network (the TargetNet).
HyperNet ¶
Bases: LightningModule, HyperNetMixin
A hypernetwork that generates weights for a target network.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | The dimension of the input. | required |
| output_shapes | dict[str, Tuple[int, ...]] | The shapes of the primary network weights being predicted. | required |
| mlp_cfg | MLPConfig | Configuration for the MLP backbone. | required |
| encoding | | The input encoding mode. | None |
Examples:
>>> import torch
>>> from torch import nn
>>> from ml_networks.config import MLPConfig, LinearConfig
>>> input_dim = 10
>>> cond_dim = 128
>>> target_net = nn.Linear(10, 5)
>>> output_params = target_net.state_dict()
>>> mlp_cfg = MLPConfig(
... hidden_dim=64,
... n_layers=2,
... output_activation="ReLU",
... linear_cfg=LinearConfig(
... activation="ReLU",
... )
... )
>>> net = HyperNet(cond_dim, output_params, mlp_cfg)
>>> condition = torch.randn(2, cond_dim)
>>> x = torch.randn(2, input_dim)
>>> param = net(condition)
>>> outputs = torch.func.functional_call(target_net, param, x)
>>> outputs.shape
torch.Size([2, 5])
Source code in src/ml_networks/torch/hypernetworks.py
Attributes¶
Functions¶
forward ¶
Perform a forward pass of the neural network.
Args: inputs: The input tensors.
Returns:
| Type | Description |
|---|---|
| dict[str, Tensor] | A dictionary of output tensors. |
Examples:
>>> import torch
>>> from ml_networks.config import MLPConfig, LinearConfig
>>> input_dim = 10
>>> output_shapes = {"weight": (5, 10), "bias": (5,)}
>>> mlp_cfg = MLPConfig(
... hidden_dim=64,
... n_layers=2,
... output_activation="ReLU",
... linear_cfg=LinearConfig(
... activation="ReLU",
... norm="none",
... dropout=0.0,
... bias=True
... )
... )
>>> net = HyperNet(input_dim, output_shapes, mlp_cfg)
>>> x = torch.randn(2, input_dim)
>>> outputs = net(x)
>>> outputs["weight"].shape
torch.Size([2, 5, 10])
>>> outputs["bias"].shape
torch.Size([2, 5])
Source code in src/ml_networks/torch/hypernetworks.py
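The forward pass returns one tensor per entry in output_shapes. The core bookkeeping — slicing a single flat MLP output into named, correctly sized chunks — can be sketched in plain Python. This is a conceptual sketch only; the actual slicing and reshaping lives in src/ml_networks/torch/hypernetworks.py, and `split_flat` is a hypothetical name:

```python
from math import prod

def split_flat(flat, output_shapes):
    # Slice one flat parameter vector into named chunks sized to match
    # output_shapes; on tensors, each chunk would then be reshaped to `shape`.
    out, offset = {}, 0
    for name, shape in output_shapes.items():
        size = prod(shape)
        out[name] = flat[offset:offset + size]
        offset += size
    return out

chunks = split_flat(list(range(55)), {"weight": (5, 10), "bias": (5,)})
print(len(chunks["weight"]), len(chunks["bias"]))  # 50 5
```

A (5, 10) weight consumes the first 50 entries and the (5,)-shaped bias the remaining 5, which matches the per-key output shapes shown in the example above.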
ContrastiveLearningLoss¶
A loss-function module for contrastive learning.
ContrastiveLearningLoss ¶
Bases: LightningModule
Contrastive learning module.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cfg | ContrastiveLearningConfig | Configuration for contrastive learning. | required |
Examples:
>>> import torch
>>> from ml_networks.config import ContrastiveLearningConfig, MLPConfig, LinearConfig
>>> cfg = ContrastiveLearningConfig(
... dim_feature=128,
... dim_input1=256,
... dim_input2=256,
... eval_func=MLPConfig(
... hidden_dim=256,
... n_layers=2,
... output_activation="ReLU",
... linear_cfg=LinearConfig(
... activation="ReLU",
... norm="layer",
... norm_cfg={"eps": 1e-05, "elementwise_affine": True, "bias": True},
... dropout=0.1,
... norm_first=False,
... bias=True
... )
... ),
... cross_entropy_like=False
... )
>>> model = ContrastiveLearningLoss(cfg)
>>> x1 = torch.randn(2, 256)
>>> x2 = torch.randn(2, 256)
>>> output = model.calc_nce(x1, x2)
>>> output["nce"].shape
torch.Size([])
>>> output, embeddings = model.calc_nce(x1, x2, return_emb=True)
>>> embeddings[0].shape, embeddings[1].shape
(torch.Size([2, 128]), torch.Size([2, 128]))
Source code in src/ml_networks/torch/contrastive.py
Attributes¶
Functions¶
calc_nce ¶
Calculate the Noise Contrastive Estimation (NCE) loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| feature1 | Tensor | First input tensor of shape (*, dim_input1). | required |
| feature2 | Tensor | Second input tensor of shape (*, dim_input2). | required |
| return_emb | bool | Whether to return embeddings, by default False. | False |
Returns:
| Type | Description |
|---|---|
| Union[dict[str, Tensor], Tuple[dict[str, Tensor], Tuple[Tensor, Tensor]]] | If return_emb is False, returns the loss dictionary. If return_emb is True, returns (loss dictionary, (embeddings1, embeddings2)). |
Source code in src/ml_networks/torch/contrastive.py
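For intuition, the InfoNCE objective behind calc_nce can be sketched in plain Python: each pair (i, i) in the batch is a positive and every other pair a negative, scored by softmax cross-entropy over pairwise similarities. This is a conceptual sketch only — the library version adds projection heads (eval_func), batched tensor math, and its own normalization; `info_nce` is a hypothetical name:

```python
import math

def info_nce(emb1, emb2, temperature=1.0):
    # For each row i of emb1, classify which row of emb2 is its match:
    # the diagonal pair (i, i) is the positive, all others are negatives.
    n = len(emb1)
    sims = [[sum(a * b for a, b in zip(emb1[i], emb2[j])) / temperature
             for j in range(n)] for i in range(n)]
    loss = 0.0
    for i in range(n):
        log_denom = math.log(sum(math.exp(s) for s in sims[i]))
        loss += -(sims[i][i] - log_denom)  # -log softmax at the positive index
    return loss / n

emb1 = [[1.0, 0.0], [0.0, 1.0]]
emb2 = [[1.0, 0.0], [0.0, 1.0]]
print(round(info_nce(emb1, emb2), 4))  # 0.3133
```

With perfectly aligned embeddings the loss is small but nonzero, since the softmax still spreads some probability mass over the negatives.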
calc_sigmoid ¶
Calculate the Sigmoid loss for contrastive learning.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| feature1 | Tensor | First input tensor of shape (*, dim_input1). | required |
| feature2 | Tensor | Second input tensor of shape (*, dim_input2). | required |
| return_emb | bool | Whether to return embeddings, by default False. | False |
Returns:
| Type | Description |
|---|---|
| Union[dict[str, Tensor], Tuple[dict[str, Tensor], Tuple[Tensor, Tensor]]] | If return_emb is False, returns the loss dictionary. If return_emb is True, returns (loss dictionary, (embeddings1, embeddings2)). |
Source code in src/ml_networks/torch/contrastive.py
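A sigmoid contrastive loss (as in SigLIP-style formulations) treats every (i, j) pair as an independent binary classification, with positives on the diagonal. A minimal pure-Python sketch, assuming plain dot-product similarity with no learned temperature or bias — the library's calc_sigmoid may differ in those details, and `sigmoid_contrastive` is a hypothetical name:

```python
import math

def sigmoid_contrastive(emb1, emb2):
    # Every (i, j) pair is scored independently with a log-sigmoid loss:
    # label +1 when i == j (matching pair), -1 otherwise.
    n = len(emb1)
    total = 0.0
    for i in range(n):
        for j in range(n):
            z = sum(a * b for a, b in zip(emb1[i], emb2[j]))
            y = 1.0 if i == j else -1.0
            total += math.log(1.0 + math.exp(-y * z))  # softplus(-y * z)
    return total / (n * n)

emb1 = [[1.0, 0.0], [0.0, 1.0]]
emb2 = [[1.0, 0.0], [0.0, 1.0]]
print(round(sigmoid_contrastive(emb1, emb2), 4))  # 0.5032
```

Unlike the softmax-based NCE loss, each pair contributes independently, so the loss does not require normalizing over the whole batch.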
calc_timeseries_nce ¶
calc_timeseries_nce(feature1, feature2, positive_range_self=0, positive_range_tgt=0, return_emb=False)
Calculate the Noise Contrastive Estimation (NCE) loss for time series data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| feature1 | Tensor | First input tensor of shape (*batch, length, dim_input1). | required |
| feature2 | Tensor | Second input tensor of shape (*batch, length, dim_input2). | required |
| positive_range_self | int | Range for self-positive samples, by default 0. | 0 |
| positive_range_tgt | int | Range for target-positive samples, by default 0. | 0 |
| return_emb | bool | Whether to return embeddings, by default False. | False |
Returns:
| Type | Description |
|---|---|
| Union[dict[str, Tensor], Tuple[dict[str, Tensor], Tuple[Tensor, Tensor]]] | If return_emb is False, returns the loss dictionary. If return_emb is True, returns (loss dictionary, (embeddings1, embeddings2)). |
Source code in src/ml_networks/torch/contrastive.py
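One plausible reading of the positive_range parameters — an assumption; the exact semantics live in the source in src/ml_networks/torch/contrastive.py — is that timesteps within ±range of index t count as positives rather than negatives, which avoids penalizing temporally adjacent, highly correlated frames. A sketch of such a mask (`positive_mask` is a hypothetical name):

```python
def positive_mask(length, positive_range):
    # True where |i - j| <= positive_range: timesteps that close to t
    # are treated as positives instead of negatives.
    return [[abs(i - j) <= positive_range for j in range(length)]
            for i in range(length)]

for row in positive_mask(4, 1):
    print([int(v) for v in row])
# [1, 1, 0, 0]
# [1, 1, 1, 0]
# [0, 1, 1, 1]
# [0, 0, 1, 1]
```

With positive_range=0 the mask reduces to the diagonal, recovering the standard per-timestep NCE pairing.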
ProgressBarCallback¶
A Rich progress bar callback for PyTorch Lightning.
ProgressBarCallback ¶
Bases: RichProgressBar
Make the progress bar richer.
References
- https://qiita.com/akihironitta/items/edfd6b29dfb67b17fb00
Rich progress bar with custom theme.