Utilities¶
Provides utility functions.
Common utilities (ml_networks.utils)¶
save_blosc2¶
Save a numpy array with blosc2 compression.
Args:
path (str): Path to save to.
x (np.ndarray): Numpy array to save.
Source code in src/ml_networks/utils.py
load_blosc2¶
Load a numpy array saved with blosc2 compression.
Args:
path (str): Path to load from.
Returns:
| Type | Description |
|---|---|
| ndarray | Numpy array. |
Source code in src/ml_networks/utils.py
determine_loader¶
Create a DataLoader with a fixed random seed.
Args:
data (Dataset): Dataset to load.
seed (int): Random seed.
batch_size (int): Batch size.
shuffle (bool): Whether to shuffle the data. Default is True.
collate_fn (callable): Collate function. Default is None.
Returns:
| Type | Description |
|---|---|
| DataLoader | DataLoader with a fixed seed. |
Source code in src/ml_networks/utils.py
seed_worker¶
Fix the random seed of each DataLoader worker.
Fixing DataLoader randomness also requires fixing the generator passed to the DataLoader.
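The standard PyTorch reproducibility recipe these two helpers presumably follow can be sketched as below; the function names come from this module, but the bodies are assumptions:

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_worker(worker_id: int) -> None:
    # Derive a per-worker seed from the DataLoader's base seed so that
    # numpy/random calls inside workers are reproducible too.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)


def determine_loader(data, seed: int, batch_size: int,
                     shuffle: bool = True, collate_fn=None) -> DataLoader:
    # The generator drives the shuffle; seeding it fixes the batch order.
    g = torch.Generator()
    g.manual_seed(seed)
    return DataLoader(
        data,
        batch_size=batch_size,
        shuffle=shuffle,
        collate_fn=collate_fn,
        worker_init_fn=seed_worker,
        generator=g,
    )


ds = TensorDataset(torch.arange(10).float())
run1 = [batch[0].tolist() for batch in determine_loader(ds, seed=0, batch_size=5)]
run2 = [batch[0].tolist() for batch in determine_loader(ds, seed=0, batch_size=5)]
assert run1 == run2  # same seed, same batch order
```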
conv_out¶
Calculate the output size of a convolutional layer.
Args:
h_in (int): Input size.
padding (int): Padding size.
kernel_size (int): Kernel size.
stride (int): Stride size.
dilation (int): Dilation size. Default is 1.
Returns:
| Type | Description |
|---|---|
| int | Output size. |
Source code in src/ml_networks/utils.py
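The computation is the standard PyTorch Conv output-size formula; a sketch of what this helper likely evaluates:

```python
import math


def conv_out(h_in: int, padding: int, kernel_size: int,
             stride: int, dilation: int = 1) -> int:
    # h_out = floor((h_in + 2p - d(k - 1) - 1) / s + 1), as in torch.nn.Conv1d/2d docs.
    return math.floor((h_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)


assert conv_out(32, 1, 3, 1) == 32  # "same" convolution keeps the size
assert conv_out(32, 1, 3, 2) == 16  # stride 2 halves it
```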
conv_transpose_out¶
Calculate the output size of a transposed convolutional layer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| h_in | int | Input size. | required |
| padding | int | Padding size. | required |
| kernel_size | int | Kernel size. | required |
| stride | int | Stride size. | required |
| dilation | int | Dilation size. Default is 1. | 1 |
| output_padding | int | Output padding size. Default is 0. | 0 |
Returns:
| Type | Description |
|---|---|
| int | Output size. |
Examples:
>>> conv_transpose_out(32, 1, 3, 1)
32
>>> conv_transpose_out(32, 1, 3, 2)
63
>>> conv_transpose_out(32, 1, 3, 1, 2)
34
>>> conv_transpose_out(32, 1, 3, 1, 1, 1)
33
Source code in src/ml_networks/utils.py
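The closed form behind these doctests is the standard PyTorch ConvTranspose output-size formula; a sketch:

```python
def conv_transpose_out(h_in: int, padding: int, kernel_size: int, stride: int,
                       dilation: int = 1, output_padding: int = 0) -> int:
    # h_out = (h_in - 1) * s - 2p + d(k - 1) + output_padding + 1,
    # as in the torch.nn.ConvTranspose1d/2d docs.
    return (h_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1


assert conv_transpose_out(32, 1, 3, 1) == 32
assert conv_transpose_out(32, 1, 3, 2) == 63
assert conv_transpose_out(32, 1, 3, 1, 2) == 34
assert conv_transpose_out(32, 1, 3, 1, 1, 1) == 33
```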
output_padding¶
Calculate the output padding of a transposed convolutional layer.
Args:
h_in (int): Input size.
h_out (int): Output size.
padding (int): Padding size.
kernel_size (int): Kernel size.
stride (int): Stride size.
dilation (int): Dilation size. Default is 1.
Returns:
| Type | Description |
|---|---|
| int | Output padding size. |
Examples:
>>> output_padding(32, 32, 1, 3, 1)
0
>>> output_padding(32, 16, 1, 3, 2)
1
>>> output_padding(32, 30, 1, 3, 1, 2)
0
Source code in src/ml_networks/utils.py
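Judging from the doctests, the helper returns the output padding a ConvTranspose layer needs so that an input of size h_out is mapped back to the original size h_in; a sketch consistent with those values:

```python
def output_padding(h_in: int, h_out: int, padding: int, kernel_size: int,
                   stride: int, dilation: int = 1) -> int:
    # The transpose of a conv that mapped h_in -> h_out should recover h_in:
    # output_padding = h_in - ((h_out - 1) * s - 2p + d(k - 1) + 1).
    return h_in - ((h_out - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + 1)


assert output_padding(32, 32, 1, 3, 1) == 0
assert output_padding(32, 16, 1, 3, 2) == 1
assert output_padding(32, 30, 1, 3, 1, 2) == 0
```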
PyTorch-specific utilities (ml_networks.torch.torch_utils)¶
get_optimizer¶
Get an optimizer from torch.optim or pytorch_optimizer.
Args:
param (Iterator[nn.Parameter]): Parameters of the model to optimize.
name (str): Optimizer name.
kwargs (dict): Optimizer arguments (settings).
Returns:
| Type | Description |
|---|---|
| Optimizer | Optimizer instance. |
Examples:
>>> get_optimizer([nn.Parameter(torch.randn(1, 3))], "Adam", lr=0.01)
Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    capturable: False
    differentiable: False
    eps: 1e-08
    foreach: None
    fused: None
    lr: 0.01
    maximize: False
    weight_decay: 0
)
Source code in src/ml_networks/torch/torch_utils.py
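A name-based lookup like this is typically implemented with getattr; the sketch below covers only the torch.optim half (the real helper also falls back to the pytorch_optimizer package, which is omitted here):

```python
import torch
from torch import nn


def get_optimizer(param, name: str, **kwargs) -> torch.optim.Optimizer:
    # Resolve the optimizer class by name and instantiate it with the given settings.
    return getattr(torch.optim, name)(param, **kwargs)


opt = get_optimizer([nn.Parameter(torch.randn(1, 3))], "Adam", lr=0.01)
assert isinstance(opt, torch.optim.Adam)
assert opt.defaults["lr"] == 0.01
```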
torch_fix_seed¶
Fix the random seeds.
References
- https://qiita.com/north_redwing/items/1e153139125d37829d2d
Source code in src/ml_networks/torch/torch_utils.py
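The recipe described in the linked reference seeds every common RNG source at once; a sketch of what such a helper typically does (the exact body in this module may differ):

```python
import os
import random

import numpy as np
import torch


def torch_fix_seed(seed: int = 42) -> None:
    # Seed Python, numpy, and torch (CPU and all CUDA devices),
    # and force cuDNN into deterministic mode.
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


torch_fix_seed(0)
a = torch.randn(3)
torch_fix_seed(0)
assert torch.equal(torch.randn(3), a)  # same seed reproduces the same draw
```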
gumbel_softmax¶
Gumbel softmax function with temperature. This prevents overflow and underflow.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | Tensor | Input tensor. | required |
| dim | int | Dimension to apply softmax along. | required |
| temperature | float | Temperature. Default is 1.0. | 1.0 |
Returns:
| Type | Description |
|---|---|
| Tensor | Gumbel-softmaxed tensor. |
Raises:
| Type | Description |
|---|---|
| ValueError | If the gumbel_softmax output is inf or nan. |
Source code in src/ml_networks/torch/torch_utils.py
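The technique can be illustrated in NumPy: add Gumbel(0, 1) noise to the logits, then apply a max-shifted softmax so the exponentials never overflow. This is a sketch of the idea, not the module's implementation:

```python
import numpy as np


def gumbel_softmax_np(inputs, axis=-1, temperature=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Sample Gumbel(0, 1) noise via the inverse-CDF trick; keep u away from 0.
    u = rng.uniform(low=np.finfo(float).tiny, high=1.0, size=inputs.shape)
    g = -np.log(-np.log(u))
    z = (inputs + g) / temperature
    z = z - z.max(axis=axis, keepdims=True)  # shift so the largest exponent is 0
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


# Even with extreme logits the result stays finite and normalized.
p = gumbel_softmax_np(np.array([[1e3, -1e3, 0.0]]), temperature=0.5,
                      rng=np.random.default_rng(0))
assert np.isfinite(p).all() and np.isclose(p.sum(), 1.0)
```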
softmax¶
Softmax function with temperature. This prevents overflow and underflow.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | Tensor | Input tensor. | required |
| dim | int | Dimension to apply softmax along. | required |
| temperature | float | Temperature. Default is 1.0. | 1.0 |
Returns:
| Type | Description |
|---|---|
| Tensor | Softmaxed tensor. |
Raises:
| Type | Description |
|---|---|
| ValueError | If the softmax output is inf or nan. |
Source code in src/ml_networks/torch/torch_utils.py
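The overflow/underflow protection is the classic max-subtraction trick; a NumPy sketch of the technique (an illustration, not this module's code):

```python
import numpy as np


def softmax_np(inputs, axis=-1, temperature=1.0):
    z = inputs / temperature
    z = z - z.max(axis=axis, keepdims=True)  # largest exponent becomes 0: no overflow
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


# A naive exp(1e4) would overflow; the shifted version stays finite.
p = softmax_np(np.array([1e4, 0.0, -1e4]))
assert np.isfinite(p).all() and np.isclose(p.sum(), 1.0)
```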
MinMaxNormalize¶
Bases: Normalize
Min-max normalization transform.
Initialize MinMaxNormalize.
Args:
min_val (float): New minimum value.
max_val (float): New maximum value.
old_min (float): Original minimum value.
old_max (float): Original maximum value.
Source code in src/ml_networks/torch/torch_utils.py
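Given those four arguments, min-max normalization is a single affine rescaling from [old_min, old_max] to [min_val, max_val]; a sketch of the underlying formula (the class itself operates on tensors):

```python
def minmax_normalize(x: float, min_val: float, max_val: float,
                     old_min: float, old_max: float) -> float:
    # Map [old_min, old_max] linearly onto [min_val, max_val].
    return (x - old_min) / (old_max - old_min) * (max_val - min_val) + min_val


assert minmax_normalize(5.0, 0.0, 1.0, 0.0, 10.0) == 0.5    # midpoint -> midpoint
assert minmax_normalize(0.0, -1.0, 1.0, 0.0, 255.0) == -1.0  # old min -> new min
```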
SoftmaxTransformation¶
Softmax transformation class.
Initialize SoftmaxTransformation.
Args:
cfg (SoftmaxTransConfig): SoftmaxTransformation settings.
Source code in src/ml_networks/torch/torch_utils.py
Attributes¶
Functions¶
__call__¶
get_transformed_dim¶
inverse¶
Inverse of the SoftmaxTransformation.
Args:
x (torch.Tensor): Input tensor.
Returns:
| Type | Description |
|---|---|
| Tensor | Output tensor. |
Source code in src/ml_networks/torch/torch_utils.py
transform¶
Apply the SoftmaxTransformation.
Args:
x (torch.Tensor): Input tensor.
Returns:
| Type | Description |
|---|---|
| Tensor | Output tensor. |
Examples:
>>> trans = SoftmaxTransformation(SoftmaxTransConfig(vector=16, sigma=0.01, n_ignore=1, min=-1.0, max=1.0))
>>> x = torch.randn(2, 3, 4)
>>> transformed = trans(x)
>>> transformed.shape
torch.Size([2, 3, 49])
>>> trans = SoftmaxTransformation(SoftmaxTransConfig(vector=11, sigma=0.05, n_ignore=0, min=-1.0, max=1.0))
>>> x = torch.randn(2, 3, 4)
>>> transformed = trans(x)
>>> transformed.shape
torch.Size([2, 3, 44])
Source code in src/ml_networks/torch/torch_utils.py