Commit
Improve config file for training MNIST on 14×14.
YalcinerMustafa committed Oct 18, 2024
1 parent 65c9d7b commit 9457ff9
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions experiments/mnist/mnist_0_scaled_14_linf_lognormal_gpu.yaml
@@ -28,15 +28,15 @@ experiments:
device: *device
scale_factor: 2
epochs: &epochs 200000
- patience: &patience 40
+ patience: &patience 50
batch_size: &batch_size
__eval__: tune.choice([16, 32, 64])
optim_cfg: &optim
optimizer:
__class__: torch.optim.Adam
params:
lr:
- __eval__: tune.loguniform(1e-6, 1e-4)
+ __eval__: tune.loguniform(1e-7, 1e-4)
weight_decay: 0.0
model_cfg:
type:
@@ -53,7 +53,7 @@ experiments:
coupling_layers: &coupling_layers
__eval__: tune.choice([i for i in range(3, 4)])
coupling_nn_layers: &coupling_nn_layers
- __eval__: "tune.choice([[w] * l for l in [1, 2, 3] for w in [196, 392]])" # tune.choice([[c*32, c*16, c*8, c*16, c*32] for c in [1, 2, 3, 4]] + [[c*64, c*32, c*64] for c in range(1,5)] + [[c*128] * 2 for c in range(1,5)] + [[c*256] for c in range(1,5)])
+ __eval__: "tune.choice([[w] * l for l in [1, 2, 3, 4] for w in [196, 392, 588]])"
nonlinearity: &nonlinearity
__eval__: tune.choice([torch.nn.ReLU()])
split_dim: 98
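The widened `coupling_nn_layers` expression can be expanded in plain Python to see the candidate architectures it hands to `tune.choice` (which simply picks one of these lists per trial):

```python
# Expand the new search-space expression from the diff:
# each candidate is a stack of l hidden layers, all of width w.
layer_choices = [[w] * l for l in [1, 2, 3, 4] for w in [196, 392, 588]]

print(len(layer_choices))   # 12 candidate hidden-layer stacks
print(layer_choices[0])     # [196] -- one layer of width 196
print(layer_choices[-1])    # [588, 588, 588, 588]
```

Compared with the old expression (depths 1–3, widths 196 and 392, i.e. 6 candidates), the search space doubles to 12 candidates, adding depth 4 and width 588 (3× the 196-dimensional input of a flattened 14×14 image).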
