Biscuits of Deep Learning

Manipulating the Gradient

This section covers some techniques related to gradient-descent optimization.


Gradient Clipping

A good explanation: What is Gradient Clipping?
Related paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity (ICLR’20)
Sometimes the training loss is not stable, which may be caused by the exploding-gradient problem. A simple yet effective remedy is gradient clipping.

[Figure: gradient clipping]

Implemented in PyTorch [document]:

# inside the training loop
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm_value) # add this line: rescales gradients so their global L2 norm is at most max_norm_value
optimizer.step()
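
As a side note, clip_grad_norm_ returns the total gradient norm measured before clipping, so logging it can help choose a sensible threshold (the value 1.0 below is just a hypothetical example):

# inside the training loop
loss.backward()
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) # 1.0 is a hypothetical threshold
print("grad norm before clipping:", float(total_norm)) # log it to pick a good max_norm
optimizer.step()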

Gradient Centralization (ECCV’20)

Paper: Gradient Centralization: A New Optimization Technique for Deep Neural Networks
It centralizes each weight’s gradient to have zero mean, which can speed up training and improve generalization (see the repository).

[Figure: Gradient Centralization]
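
The paper plugs this operation into standard optimizers (e.g., SGD with momentum and Adam). As a minimal sketch of the core operation itself, each weight gradient can be centralized via a gradient hook; the hook-based wiring below is my own illustration, not the official code:

# subtract from each weight gradient its mean over all dimensions except the output one
import torch

def centralize_gradient(grad):
    if grad.dim() > 1: # only weight matrices / conv kernels, not biases
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

model = torch.nn.Conv2d(3, 16, 3) # any model works here
for p in model.parameters():
    if p.dim() > 1:
        p.register_hook(centralize_gradient) # gradients are centralized before optimizer.step()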


Gradient Flooding (ICML’20)

Paper: Do We Need Zero Training Loss After Achieving Zero Training Error?
It sets a flood level b for the training loss: once the loss drops below b, the update is reversed into gradient ascent, so the training loss floats around b instead of going to zero, which helps avoid overfitting.

[Figure: flooding]

It can be implemented by adding a single line of code (PyTorch):

outputs = model(inputs)
loss = criterion(outputs, labels)
flood = (loss-b).abs()+b # This is it! (b is the flood level)
optimizer.zero_grad()
flood.backward()
optimizer.step()
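
A tiny self-contained check (my own, with a hypothetical flood level b = 0.1) shows what the line does: once the loss is below b, the flooded loss produces gradients of the opposite sign, i.e., the update turns into gradient ascent:

import torch

b = 0.1 # hypothetical flood level
w = torch.tensor(1.0, requires_grad=True)
loss = 0.05 * w # loss = 0.05 < b
flood = (loss - b).abs() + b
flood.backward()
print(w.grad) # tensor(-0.0500): opposite sign of d(loss)/dw = +0.05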

Enhancing the Basic Operations

This section focuses on the basic operations in CNNs, e.g., convolution and pooling layers.


Anti-aliasing (ICML’19)

Paper: Making Convolutional Networks Shift-Invariant Again [code]
Down-sampling a feature map with strided convolution or pooling usually causes aliasing. The paper presents a blurred-pooling (BlurPool) layer to mitigate the problem.

## installation (bash)
pip install antialiased-cnns

## usage (python)
import torch
import antialiased_cnns

C = 10 # example feature channel size
blurpool = antialiased_cnns.BlurPool(C, filt_size=4, stride=2) # BlurPool layer; use it to downsample a feature map
ex_tens = torch.Tensor(1,C,128,128)
print(blurpool(ex_tens).shape) # 1xCx64x64 tensor
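
The repository also ships anti-aliased versions of common backbones; assuming the resnet50 entry point shown in its README, a pretrained model can be loaded directly:

## drop-in backbone (python)
import antialiased_cnns
model = antialiased_cnns.resnet50(pretrained=True) # ResNet-50 with BlurPool down-sampling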

Learning Rate

This section includes some useful learning-rate strategies, e.g., schedulers and optimizers.


Cosine Scheduler (ICLR’17)

Paper: SGDR: Stochastic Gradient Descent with Warm Restarts
The learning rate follows a cosine-annealing schedule with warm restarts. An implementation (CosineAnnealingWarmUpRestarts) can be found at the Link, which originally comes from GitHub.

Usage:

# warm_up_with_cosine_lr
import torch.optim as optim

model = ...
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5) # lr is the min lr
# T_0: epochs in the first cycle, T_mult: cycle-length multiplier,
# eta_max: peak lr, T_up: linear warm-up epochs (per the linked implementation)
scheduler = CosineAnnealingWarmUpRestarts(optimizer, T_0=250, T_mult=2, eta_max=0.1, T_up=50)
for epoch in range(n_epoch):
    train()
    valid()
    scheduler.step()

Another example: CosineAnnealingWarmUpRestarts(optimizer, T_0=150, T_mult=1, eta_max=0.1, T_up=10, gamma=0.5), where gamma shrinks the peak learning rate after each restart (per the linked implementation).
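
If the linear warm-up is not needed, PyTorch’s built-in CosineAnnealingWarmRestarts scheduler already covers the restart mechanism; a minimal sketch mirroring the loop above (note that here the optimizer’s lr is the maximum and eta_min the minimum):

# cosine annealing with warm restarts, built into PyTorch (no warm-up phase)
import torch.optim as optim
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9) # lr is the max lr here
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=250, T_mult=2, eta_min=1e-5)
for epoch in range(n_epoch):
    train()
    valid()
    scheduler.step()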
