DANet: Dual Attention Network for Scene Segmentation
Abstract The paper introduces a position attention module and a channel attention module to capture global dependencies in the spatial and channel dimensions, respectively. The proposed DANet adaptively integrates local semantic features using the self-attention mechanism. Outline Brief Review: attention mechanism, SE net DANet: Dual Attention Network…
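For reference, the position attention idea can be sketched roughly as follows. This is a minimal PyTorch sketch, not the authors' released code; the query/key channel reduction to C/8 and the learnable residual weight gamma are assumptions following the paper's description.

```python
# Minimal sketch of a position (spatial) attention module, assuming PyTorch.
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # (B, HW, C')
        k = self.key(x).view(b, -1, h * w)                     # (B, C', HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)          # (B, HW, HW) spatial affinities
        v = self.value(x).view(b, -1, h * w)                   # (B, C, HW)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                            # residual connection
```

The channel attention module follows the same pattern but computes the affinity matrix directly between channel maps instead of spatial positions.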
Executable Python Program in Windows
Background Recently, I have been working on a demo program that runs on the Windows OS. However, most of my recent work is based on the Python language. I really like the simplicity of the language and don't want to go back to using C++ and MFC. Therefore, in this…
Biscuits of Deep Learning
Manipulate Gradient This section covers some techniques related to gradient descent optimization. Gradient Clipping A good explanation: What is Gradient Clipping? Related paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity (ICLR'2020) Sometimes the training loss is not stable; this may be caused by the exploding gradient problem. A simple…
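As an illustration, here is a minimal sketch of gradient clipping inside a PyTorch training step; the model, loss function, data batch, and the threshold max_norm=1.0 are placeholder assumptions.

```python
# Minimal sketch of gradient clipping in a PyTorch training step.
import torch

def train_step(model, batch, loss_fn, optimizer, max_norm=1.0):
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Rescale gradients so their global norm does not exceed max_norm,
    # which helps when the loss is unstable due to exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```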
[Revised] Basic Theory of Convolutional Neural Network
Outline In this presentation, we present the DNN and CNN structures. We briefly review some famous CNN works (structure, improvements, performance). We discuss the difficulty of training DNNs and show a good solution (ResNet). Revised Version [2021-06-25] A revised version can be downloaded from Download_Link Slides [embeddoc url="https://liwen.site/wp-content/uploads/2020/07/Deep-Learning-2-CNN_V12.pptx" download="all" viewer="microsoft"]
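To make the ResNet idea concrete, here is a minimal sketch of a basic residual block; it assumes PyTorch and equal input/output channel counts, and is not taken from the slides.

```python
# Minimal sketch of a ResNet basic block with an identity shortcut.
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut eases training of deep networks
```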
[Revised] Basic Theory of Neural Network
Content In the document, we present: Each neural network (NN) consists of many neurons, and each neuron has two elements: a linear function and an activation function. We explain the reason for using activation functions (and the choices of activation functions). Two numerical examples are given to track the forward and backward…
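In the same spirit as those numerical examples, here is a minimal sketch of one neuron (linear function plus activation) with a forward and backward pass; the input, weights, and sigmoid activation are made-up illustrative values, not those from the document.

```python
# Minimal numerical sketch: forward and backward pass through a single neuron.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 2.0])       # example input
w = np.array([0.5, -0.3])      # example weights
b = 0.1                        # example bias
y_true = 1.0

# Forward: linear function followed by activation
z = w @ x + b
y = sigmoid(z)
loss = 0.5 * (y - y_true) ** 2

# Backward: chain rule through the loss, the activation, and the linear function
dloss_dy = y - y_true
dy_dz = y * (1.0 - y)          # derivative of the sigmoid
grad_w = dloss_dy * dy_dz * x
grad_b = dloss_dy * dy_dz
print(loss, grad_w, grad_b)
```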
Neural Style Transfer via Meta Networks
Outline Introduction Style transfer Content-perceptual loss Style-perceptual loss Example Proposed Method (simple but inspirational) Arbitrary style transfer Meta networks for arbitrary style transfer Experiments Conclusion Slides [pdf-embedder url="https://liwen.site/wp-content/uploads/2020/06/20200116_Meta-Network-for-Neural-Style-Transfer.pdf" title="20200116_Meta Network for Neural Style Transfer"]
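For the content- and style-perceptual losses named in the outline, a minimal sketch is given below; it assumes PyTorch feature maps and a Gram-matrix style loss, and leaves out the pretrained feature extractor (e.g. VGG) as an assumption.

```python
# Minimal sketch of content- and style-perceptual losses over feature maps.
import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel correlations
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def content_loss(feat_generated, feat_content):
    return torch.mean((feat_generated - feat_content) ** 2)

def style_loss(feat_generated, feat_style):
    return torch.mean((gram_matrix(feat_generated) - gram_matrix(feat_style)) ** 2)
```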
Experiment Control
Prepare the environment for experiment control Tutorial from https://shenxiaohai.me/2019/01/17/sacred-tool/ Installation To install Sacred at the client (e.g. a conda environment): pip install sacred pip install numpy pymongo Server: database # 1. Import the public key used by the package management system. wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add - #…
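Once Sacred and MongoDB are in place, a minimal experiment might look like the sketch below; the experiment name, database URL, and config values are assumptions for illustration.

```python
# Minimal sketch of a Sacred experiment logged to a local MongoDB instance.
from sacred import Experiment
from sacred.observers import MongoObserver

ex = Experiment('demo_experiment')
ex.observers.append(MongoObserver(url='mongodb://localhost:27017', db_name='sacred'))

@ex.config
def config():
    lr = 0.01          # hyper-parameters tracked by Sacred
    epochs = 10

@ex.automain
def run(lr, epochs):
    for epoch in range(epochs):
        loss = 1.0 / (epoch + 1)              # placeholder for real training
        ex.log_scalar('loss', loss, epoch)    # metric stored with the run
    return loss
```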
Useful tools
Auto notification of modification of a web page Select an area and relax: it will send an email alert when something changes. From: https://visualping.io/
Sentences in Paper Writing
Feature Vocabulary: boost the representation power A feature aggregation strategy is proposed to propagate information from early stages to later ones. -- (Li et al., 2019) “Rethinking on Multi-Stage Networks for Human Pose Estimation” A multi-stage network is vulnerable to information loss during repeated up- and down-sampling.…