Python Environment and Basics

System Preparation

Install NVIDIA RTX 2080 Ti (driver + CUDA)

# if you need to remove the old driver
sudo apt-get purge nvidia*
sudo apt autoremove

# add the PPA
sudo add-apt-repository ppa:graphics-drivers
sudo apt-get update
# install driver version 440
sudo apt install nvidia-driver-440
sudo reboot

# for Ubuntu 18.04: install CUDA 10.1 with the runfile installer
cd ~/Downloads
wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.168_418.67_linux.run
sudo sh cuda_10.1.168_418.67_linux.run
# if you need to remove CUDA
sudo apt-get purge nvidia-cuda-toolkit
sudo apt-get purge --auto-remove nvidia-cuda-toolkit

# for Ubuntu 20.04, you can directly install the CUDA toolkit with the following commands
sudo apt update
sudo apt install nvidia-cuda-toolkit

Check CUDA version

  • Go to the CUDA folder

    cd /usr/local/cuda/
  • Open "version.txt"

    vi version.txt
  • Exit
    Press {ESC}, then type {:} {q} and press {Enter} to exit

    Solve Error: NVCC is not found

    In a conda environment, check which nvcc is being used:
    which nvcc
    # e.g. /path/to/your/miniconda3/envs/[envname]/bin/

If nvcc is not found there, add the system CUDA installation to the environment:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
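The CUDA and cuDNN versions can also be cross-checked from Python via PyTorch (a minimal sketch; note this reports the toolkit PyTorch was built against, not necessarily the system-wide installation):

import torch
print(torch.version.cuda)               # CUDA version PyTorch was built with, e.g. '10.1'
print(torch.backends.cudnn.version())   # cuDNN version as an integer, e.g. 7603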

GPU usage

watch -d -n 0.5 nvidia-smi

To find the processes that are using the GPU:

sudo fuser -v /dev/nvidia*
# kill it
kill -9 [PID NUMBER]

Control GPU Visibility of PyTorch

CUDA_VISIBLE_DEVICES=1,2 python myscript.py

Check the number of visible GPUs in Python

import torch
torch.cuda.device_count()
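A common follow-up (a minimal sketch, not tied to any particular project) is to pick a device based on availability and move tensors or models to it:

import torch

# use the first visible GPU if available, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
x = torch.randn(2, 3).to(device)
print(device, x.device)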

Speed up the CPU with the high-performance mode

The default CPU governor on Ubuntu is designed to balance energy use and performance, which limits the CPU clock speed. To speed up the CPU, switch it to the "performance" governor.
Just use (with sudo permission):

echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Check whether all processors are now under the "performance" mode (previously they were under the "powersave" mode):

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Real-time CPU frequency monitoring

lscpu # check whether the running frequency reaches the maximum value
watch -n 1 "cat /proc/cpuinfo | grep \"^[c]pu MHz\""

Permanently change to "performance" mode

sudo apt-get install sysfsutils
# edit the file: /etc/sysfs.conf
sudo vim /etc/sysfs.conf

# add the following line
devices/system/cpu/cpu0/cpufreq/scaling_governor = performance

Visualization of the training process via TensorBoard

Record loss during training

# if using PyTorch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir=None, comment='HBPN')

# visualize the model graph (Encoder is the model instance)
dummy_input = torch.rand(32, 3, 48, 48)
writer.add_graph(Encoder, (dummy_input,))

# record losses at each epoch
writer.add_scalars('Loss/Epoch', {'epoch_loss':epoch_loss,
                                      'epoch_stage_1':epoch_stage_1,
                                      'epoch_stage_2':epoch_stage_2,
                                      }, epoch)
writer.close()
#-----------------------------------------------------------------------

# if using TensorFlow 1.x :: revise it later
import tensorflow as tf
writer = tf.summary.FileWriter('/save_path/', graph)

# record losses at each epoch
writer.add_summary(summary, i)

Visualization

# open terminal and go to the project directory
tensorboard --logdir=runs
# open the returned link 

Docker

NVIDIA CUDA Docker image tags:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags

# Docker management UI (Portainer)
docker pull portainer/portainer
docker volume create portainer_data
docker run -d -p 9000:9000 --restart=always --name portainer -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

# forward the container GUI to the host display (X11)
sudo nvidia-docker run --rm -ti --ipc=host --volume=$HOME/.Xauthority:/root/.Xauthority:rw --net=host --env=DISPLAY --name Liwen_Demo -v ~/Demo_Liwen:/Demo_Liwen demo_liwen_v1

# save a container as an image, then export / load it as a tar file
docker commit [container]  mymysql:v1
docker save [image] > busybox.tar
docker load < busybox.tar

Virtual Environment

Python Environment: Conda Installation and Basic Usage

# Install Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh

# Create a new environment, activation and deactivation
conda create -n myenv python=3.5
conda activate myenv
conda deactivate

# show created conda environments
conda env list

# In a virtual environment, show the path of python
which python

# List installed packages
conda list

# Remove an environment
conda env remove --name myenv

# export conda environment
conda list --explicit > requirements.txt
## import it to a new host
conda create --name NEW_NAME --file requirements.txt

# output the env's dependency to requirements.txt
pip freeze > requirements.txt
# import it to a new host
pip install -r requirements.txt

# to export both conda and pip dependencies together, check:
# https://gist.github.com/gwerbin/dab3cf5f8db07611c6e0aeec177916d8

# to use conda inside a bash script, it needs to be initialized first
#!/bin/bash
export PATH=/root/miniconda3/bin/:$PATH
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"

Install PyTorch

Installing PyTorch directly from the conda channel is very slow in Hong Kong; it is much faster through the pip source.
The following commands create a new environment (python=3.6) and install PyTorch 1.6 with CUDA 10.1.

conda create -n pytorch_16 python=3.6
conda activate pytorch_16
conda install cudatoolkit=10.1
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

Install TensorFlow 1.x with GPU

Take TensorFlow 1.14 for example: it relies on CUDA 10.0 and cuDNN 7.4 (7.6 also works). For the dependencies of other TensorFlow versions, check the link.
Before installation, check whether the GPU driver supports CUDA 10 (use "nvidia-smi" to see the driver version and the supported CUDA version).

# Create a new environment, and activate it
conda create -n tf14 python=3.7
conda activate tf14

conda install cudatoolkit=10.0
conda install -c anaconda cudnn
conda install tensorflow-gpu=1.14

# test the GPU visibility
python
import tensorflow as tf
tf.test.is_gpu_available() # to check whether there is a GPU device

Python Package Control and Basic Usage

Set an environment variable in Python

import os
os.environ["VARIABLE_NAME"] = "VALUE"
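To read the variable back with a fallback when it is unset, os.environ.get can be used (the default value below is just a placeholder):

value = os.environ.get("VARIABLE_NAME", "DEFAULT_VALUE")  # DEFAULT_VALUE is a placeholder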

Install some packages

# ImportError: No module named 'past'
pip install future

# Install OpenCV
pip install opencv-python 

# Install skimage
pip install scikit-image

# AttributeError: module 'scipy.misc' has no attribute 'imread'
conda install scipy=1.2 # scipy 1.2 does not support python>=3.7
pip install pillow
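If downgrading SciPy is not desirable, a minimal alternative (assuming Pillow and NumPy are installed; the file name is a placeholder) is to read the image with Pillow directly:

import numpy as np
from PIL import Image

img = np.array(Image.open('image.png'))  # ndarray, similar to the old scipy.misc.imread output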

Python get current path

# get current path
import os
curPath =  os.getcwd()

# add a directory to sys.path
import sys
sys.path  # inspect the current module search paths
sys.path.append('/path/to/the/directory')  # the directory that contains example_file.py

Make a directory if it does not exist

import os
directory = 'Folder'  # placeholder directory name
if not os.path.exists(directory):
    os.mkdir(directory)

List all files of a folder

import glob

path = 'WHICH_PATH'
files = glob.glob(path + "/**/*.jpg", recursive=True)  # all .jpg files, searched recursively

for f in files:
    print(f)

Download an image from a URL

import urllib.request
urllib.request.urlretrieve("https://i.loli.net/2020/12/14/3dLYfJFhIlyQ2Vo.png", "img.png")

Format conversion between OpenCV and PIL

import cv2
import numpy
from PIL import Image

# OpenCV (BGR ndarray) to PIL
image = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

# PIL to OpenCV
img = cv2.cvtColor(numpy.asarray(image), cv2.COLOR_RGB2BGR)

PIL image cannot be shown during debugging

Install the essential package

sudo apt-get install imagemagick
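If the image still does not pop up (for example over SSH without X forwarding), a simple fallback sketch is to write it to disk and open it manually; the file names below are placeholders:

from PIL import Image

img = Image.open('example.png')
img.show()                  # relies on an external viewer such as ImageMagick's display
img.save('debug_view.png')  # fallback: inspect the saved file manually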

Training on Multi-GPUs but testing on a single GPU

import torch
from collections import OrderedDict

state_dict = torch.load('myfile.pth.tar')
# create a new OrderedDict without the "module." prefix added by DataParallel
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]  # remove "module."
    new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)
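An alternative (a sketch, assuming the checkpoint was saved from nn.DataParallel and the model fits on the test GPU) is to wrap the model in DataParallel before loading, so the "module." prefix in the keys matches directly:

import torch

model = torch.nn.DataParallel(model)   # parameter keys now expect the "module." prefix
model.load_state_dict(torch.load('myfile.pth.tar'))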

Highlight the Python output

# install colorama first: pip install colorama
from colorama import init, Fore, Back, Style

init()
print(Fore.RED + Back.YELLOW + "HIGH-LIGHTED CONTENT" + Style.RESET_ALL)

Screen: to keep your work running even after the SSH connection is closed.

Another choice is Tmux (link)

Install Screen

For a root user

sudo apt-get update
sudo apt-get install screen

For a non-root user, it can be installed via conda

conda install -c conda-forge screen

Start a session with session_name

screen -S session_name

Detach from a Linux Screen session: press {Ctrl}+{a}, then {d}

To find the session ID, list the currently running screen sessions with:

screen -ls

If you want to restore a screen session

screen -r [name or id]

Kill a screen session by session ID

screen -X -S [session ID] kill

Experiment Control

Record the terminal output logs to "log.txt"

sh tran.sh | tee log.txt

Running time

import time

tStart = time.time()
# ... the code to be timed ...
print("--- %s seconds ---" % (time.time() - tStart))

Another option: a timing decorator

import time

def timeit(method):
    """
    Decorator useful for timing functions.
    Source: https://medium.com/pythonhive/python-decorator-to-measure-the-execution-time-of-methods-fa04cb6bb36d

    Usage:
    @timeit
    def my_function():
        pass
    """
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()

        if 'log_time' in kw:
            name = kw.get('log_name', method.__name__.upper())
            kw['log_time'][name] = int((te - ts) * 1000)
        else:
            print('%r  %2.2f ms' %
                  (method.__name__, (te - ts) * 1000))
        return result

    return timed

PyTorch GPU Control

Control GPU usage. If we have two GPUs and want to use them both:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
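Note that CUDA_VISIBLE_DEVICES takes effect only if it is set before CUDA is initialized, so set it near the top of the script; a minimal check (assuming PyTorch):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # set before the first CUDA call

import torch
print(torch.cuda.device_count())  # expected to report the 2 visible devices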

Video Technology

Video and Image Sequence Transfer

resize video

ffmpeg -i video1_trim.mp4 -vf scale=1036:540 video_1.mp4

video to image sequence

ffmpeg -i video_1.mp4 -vf fps=30 vd_1/%04d.png
ffmpeg -i video_1.mp4 -ss 00:00:00 -t 00:01:00 -vf fps=30  vd_1/%04d.png # in a specific time range

image sequence to video

cat *.png | ffmpeg -f image2pipe -i - video_1.mp4

OpenCV

Installation

TODO

Text

cv2.putText(img_BGR, "TEXT TO SHOW", (20,40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)

OpenCV Read and Write Video

OpenCV: read a video file "Video_Path"

import cv2
video_cap = cv2.VideoCapture("Video_Path")
Frame_Index = -1
while video_cap.isOpened():
    ret, frame = video_cap.read()
    if not ret:
        break
    else:
        Frame_Index += 1
    # Start your code
    cv2.putText(frame, "Frame: %4d" % Frame_Index, (20, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
                (255, 255, 255), 2)
video_cap.release()

OpenCV: write video frames to a video file

import cv2

class VideoSaver:
    def __init__(self, save_path, framesize, FPS=30, isSave=True, isColor=True):
        self.isSave = isSave
        if self.isSave:
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            self.out = cv2.VideoWriter(save_path, fourcc, FPS, framesize,isColor=isColor)

    def write(self, frame):
        if self.isSave: self.out.write(frame)

    def __del__(self):
        print("killed VideoSaver")
        if self.isSave: self.out.release()

# usage
m_Saver = VideoSaver(save_path="output/output_file.mp4", framesize=(1920,1080), isSave=False)
m_Saver.write(cv2_BGR_frame)  # a BGR frame read from OpenCV

Visualize PyTorch

visualize tensor

def vis(img_tensor):
    import cv2
    import numpy as np
    from skimage import exposure
    # take the first sample, convert CHW -> HWC, and rescale to 0-255
    var = img_tensor[0].detach().cpu().permute(1, 2, 0).numpy()
    var = exposure.rescale_intensity(var, out_range=(0, 255)).astype(np.uint8)
    varC = cv2.applyColorMap(var, cv2.COLORMAP_JET)
    cv2.imshow("window", varC)
    cv2.waitKey(0)
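A possible usage sketch, assuming the same NCHW shape as the earlier dummy input:

import torch
vis(torch.rand(1, 3, 48, 48))   # visualize one random 3x48x48 tensor as a heat map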