
Design Choices for Mobile Friendly Deep Learning Models, Semantic Segmentation

Define Task and Preprocess Data

  • Define a general task and collect as much labeled ground truth (GT) as possible.

  • Find a specific task that might be a subset of the original task. It might have only 1/10 of the GTs, but collect tons of scenario-specific images without GTs.

  • Train a SOTA model, even if it is computationally redundant, on the general GTs.

  • Run the SOTA model's inference on the collected scenario-specific images.

  • Post-process and manually select the SOTA inference results as pseudo ground truth (GT') to train small models.

  • Do hard negative mining at both the image level and the pixel level.
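The pseudo-labeling step above can be sketched as follows. Everything here is a hypothetical stand-in (the teacher outputs, the confidence field, and the threshold); it only illustrates keeping high-confidence teacher predictions as GT' for the small student model.

```python
def select_pseudo_labels(predictions, min_confidence=0.9):
    """Keep only high-confidence teacher predictions as pseudo ground truth (GT')."""
    return [(img, mask) for img, mask, conf in predictions if conf >= min_confidence]

# Hypothetical teacher outputs: (image id, predicted mask, mean pixel confidence).
teacher_outputs = [
    ("img_001", "mask_001", 0.97),
    ("img_002", "mask_002", 0.62),   # too uncertain, dropped
    ("img_003", "mask_003", 0.91),
]
pseudo_gt = select_pseudo_labels(teacher_outputs)
# pseudo_gt keeps img_001 and img_003; a small student model is then
# trained on these selected samples (plus manual review, as noted above).
```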

Fast and Accurate Backbone

Here I will use semantic segmentation as an example, but we can treat the backbone as a feature encoder: most of the techniques here can also be applied to other tasks. For example, DetNet uses dilated convolution to build a better detection backbone.

  • Use large kernels / fast downsampling at the beginning.

  • Keep at least 1/16 resolution for most of the remaining layers.

  • At the end of the backbone, go deeper at lower resolution to capture better global information. A good take-home is 1/64 resolution, 5-10 layers deeper.

  • Use Hybrid Dilated Convolution (dilation rates 1, 2, 5 in sequence) to eliminate the grid pattern.

  • At the same FLOPs, keep MAC (memory access cost) as low as possible. Take-home: use equal input / output channel counts.

  • Doing convolution on half of the channels and then shuffling (as in ShuffleNet V2) is good enough.

  • Training on a classification dataset larger than ImageNet boosts performance with no increase in computational complexity.
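The grid pattern that Hybrid Dilated Convolution avoids can be checked with a small 1-D sketch (my own illustration, not from the HDC paper): stacked 3-tap dilated convolutions reach input offsets that are sums of {-d, 0, d}, one term per layer. With uniform rates (2, 2, 2) only even offsets are reachable, so odd pixels are never sampled; with rates (1, 2, 5) the coverage is dense.

```python
from itertools import product

def reachable_offsets(dilations):
    """1-D input offsets covered by stacked 3-tap convs with the given dilation rates."""
    return {sum(taps) for taps in product(*[(-d, 0, d) for d in dilations])}

uniform = reachable_offsets((2, 2, 2))   # every sum is even -> holes at odd pixels
hdc = reachable_offsets((1, 2, 5))       # dense coverage of [-8, 8]

print(sorted(uniform))  # no odd offsets: the grid artifact
print(sorted(hdc))      # all integers from -8 to 8
```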

Segmentation Structure

  • SPP is faster than ASPP, especially when used directly after a DRN.

  • Since SPP only needs pooled feature maps, use the deepest part of the backbone (introduced above) as the SPP input, and the deepest stride-16 part as the base feature map to be refined.

  • Using low-level features always boosts performance, but the lower the features we use, the larger the computational cost. Take-home: 1/4 resolution is enough for most real-time cases.

  • Both DeepLabV3+ and UNet use low-level features, but the former works far better in general cases. The reasons are a combination of:

    • a better final backbone output for V3+;

    • multi-scale information fusion from ASPP;

    • bilinear interpolation, which performs similarly to deconvolution in most cases but runs more efficiently;

    • a better training strategy: train with a large batch size to accumulate BN means and variances, then freeze BN and train the convolutions.

  • Bilinear interpolation works well on small objects, but not as well as deconvolution on large objects and binary-class tasks.

  • Also use depth-wise + point-wise convolutions, or shuffle units, in the decoder.

  • Squeeze-and-excitation channel attention can be used after SPP. Skip connection + sigmoid works better than sigmoid alone.

  • SE channel attention costs few FLOPs, but it does add parameters. More efficient attention methods, such as CBAM, could be used instead.

  • Use cross-entropy + L2 regression + Dice loss for binary tasks.

  • Post-processing:

    • CRITICAL!

    • Keeping the output steady over time is important to the human visual system.

    • Try as many traditional image processing methods as possible. Easy tricks can make a total difference.

  • Transferring to other tasks:

    • Before the final segmentation classifier, the whole structure described above is a feature encoder. Treating it as an FPN and doing detection should also work.
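The decoder suggestion above (depth-wise + point-wise instead of standard convolution) can be motivated by a quick multiply count. The shapes below are illustrative, not from the text:

```python
def standard_conv_flops(h, w, c_in, c_out, k=3):
    # one k x k x c_in filter per output channel, applied at every position
    return h * w * c_out * k * k * c_in

def separable_conv_flops(h, w, c_in, c_out, k=3):
    depthwise = h * w * c_in * k * k   # one k x k spatial filter per input channel
    pointwise = h * w * c_in * c_out   # 1 x 1 conv mixes channels
    return depthwise + pointwise

h = w = 64
c_in = c_out = 128
std = standard_conv_flops(h, w, c_in, c_out)
sep = separable_conv_flops(h, w, c_in, c_out)
print(std / sep)  # ~8.4x fewer multiplies for the separable form at these shapes
```

The ratio approaches k*k (here 9x) as the channel count grows, which is why the separable form dominates mobile-friendly decoders.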

Last updated 6 years ago