Mobile-friendly networks
Depth-wise and point-wise separated.
Interpretation:
The basic idea is to replace a full convolutional operator with a factorized version that splits convolution into two separate layers. The first layer, called a depthwise convolution, performs lightweight filtering by applying a single convolutional filter per input channel. The second layer is a 1 × 1 convolution, called a pointwise convolution, which builds new features by computing linear combinations of the input channels.
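A minimal PyTorch sketch of this factorization (module and parameter names are only for illustration; a 3 × 3 depthwise kernel is assumed):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution (one filter per input channel)
    followed by a pointwise 1x1 convolution that linearly mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch means every filter sees exactly one input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

y = DepthwiseSeparableConv(32, 64)(torch.randn(1, 32, 56, 56))  # -> (1, 64, 56, 56)
```

Compared with a full k × k convolution, the per-pixel cost drops from k²·c_in·c_out to k²·c_in + c_in·c_out.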
Depth-wise and point-wise separated
skip connection
group convolution in the point-wise layers (group convolution: each convolution operates only on its corresponding group of input channels)
channel shuffle after the 1 × 1 point-wise group convolution
8× downsampling in the first two convolutions (conv with stride 2 + max pooling + conv with stride 2)
Interpretation:
For each residual unit in ResNeXt, the pointwise convolutions account for 93.4% of the multiplication-adds (with cardinality = 32). In tiny networks, these expensive pointwise convolutions leave only a limited number of channels under a given complexity budget, which can significantly damage accuracy.
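A quick sanity check of that number, assuming the first-stage ResNeXt (32 × 4d) bottleneck shape 256 → 128 → 128 → 256; the spatial size cancels out of the ratio:

```python
# Per-pixel multiply-adds in one ResNeXt bottleneck: 1x1 reduce, 3x3 group conv, 1x1 expand.
c, width, groups, k = 256, 128, 32, 3

pw1 = c * width                          # 1x1: 256 -> 128
gconv = k * k * width * width // groups  # 3x3 group conv, 32 groups
pw2 = width * c                          # 1x1: 128 -> 256

print(round(100 * (pw1 + pw2) / (pw1 + gconv + pw2), 1))  # 93.4
```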
To address this, a straightforward solution is to apply group convolutions to the 1 × 1 layers as well, which significantly reduces the computation cost. However, if multiple group convolutions are stacked together, there is a side effect: outputs of a given channel are derived from only a small fraction of the input channels.
A channel shuffle operation lets each group obtain input data from the other groups, so the input and output channels become fully related. Concretely, for the feature map produced by the previous group layer, first divide the channels within each group into several subgroups, then feed each group in the next layer with subgroups taken from different groups.
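A sketch of how channel shuffle is typically implemented, via a reshape-transpose-reshape on the channel axis (the function name is chosen here for illustration):

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels so that the next group convolution
    receives channels originating from every previous group."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)  # split channels into (groups, channels_per_group)
    x = x.transpose(1, 2).contiguous()        # swap the group and sub-group axes
    return x.view(n, c, h, w)                 # flatten back to (n, c, h, w)

x = torch.arange(8).view(1, 8, 1, 1)                    # channels 0..7, two groups of four
print(channel_shuffle(x, groups=2).flatten().tolist())  # [0, 4, 1, 5, 2, 6, 3, 7]
```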
With ReLU, deep networks only have the power of a linear classifier on the non-zero-volume part of the output domain. [On its active region, ReLU itself is linear.]
On the other hand, when ReLU collapses a channel (inputs < 0), it inevitably loses the information in that channel. [Everything in the negative region is lost.]
Networks traditionally have many channels, so even if some channels lose information, it may still be preserved in the others. [That is fine when the channel count is large, but in an inverted residual the bottleneck already has very few channels, and losing more of them hurts accuracy. Activation functions add useful non-linearity in high-dimensional space, yet in low-dimensional space they destroy features and do worse than a linear mapping. Since the main job of the second pointwise convolution is dimensionality reduction, ReLU should not be applied after that reduction.]
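A sketch of an inverted residual block consistent with this reasoning: expand to a wide representation where ReLU is safe, filter depthwise, then project back to the thin bottleneck with a purely linear 1 × 1. The expansion factor, ReLU6, and layer layout follow the usual MobileNetV2-style recipe; keeping input and output channels equal is a simplification made here:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, ch, expansion=6, stride=1):
        super().__init__()
        hidden = ch * expansion
        self.use_res = stride == 1
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),          # expand: thin -> wide
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),                        # non-linearity in high dimensions
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),          # depthwise filtering
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False),          # linear projection back to the
            nn.BatchNorm2d(ch),                            # bottleneck: no ReLU here
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

y = InvertedResidual(24)(torch.randn(1, 24, 56, 56))  # -> (1, 24, 56, 56)
```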
The total amount of memory is dominated by the size of the bottleneck tensors, rather than by the (much larger) tensors internal to the bottleneck. For a residual structure, the required memory is simply the maximum total size of the combined inputs and outputs across all operations. A bottleneck with expansion is therefore very memory-efficient at inference time, since its bottleneck tensors are thin.
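A rough numeric illustration of this point (the shapes and expansion factor are made up; weights are ignored and the expanded tensor is assumed to be computable tile by tile):

```python
h, w, c, t = 14, 14, 96, 6     # bottleneck feature map and expansion factor

bottleneck_io = 2 * h * w * c  # block input + output that must stay resident
expanded      = t * h * w * c  # internal expanded tensor, ~t times larger

print(bottleneck_io, expanded) # 37632 112896
```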
Equal channel width minimizes memory access cost (MAC) (see the numeric sketch after this list).
Excessive group convolution increases MAC.
Network fragmentation reduces degree of parallelism.
Element-wise operations are non-negligible.
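These are the four practical guidelines from the ShuffleNet V2 paper. As a minimal sketch of the first one: for a 1 × 1 convolution on an h × w map with c1 input and c2 output channels, take MAC ≈ hw(c1 + c2) + c1·c2 (feature-map reads/writes plus weights) and hold the FLOPs fixed by keeping c1·c2 constant:

```python
# MAC of a 1x1 conv: hw*(c1 + c2) activations read/written + c1*c2 weights.
# FLOPs B = h*w*c1*c2 are identical in all three cases (c1*c2 = 16384).
h = w = 56

def mac(c1, c2):
    return h * w * (c1 + c2) + c1 * c2

print(mac(128, 128))  # 819200   balanced channels: minimal MAC
print(mac(64, 256))   # 1019904  same FLOPs, ~25% more MAC
print(mac(32, 512))   # 1722368  same FLOPs, ~2.1x the MAC
```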