How does Caffe work

1. Blob

  • Blob is Caffe's basic data structure: a four-dimensional array laid out as Batch × Channel × Height × Width. It stores the network's neuron activations and parameters, together with the corresponding gradients (the activation residuals and dW, db). A Blob exposes a family of similar-looking accessors: cpu_data, gpu_data, cpu_diff, gpu_diff, mutable_cpu_data, mutable_gpu_data, mutable_cpu_diff, mutable_gpu_diff. These are the copies stored on the CPU and the GPU respectively (as far as I recall, the two copies are synchronized automatically). The *_data accessors hold the activations and the parameters W and b; the *_diff accessors hold the residuals and the gradients dW and db. For each accessor, the mutable and non-mutable variants point to the same memory; the non-mutable one is read-only while the mutable one is writable.
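
A minimal sketch of how these accessors are used in practice (a hypothetical standalone snippet, assuming the standard Caffe C++ headers and a built libcaffe; it is not code from the original post):

```cpp
#include <caffe/blob.hpp>

using caffe::Blob;

int main() {
  // A 4-D blob laid out as Batch x Channel x Height x Width.
  Blob<float> blob(2, 3, 4, 4);

  // mutable_cpu_data() returns a writable pointer to the same memory
  // that cpu_data() exposes read-only.
  float* data = blob.mutable_cpu_data();
  for (int i = 0; i < blob.count(); ++i) {
    data[i] = 0.1f * i;            // pretend activations / weights
  }

  // The diff half of the blob stores gradients (dW, db, or activation residuals).
  float* diff = blob.mutable_cpu_diff();
  for (int i = 0; i < blob.count(); ++i) {
    diff[i] = 0.01f;               // pretend gradient
  }

  // Blob::Update() applies data := data - diff; this is the primitive that
  // Net::Update() eventually uses to apply gradients to the parameters.
  blob.Update();
  return 0;
}
```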

2. Layer

  • Layer represents the many kinds of layers in a neural network; stacked together they form the network. Typically an image or sample is read in by a data layer and then handed on layer by layer. Apart from the data layers, which are somewhat special, most layers implement four functions: LayerSetUp, Reshape, Forward, and Backward. LayerSetUp initializes the layer: allocating memory, filling in initial values, and so on. Reshape adapts the input dimensions, for example flattening a pooling output into a vector before it feeds a fully connected layer. Forward is the forward pass and Backward is the backward pass. (If, like me, you prefer shortcuts, the Backward functions are the most rewarding part to read while studying; deriving the formulas on paper as you read gives a much more intuitive understanding.) So how does data move between layers? Every layer has one (or more) bottom and top blobs, which store its inputs and outputs. For example, bottom[0]->cpu_data() holds the input neuron activations; swap bottom for top to get the outputs; swap cpu_data() for cpu_diff() to get the activation residuals; swap cpu for gpu for the copies on the GPU; add mutable to make them writable. If a layer has multiple inputs or outputs there is also a bottom[1], and so on; the accuracy layer, for instance, has two inputs, fc8 and label. Each layer's parameters live in this->blobs_: by convention this->blobs_[0] holds W and this->blobs_[1] holds b, so this->blobs_[0]->cpu_data() holds the values of W and this->blobs_[0]->cpu_diff() holds the gradient dW; b and db work the same way, and again the gpu and mutable variants behave as above. Finally, every layer that can run on the GPU has a .cpp and a .cu file with the same name; the .cu file mostly calls CUDA kernels, which can be found in math_functions.cu. A toy layer illustrating this plumbing is sketched below.
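
To make the bottom/top/blobs_ plumbing concrete, here is a hypothetical toy layer that adds a single learnable scalar to its input (a sketch only; the Layer and Blob calls are the real Caffe API, everything else is made up and no layer registration or GPU path is shown):

```cpp
#include <vector>
#include <caffe/blob.hpp>
#include <caffe/layer.hpp>

template <typename Dtype>
class ScalarBiasLayer : public caffe::Layer<Dtype> {
 public:
  explicit ScalarBiasLayer(const caffe::LayerParameter& param)
      : caffe::Layer<Dtype>(param) {}

  // Allocate the single learnable parameter in blobs_[0].
  virtual void LayerSetUp(const std::vector<caffe::Blob<Dtype>*>& bottom,
                          const std::vector<caffe::Blob<Dtype>*>& top) {
    this->blobs_.resize(1);
    this->blobs_[0].reset(new caffe::Blob<Dtype>(1, 1, 1, 1));
    this->blobs_[0]->mutable_cpu_data()[0] = Dtype(0);
  }

  // The output has the same shape as the input.
  virtual void Reshape(const std::vector<caffe::Blob<Dtype>*>& bottom,
                       const std::vector<caffe::Blob<Dtype>*>& top) {
    top[0]->ReshapeLike(*bottom[0]);
  }

 protected:
  // Forward: read bottom's data, write top's data.
  virtual void Forward_cpu(const std::vector<caffe::Blob<Dtype>*>& bottom,
                           const std::vector<caffe::Blob<Dtype>*>& top) {
    const Dtype b = this->blobs_[0]->cpu_data()[0];
    const Dtype* in = bottom[0]->cpu_data();
    Dtype* out = top[0]->mutable_cpu_data();
    for (int i = 0; i < bottom[0]->count(); ++i) out[i] = in[i] + b;
  }

  // Backward: read top's diff, fill blobs_[0]'s diff (db) and bottom's diff.
  virtual void Backward_cpu(const std::vector<caffe::Blob<Dtype>*>& top,
                            const std::vector<bool>& propagate_down,
                            const std::vector<caffe::Blob<Dtype>*>& bottom) {
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype db = 0;
    for (int i = 0; i < top[0]->count(); ++i) db += top_diff[i];
    this->blobs_[0]->mutable_cpu_diff()[0] = db;
    if (propagate_down[0]) {
      Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
      // For a plain additive bias, d(loss)/d(input) equals d(loss)/d(output).
      for (int i = 0; i < bottom[0]->count(); ++i) bottom_diff[i] = top_diff[i];
    }
  }
};
```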

3. Net

  • Net stacks the layers together according to the definitions in train_val.prototxt. It first initializes every layer and then repeatedly updates: each update runs one full forward pass and one full backward pass over the whole network, then folds in the gradients computed by every layer to complete the update. Note that in Backward each layer only computes dW and db; W and b themselves are updated all together at the end, inside the Net's Update. Also, when training a model in Caffe there are usually two Nets, one for train and one for test. The large block of output printed when training starts, including the network structure, is produced here as well. A sketch of driving a Net directly follows below.
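
A sketch of driving a Net by hand (assumes the API of recent BVLC Caffe, a train_val.prototxt whose data layers feed themselves, e.g. from LMDB, and CPU mode; older versions have a slightly different ForwardBackward signature):

```cpp
#include <iostream>
#include <caffe/common.hpp>
#include <caffe/net.hpp>

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::CPU);

  // Stack the layers exactly as train_val.prototxt defines them.
  caffe::Net<float> net("train_val.prototxt", caffe::TRAIN);

  for (int iter = 0; iter < 100; ++iter) {
    // One full forward + backward pass. Each layer's Backward only fills the
    // dW/db stored in its parameter blobs' diff; nothing is applied yet.
    float loss = net.ForwardBackward();

    // The actual W/b update happens here, in one place for the whole net:
    // every learnable blob does data := data - diff. (During real training
    // the Solver scales the diffs by the learning rate etc. before this.)
    net.Update();
    net.ClearParamDiffs();  // zero the diffs before the next iteration

    if (iter % 20 == 0) std::cout << "iter " << iter << " loss " << loss << "\n";
  }
  return 0;
}
```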

4. Solver

  • Solver trains the Net according to the parameters defined in solver.prototxt. It first initializes a train net and a test net, and then its Step function iterates on the network, repeating two steps: (1) use ComputeUpdateValue to work out the per-iteration quantities, e.g. the learning rate and the weight-decay contribution; (2) call the Net's Update function to update the whole network. The bulk of the output during iteration, such as the current loss and learning rate, is printed from here too. To tie the whole process together, start from tools/caffe, the entry point we use most often: train a network, follow the direction the data flows to see how the network gets updated, and then dig into whichever parts interest you. A sketch of what that entry point boils down to follows below.
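
Roughly what the `caffe train --solver=solver.prototxt` path in tools/caffe.cpp boils down to (a sketch; the helper names match recent BVLC Caffe, while older versions use ReadProtoFromTextFileOrDie / GetSolver instead):

```cpp
#include <caffe/caffe.hpp>

int main() {
  caffe::SolverParameter solver_param;
  caffe::ReadSolverParamsFromTextFileOrDie("solver.prototxt", &solver_param);

  // The Solver constructor builds the train net (and the test nets) from the
  // net prototxt referenced inside solver.prototxt.
  boost::shared_ptr<caffe::Solver<float> > solver(
      caffe::SolverRegistry<float>::CreateSolver(solver_param));

  // Solve() loops Step(): forward/backward on the train net, then the update
  // (learning rate, momentum, weight decay), plus periodic test-net evaluation,
  // snapshotting, and the loss / learning-rate logging mentioned above.
  solver->Solve();
  return 0;
}
```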

5. Code Invocation Logic

  • Initialization first: the train function creates a Solver object. The Solver holds a Net object as an internal member, so constructing the Solver also constructs the Net; the Solver itself is initialized from the prototxt file passed into train. Likewise, the Net object holds a vector of Layer objects that will be filled with the individual layers; using the settings handed over from the Solver's initialization, the Net locates the layer descriptions in the corresponding xxx.prototxt file and uses them to populate this vector of Layers. Then the training process: the train function calls the Solver's Solve function, which, following the configured parameters, repeatedly calls the Net's ForwardBackward function. The Net's Forward and Backward functions mainly control the ordering of the passes, calling each Layer's forward and backward functions in sequence, while the Layers' Forward and Backward functions implement the per-layer details. The call chain is outlined below.
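
A simplified outline of that call chain, with function names taken from the BVLC Caffe sources (comment-only sketch, details omitted):

```cpp
// tools/caffe.cpp : train()
//   ReadSolverParamsFromTextFileOrDie(FLAGS_solver, &solver_param)
//   SolverRegistry<Dtype>::CreateSolver(solver_param)  // Solver constructor
//     Solver::InitTrainNet() / InitTestNets()          // Net constructors
//       Net::Init(net_param)                           // fills the layers_ vector,
//                                                      // calls every layer's SetUp
//   solver->Solve()
//     Solver::Step(iters)                              // the training loop
//       net_->ForwardBackward()                        // Net only orders the calls:
//         layers_[i]->Forward(bottom, top)             //   per-layer math
//         layers_[i]->Backward(top, ..., bottom)       //   per-layer gradients
//       Solver::ApplyUpdate()                          // learning rate, momentum,
//         ComputeUpdateValue(...)                      //   weight decay, then
//         net_->Update()                               //   data := data - diff
```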

6. Code Flow Diagram