Gluon Hybrid Intro
Deep learning frameworks can be roughly divided into two categories: declarative and imperative. With declarative frameworks (including TensorFlow, Theano, etc.) users first declare a fixed computation graph and then execute it end-to-end. The benefit of a fixed computation graph is that it is portable and runs more efficiently. However, it is less flexible, because any logic must be encoded into the graph as special operators like scan, while_loop and cond, and it is also harder to debug.
Imperative frameworks (including PyTorch, Chainer, etc.) are just the opposite: they execute commands one by one, just like old-fashioned Matlab and NumPy. This style is more flexible and easier to debug, but less efficient.
HybridBlock seamlessly combines declarative and imperative programming to offer the benefits of both. Users can quickly develop and debug models with imperative programming, and switch to efficient declarative execution by simply calling HybridBlock.hybridize().
HybridBlock
HybridBlock is very similar to Block but has a few restrictions:
- All children layers of HybridBlock must also be HybridBlock.
- Only methods that are implemented for both NDArray and Symbol can be used. For example, you cannot use .asnumpy(), .shape, etc.
- Operations cannot change from run to run. For example, you cannot do "if x:" if x is different for each iteration.
To use hybrid support, we subclass HybridBlock:
Hybridize
By default, HybridBlock runs just like a standard Block. Each time a layer is called, its hybrid_forward will be run:
Hybrid execution can be activated by simply calling .hybridize() on the top-level layer. The first forward call after activation will try to build a computation graph from hybrid_forward and cache it. On subsequent forward calls the cached graph, instead of hybrid_forward, will be invoked:
Note that before hybridize, print(x) printed an NDArray on every forward call, but after hybridize, only the first forward call printed a Symbol. On subsequent forward calls hybrid_forward is not invoked, so nothing is printed.
Hybridize will speed up execution and save memory. If the top-level layer is not a HybridBlock, you can still call .hybridize() on it and Gluon will try to hybridize its children layers instead.
hybridize also accepts several options for performance tuning. For example, you can do:
Please refer to the API manual for details.
Serializing a trained model for deployment
Models implemented as HybridBlock can be easily serialized. The serialized model can be loaded back later or used for deployment with other language front-ends like C, C++ and Scala. To this end, we simply use export and SymbolBlock.imports:
Two files, model-symbol.json and model-0001.params, are saved on disk. You can use other language bindings to load them. You can also load them back into Gluon with SymbolBlock: