Neuron Visualization

The most straightforward way:

If we want to understand individual features, we can search the dataset for examples where they take on high values, either for a neuron at an individual position or for an entire channel.
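A minimal sketch of this dataset search, assuming a hypothetical ReLU neuron modeled as a single weight vector (`w` and the random `dataset` below are stand-ins for a real network layer and real inputs): compute the neuron's activation on every example, then rank examples by activation.

```python
import numpy as np

rng = np.random.default_rng(0)
dataset = rng.normal(size=(1000, 64))  # 1000 example inputs, 64 features each
w = rng.normal(size=64)                # weights of one hypothetical ReLU neuron

# Activation of the neuron on every dataset example.
acts = np.maximum(0.0, dataset @ w)

# Indices of the top-5 maximally activating examples, highest first.
top5 = np.argsort(acts)[-5:][::-1]
```

For a convolutional channel the same idea applies, except the activation map is first reduced (e.g. by its maximum or mean over spatial positions) before ranking.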

Why is this not the preferred approach? Because the optimization approach, unlike dataset search, separates the things that cause a behavior from the things that merely correlate with those causes.

Backpropagation (Optimization) method:

And if we want to create examples of output classes from a classifier, we have two options: optimizing class logits before the softmax, or optimizing class probabilities after the softmax.

Optimizing pre-softmax logits produces better visual quality. The standard explanation is that maximizing probability doesn't work very well because the optimizer can simply push down the evidence for other classes; an alternate hypothesis is that it is just harder to optimize through the softmax function.
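A toy sketch of logit maximization by gradient ascent, assuming a hypothetical linear classifier (`W` is a made-up weight matrix, not from any real model). For a linear model the gradient of a class logit with respect to the input is just that class's weight row, which makes the optimization dynamics easy to see; the final comment also hints at why the post-softmax objective is harder, since its gradient shrinks as the probability saturates.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64)) * 0.1  # hypothetical linear classifier: logits = W @ x
target = 3                           # class whose logit we maximize

x = np.zeros(64)                     # start from a blank input
lr = 0.1
for _ in range(100):
    # Gradient of logits[target] w.r.t. x for a linear model is W[target],
    # so gradient ascent steadily pushes x toward that weight row.
    x += lr * W[target]

logits = W @ x
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Post-softmax contrast: the gradient of probs[target] w.r.t. logits[target]
# is probs[target] * (1 - probs[target]), which vanishes as the probability
# saturates toward 1 -- one reason optimizing through softmax is harder.
```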

Optimizing Neuron Disadvantages:

There are also neurons that represent strange mixtures of ideas. For example, one neuron responds to two types of animal faces and also to car bodies. Examples like these suggest that neurons are not necessarily the right semantic units for understanding neural nets.
