What's the correct reasoning behind solving the vanishing/exploding gradient problem in deep neural networks?

datascience.stackexchange https://datascience.stackexchange.com/questions/45322

  •  01-11-2019

I have read several blog posts where the suggested solution to the vanishing/exploding gradient problem in a deep neural network is to use the ReLU activation function instead of tanh and sigmoid.

But I have encountered an explanation in a lecture by Prof. Andrew Ng, who says that a partial solution to the vanishing gradient problem is a better, or more careful, choice of the random initialization of the weights in your neural network.

i.e., the solution is:

Set the variance of each weight Wi to 1/n, where n is the number of input features going into the neuron, under the assumption that the input features (activations) have roughly mean 0 and variance 1. This keeps each weight matrix W from being much bigger than 1 or much smaller than 1, so the signal neither explodes nor vanishes too quickly as it passes through the layers.
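To make that variance argument concrete, here is a minimal numerical check (my own sketch, not from the lecture; NumPy and the feature count n = 1000 are assumptions): with inputs of roughly mean 0 and variance 1 and Var(Wi) = 1/n, the pre-activation z = Σ wi·xi has variance close to 1, so the scale of the signal is preserved.

```python
# Minimal check of the variance argument: if the n inputs have mean 0 and
# variance 1, and each weight has variance 1/n, then z = sum_i w_i * x_i
# has variance close to 1, so the signal neither blows up nor shrinks.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                             # number of input features (assumed)
x = rng.standard_normal((n, 10000))                  # inputs: mean 0, variance 1
W = rng.standard_normal((1, n)) * np.sqrt(1.0 / n)   # Var(W_i) = 1/n

z = W @ x                                            # pre-activation for one neuron
print(z.var())                                       # ~ 1, i.e. the scale is preserved
```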

  • So, if you are using a ReLU activation function, setting the standard deviation of Wi to sqrt(2/n) (i.e. variance 2/n) works better.
  • If you are using a tanh activation function, setting the standard deviation of Wi to sqrt(1/n) (i.e. variance 1/n) works better.
  • Or, in some cases, Xavier initialization is suggested instead.
  • Also, if needed, this scale can be treated as another hyperparameter: multiply the above formula by an extra factor and tune that multiplier as part of your hyperparameter search (a rough sketch follows this list).
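As a rough illustration of the list above (the function name init_weights, the scale multiplier, and the use of NumPy are my own assumptions, not part of the lecture), the following sketch applies the sqrt(2/n) scaling for ReLU layers and sqrt(1/n) for tanh layers, with the extra multiplier exposed as a tunable hyperparameter:

```python
# Hedged sketch of the initialization schemes discussed above; names and the
# `scale` hyperparameter are illustrative.
import numpy as np

def init_weights(layer_sizes, activation="relu", scale=1.0, seed=0):
    """Return a list of weight matrices W[l] of shape (n_out, n_in).

    activation="relu": Var(W) = 2 / n_in  (std = sqrt(2/n_in))
    activation="tanh": Var(W) = 1 / n_in  (std = sqrt(1/n_in))
    `scale` is the extra multiplier that can be tuned as a hyperparameter.
    """
    rng = np.random.default_rng(seed)
    weights = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        var = 2.0 / n_in if activation == "relu" else 1.0 / n_in
        W = rng.standard_normal((n_out, n_in)) * np.sqrt(var) * scale
        weights.append(W)
    return weights

# Example: a ReLU network with three hidden layers of 512 units.
weights = init_weights([784, 512, 512, 512, 10], activation="relu")
```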

Therefore, it is the choice of a reasonable scaling for the weight initialization, not simply the use of ReLU, that keeps the weights from exploding or decaying to zero too quickly, which in turn helps in training a reasonably deep network without the weights or the gradients exploding or vanishing too much. Please correct me if my understanding is wrong or incomplete!

No correct solution

Licensed under: CC-BY-SA with attribution