Question

For a month or two now I've been building image classifiers, and I usually sandwich a BatchNormalization layer between my Conv2D layers. I'm not sure what it actually does, but I have seen my models learn faster when these layers are present.
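For intuition about what the layer computes: during training, batch normalization standardizes each feature across the batch (zero mean, unit variance) and then applies a learnable scale and shift. Here is a minimal numpy sketch of that forward pass; the function name `batch_norm` and its parameters are illustrative, not the Keras API (which also keeps running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Standardize each feature over the batch axis to zero mean / unit
    # variance, then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 8))  # batch of 64 feature vectors
y = batch_norm(x)
print(np.allclose(y.mean(axis=0), 0.0, atol=1e-6))  # ~zero mean per feature
print(np.allclose(y.std(axis=0), 1.0, atol=1e-2))   # ~unit variance per feature
```

For a Conv2D output the same idea applies per channel rather than per feature column, which is why the Keras layer normalizes over the batch and spatial axes by default.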

But I'm worried there might be a catch. I read somewhere that I don't need a dropout layer if I'm using batch normalization. Is that true?

Also, please tell me how I should use this layer, and for which kinds of problems I should and shouldn't use it.

Just write down anything you know about the layer that you think will help me!

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange