Problem

I have been looking at the `layer.weights` output of Keras layers. The shape of the layer's weight matrix is listed as (number_of_input_features, dense_layer_neurons).

(See the first example in the Keras docs.)

However, in all the theoretical courses I have seen, as well as in PyTorch, the weight matrix has the opposite shape: (dense_layer_neurons, input_features), i.e. (layer_2_neurons, layer_1_neurons).

https://www.coursera.org/lecture/neural-networks-deep-learning/getting-your-matrix-dimensions-right-Rz47X

Why are these two conventions opposite to each other?

Am I missing something? Can someone please clarify?

Thanks.


Solution

This explains it: the weight matrix shape depends on how you shape the input data. Keras treats each sample as a row vector and computes y = xW, so W has shape (input_features, units); the theoretical convention (and PyTorch) treats each sample as a column vector and computes y = Wx, so W has shape (units, input_features). The two matrices are simply transposes of each other, and both produce the same numbers.

https://medium.com/from-the-scratch/deep-learning-deep-guide-for-all-your-matrix-dimensions-and-calculations-415012de1568
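To make the two conventions concrete, here is a minimal sketch in plain Python (no framework needed) with a hypothetical layer of 3 input features and 4 neurons. The numbers and the tiny `matmul` helper are only for illustration:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Keras convention: a sample is a row vector (batch, features),
# so W has shape (input_features, units) and y = x @ W.
x_row = [[1.0, 2.0, 3.0]]                 # shape (1, 3)
W_keras = [[0.1] * 4 for _ in range(3)]   # shape (3, 4)
y = matmul(x_row, W_keras)                # shape (1, 4)

# Theory / PyTorch convention: a sample is a column vector (features, 1),
# so W has shape (units, input_features) and y = W @ x.
x_col = [[1.0], [2.0], [3.0]]             # shape (3, 1)
W_torch = [[0.1] * 3 for _ in range(4)]   # shape (4, 3)  (transpose of W_keras)
z = matmul(W_torch, x_col)                # shape (4, 1)

# Same activations either way, just laid out as a row vs. a column.
print(len(y), len(y[0]))                  # prints: 1 4
print(len(z), len(z[0]))                  # prints: 4 1
```

So neither convention is wrong; each matches how that framework lays out its input batch.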

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange