Keras layer weights shape is different compared to other conventions
-
13-12-2020
Problem
I have been looking at the `layers.weights` output of Keras layers. The shape of the layer weight matrix is listed as `(number_of_input_features, dense_layer_neurons)`, as in the first example in the docs.
However, all the theoretical courses I have seen, as well as PyTorch, use the opposite convention: the weight matrix shape is `(dense_layer_neurons, input_features)`, or equivalently `(layer_2_neurons, layer_1_neurons)`.
Why are these two conventions opposite to each other? Am I missing something? Can someone please clarify?
Thanks.
Solution
This explains it: the weight matrix shape depends on how the layer applies it to the input. Keras stores its kernel as `(input_features, units)` and computes `outputs = inputs @ kernel + bias`, while PyTorch's `nn.Linear` stores its weight as `(out_features, in_features)` and computes `outputs = inputs @ weight.T + bias`. The two layouts are transposes of each other, so the math is identical.
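The transpose relationship can be sketched with plain NumPy (shapes here are illustrative, not from the original post):

```python
import numpy as np

# Hypothetical dense layer: 4 input features -> 3 neurons, batch of 2 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))

# Keras-style storage: kernel shape (input_features, units),
# applied as outputs = x @ kernel.
W_keras = rng.standard_normal((4, 3))
out_keras = x @ W_keras              # shape (2, 3)

# PyTorch-style storage: weight shape (out_features, in_features),
# applied as outputs = x @ weight.T.
W_torch = W_keras.T                  # shape (3, 4)
out_torch = x @ W_torch.T            # shape (2, 3)

# Same computation, just a transposed storage convention.
assert out_keras.shape == (2, 3)
assert np.allclose(out_keras, out_torch)
```

So neither convention is wrong; each framework's stored shape simply matches the matrix product it performs internally.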
Source: datascience.stackexchange (not affiliated)