Question

Can someone provide me with a graphical representation of a deep-learning network?

Something like: this is layer 1, layer 2, layer 3, etc., showing the neurons in each layer, the weights between the neurons of adjacent layers, and how everything is connected.

I don't want anything big; I just want the layers shown as matrices, because I can't quite work out how to represent the whole network as interconnected matrices.

Even if the matrices are only 2x2 that's fine; I just want an example to build on.


Solution

Matrix representation

You will not be modelling the neurons as matrices. Instead you only need to represent the weight layers as individual matrices.

0 hidden layers
In this instance you would only need a single matrix. This will be of size:

n x m //    n: inputs,   m: outputs

The elements of the matrix represent the individual weights of that layer, as illustrated below:

[Figure: matrix representation of a network with no hidden layers]
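
As a minimal sketch (assuming NumPy and made-up sizes of n = 2 inputs and m = 2 outputs), the whole network is just one weight matrix:

import numpy as np

# 2 inputs, 2 outputs, no hidden layer: a single 2x2 weight matrix.
# W[i, j] is the weight from input i to output j.
W = np.array([[0.1, -0.3],
              [0.7,  0.2]])

x = np.array([1.0, 0.5])   # one input vector (1 x n)
y = x.dot(W)               # output vector (1 x m)
print(y)                   # -> [ 0.45 -0.2 ]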

n hidden layers
Each weight layer has its own matrix. The matrix will be of size:

n x m //    n: inputs to this layer,   m: outputs from this layer

[Figure: graphic visualization of a network with a single hidden layer]
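
The same idea as a sketch (the layer sizes and random values below are my own, purely for illustration): the network is nothing more than a list of weight matrices, one per weight layer.

import numpy as np

# 2 inputs -> 3 hidden neurons -> 2 outputs
W1 = np.random.randn(2, 3)   # weights between the input layer and the hidden layer
W2 = np.random.randn(3, 2)   # weights between the hidden layer and the output layer

weights = [W1, W2]           # the whole network as a list of matrices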

The calculations

You will have to incrementally perform a dot product between the input signals and the weight matrices:

input_vector: 1 x n matrix,    n: number of inputs
weight_layer: n x m matrix,    n: number of inputs to this layer,   m: number of outputs from this layer

input_vector.dot( weight_layer ) # forward calculation
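
Putting it together, a minimal forward pass in NumPy might look like the sketch below. The sigmoid activation is my own addition for illustration; the answer above only shows the raw dot products.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(input_vector, weight_layers):
    """Propagate a 1 x n input through each weight matrix in turn."""
    signal = input_vector
    for W in weight_layers:
        signal = sigmoid(signal.dot(W))  # dot product, then a non-linearity
    return signal

x = np.array([1.0, 0.5])                  # 1 x 2 input vector
weights = [np.random.randn(2, 3),         # input layer -> hidden layer
           np.random.randn(3, 2)]         # hidden layer -> output layer
print(forward(x, weights))                # 1 x 2 output vector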
Licensed under: CC-BY-SA with attribution