Question

I would like to train a network with multiple output layers.

in -> hidden -> out 1
             -> out 2

Is this possible? If so, how do I set up the datasets and trainer to accomplish this?


OTHER TIPS

Since you are looking to split your output into several softmax regions, you can use the PartialSoftmaxLayer provided by PyBrain.

Note that it is limited to slices of equal length, but its code can serve as inspiration if you need a custom output layer:

https://github.com/pybrain/pybrain/blob/master/pybrain/structure/modules/softmax.py
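To make the "several softmax regions" idea concrete, here is a pure-Python sketch (not PyBrain's actual implementation) of a partial softmax: the output vector is split into consecutive slices of the same length, and softmax is applied to each slice independently. The function name and example numbers are illustrative only.

```python
import math

def partial_softmax(vec, slice_len):
    """Apply softmax independently to each consecutive slice of
    length `slice_len`. All slices must have the same length,
    mirroring PartialSoftmaxLayer's equal-slice restriction."""
    assert len(vec) % slice_len == 0, "vector must split into equal slices"
    out = []
    for start in range(0, len(vec), slice_len):
        chunk = vec[start:start + slice_len]
        exps = [math.exp(x) for x in chunk]
        total = sum(exps)
        out.extend(e / total for e in exps)
    return out

# Two independent 2-way softmax regions in a single 4-unit output layer:
probs = partial_softmax([1.0, 2.0, 0.5, 0.5], 2)
```

Each slice of the result sums to 1 on its own, so each region behaves like a separate classifier head even though the network has a single output layer.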

No. You can, however, have multiple hidden layers, like this:

in -> hidden 1 -> hidden 2 -> out

Alternatively, you can have multiple output neurons (in a single output layer).

Technically, you can set up any arrangement of neurons and layers, connect them however you like, and call them whatever you want, but the above is the general way of doing it.

It would be more work for you as the programmer, but if you want two different outputs, you can always concatenate them into one vector and use that as the output of the network.

in --> hidden --> concatenate([out1, out2])
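A minimal pure-Python sketch of this concatenation trick (the helper names and sample dimensions are hypothetical, not part of any library): the two per-sample targets are joined into one vector when building the training data, and the network's single prediction vector is split back apart afterwards.

```python
def concat_targets(out1, out2):
    """Join two per-sample target vectors into one, so a single
    output layer of size len(out1) + len(out2) learns both tasks."""
    return list(out1) + list(out2)

def split_prediction(pred, len1):
    """Undo the concatenation on the network's prediction: the
    first len1 components belong to task 1, the rest to task 2."""
    return pred[:len1], pred[len1:]

# Hypothetical sample: a 3-dim target for task 1, a 2-dim target for task 2.
target = concat_targets([0.1, 0.2, 0.3], [1.0, 0.0])
y1, y2 = split_prediction(target, 3)
```

With a dataset built this way, an ordinary supervised trainer sees just one output layer of combined width and needs no special handling.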

A possibly significant drawback of this approach is that if the two outputs are on different scales, the concatenation will distort the error metric used to train the network.

However, even if you were able to use two separate outputs, you would still need to solve this problem, likely by weighting the two error metrics in some way.

Potential solutions include defining a custom error metric (e.g., a weighted variant of squared error or cross-entropy) and/or standardizing the two output datasets so that they share a common scale.
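Both remedies can be sketched in a few lines of pure Python (the function names, weights, and the choice of z-score standardization are illustrative assumptions, not a prescribed recipe):

```python
def weighted_squared_error(pred, target, len1, w1=1.0, w2=1.0):
    """Weighted squared error over a concatenated output: the first
    len1 components belong to task 1, the rest to task 2, and each
    task's error contribution is scaled by its own weight."""
    e1 = sum((p - t) ** 2 for p, t in zip(pred[:len1], target[:len1]))
    e2 = sum((p - t) ** 2 for p, t in zip(pred[len1:], target[len1:]))
    return w1 * e1 + w2 * e2

def standardize(column):
    """Z-score a list of target values so that both output tasks
    live on a common scale (zero mean, unit variance)."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = var ** 0.5 or 1.0  # guard against constant columns
    return [(x - mean) / std for x in column]
```

Standardizing each target column before concatenation often makes explicit weighting unnecessary, since both tasks then contribute errors of comparable magnitude.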

Licensed under: CC-BY-SA with attribution