I had no idea the solution was so simple: just run the network on the inputs after each training step and read off the resulting activation values.
If you really need the exact errors produced during training, you could subclass BackpropTrainer and override its train and _calcDerivs methods so they return the error for each individual unit.
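The first approach, re-activating the network after each training step and recording the per-unit activations, can be sketched without PyBrain. The following is a minimal illustration in plain NumPy (the two-layer network, XOR data, and learning rate are all hypothetical choices, not PyBrain's API):

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets (illustrative)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def activate(x):
    """Forward pass; returns hidden and output activations."""
    h = np.tanh(x @ W1)
    return h, np.tanh(h @ W2)

activation_history = []
for step in range(100):
    # one gradient-descent training step
    h, out = activate(X)
    err = out - y
    grad_out = err * (1 - out**2)          # derivative through tanh output
    W2 -= 0.1 * (h.T @ grad_out)
    W1 -= 0.1 * (X.T @ (grad_out @ W2.T * (1 - h**2)))
    # "test the network after each training step": re-activate on the
    # same inputs and store the per-unit activations for later inspection
    activation_history.append(activate(X))
```

After training, activation_history holds one (hidden, output) activation pair per step, which you can inspect or plot to see how each unit's response evolved.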