If you think about it, the same thing would happen even when your target is 1 and the output is 1.
The reason this doesn't happen in practice is that a properly functioning backpropagation network rarely produces an exact 1 or 0: with a sigmoid activation function at each node, you get values that are merely close to 0 or 1. If an activation actually reaches 0 or 1, it means the sigmoid has saturated. You can see how the sigmoid function behaves here.
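For reference, the standard logistic sigmoid and its derivative are:

$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)$$

Note that the derivative goes to 0 exactly when $\sigma(x)$ approaches 0 or 1, which is what saturation means.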
EDIT: I think I should focus on the saturation. Suppose you have a 1 at the output layer. This means your sigmoid function returned a value essentially equal to 1, which in turn means the value fed into it was around 6 or larger. If you look at the sigmoid plot, you'll see that when x is near 6, the output is close to 1 and the derivative of the output is close to 0 as well. This is the situation where we say the sigmoid has "saturated". You do want to avoid situations like that, because a near-zero derivative means almost no gradient flows back through that node during training. Hope it's clearer now.
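If it helps, here is a quick Python sketch (my own, not from any particular library) that shows the saturation numerically: by x = 6 the output is already about 0.9975 and the derivative has collapsed to about 0.0025.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative expressed via the sigmoid itself: s * (1 - s)."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Compare a non-saturated input (0), a mildly large one (2),
# and a saturating one (6).
for x in (0.0, 2.0, 6.0):
    print(f"x={x}: sigmoid={sigmoid(x):.4f}, derivative={sigmoid_derivative(x):.4f}")

# Output:
# x=0.0: sigmoid=0.5000, derivative=0.2500
# x=2.0: sigmoid=0.8808, derivative=0.1050
# x=6.0: sigmoid=0.9975, derivative=0.0025
```

Since the backpropagated error at a node is multiplied by this derivative, a saturated node passes back almost nothing, and its incoming weights barely move.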