Considering that no hardware actually offers a 16-bit floating-point depth buffer, this looks about right.
Some consoles (the Xbox 360, for example) expose a 24-bit floating-point depth buffer, but to do things portably you generally have to use a 32-bit depth buffer to get a floating-point representation. That necessitates an ugly 64-bit packed depth+stencil format if you need both a stencil buffer and floating-point depth, which is part of the reason ATI hardware in D3D and on the Xbox 360 offers a 24-bit floating-point depth + 8-bit stencil format that is not available in OpenGL. If GL had it, it would be called GL_DEPTH24F_STENCIL8.
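For reference, the portable packed combination in core OpenGL (3.0+) is GL_DEPTH32F_STENCIL8, which is exactly that 64-bit format: 32-bit float depth, 8-bit stencil, 24 bits of padding. A minimal sketch of allocating it as an FBO attachment, assuming a bound framebuffer and placeholder `width`/`height`:

```c
/* Sketch: packed 32-bit float depth + 8-bit stencil attachment.
 * Each texel occupies 64 bits (32F depth, 8 stencil, 24 unused). */
GLuint depthStencilTex;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, depthStencilTex, 0);
```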
Now, while a 16-bit floating-point value is rather inadequate for storing your depth, it is probably overkill (wasted space and memory bandwidth) for storing your normals. Try a format like RGBA 10:10:10:2 (fixed-point) or RGB 11:11:10 (floating-point) for your normals (if RGBA8 actually is inadequate), and with the space you save you can afford a 32-bit floating-point depth buffer.
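A sketch of what those normal targets might look like; the texture name, dimensions, and the color attachment slot are placeholders, not something from your setup:

```c
/* Sketch: a more compact normal target for the G-buffer.
 * Both formats below are unsigned, so (like RGBA8) normals need the
 * usual n * 0.5 + 0.5 remap into [0,1] before being written. */
GLuint normalTex;
glGenTextures(1, &normalTex);
glBindTexture(GL_TEXTURE_2D, normalTex);

/* 10:10:10:2 fixed-point */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
             GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, NULL);

/* ...or 11:11:10 floating-point instead:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, width, height, 0,
             GL_RGB, GL_FLOAT, NULL);
*/

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, normalTex, 0);
```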
As it stands right now you are not actually using a floating-point depth buffer in the first place; you are trying to pack your depth into one channel of a 16-bit per-component color buffer. I would suggest using an actual depth attachment with a 24-bit fixed-point or 32-bit depth image format if your current solution is not cutting it. You already have to output to a dedicated depth buffer during normal rendering anyway.
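Something along these lines (again a sketch, with placeholder names and dimensions):

```c
/* Sketch: use a real depth attachment instead of packing depth into a
 * color channel. Swap GL_DEPTH_COMPONENT24 for GL_DEPTH_COMPONENT32F
 * if you really want floating-point depth. */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

/* The same texture can be bound and sampled in the lighting pass,
 * so there is no separate "write depth to a color target" step. */
```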
Right off the bat, by using a floating-point format to store your depth you lose one bit of precision to a rather meaningless sign bit (fixed-point depth buffers have no sign bit). Also, since depth values generally already lie in the range 0-1, the extended range of a floating-point number does not really buy you anything. At 16 bits you lose more than you gain by storing depth in floating point; fixed-point depth is really the best way to go when you use fewer bits to store it.
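To put rough numbers on that (assuming the standard 1-sign / 5-exponent / 10-mantissa half-float layout): near depth = 1.0, where a perspective projection bunches up most of the scene, the representable steps compare like this:

```c
#include <stdio.h>

int main(void)
{
    /* Half-float spacing in the binade [0.5, 1.0) is 2^-11;
     * a 16-bit fixed-point depth steps by 1/65535 everywhere. */
    double half_step  = 1.0 / 2048.0;   /* ~0.000488  */
    double fixed_step = 1.0 / 65535.0;  /* ~0.0000153 */

    printf("half-float step near 1.0 : %g\n", half_step);
    printf("16-bit fixed step        : %g\n", fixed_step);
    printf("fixed-point is ~%.0fx finer near the far plane\n",
           half_step / fixed_step);
    return 0;
}
```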