Performance of containers is very close to bare metal (or, in this case, to VMs, since you will be running in VMs anyway).
Specifically:
- on volumes, disk I/O performance is native;
- outside of volumes, there is a tiny overhead when opening files, and another overhead the first time you modify a file from the original image (as the file gets copied up to the read-write layer), but after that, performance is native;
- network connections go through an extra NAT layer, which should add well under 1 ms (more like 0.01 to 0.1 ms) until you reach thousands of requests per second; past that point, you can bypass the NAT layer with tools like Pipework;
- CPU performance is native;
- memory performance is native by default; but if you enable memory accounting and limiting, there is an impact (a few %, up to 5-10% for memory-intensive workloads that grow and shrink their memory usage a lot).
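To make the trade-offs above concrete, here is a quick sketch of the relevant `docker run` flags (the image name and paths are made up for illustration, and exact flag behavior may vary with your Docker version):

```shell
# Mount a host directory as a volume: disk I/O inside /data is native,
# bypassing the copy-on-write layer entirely
docker run -v /srv/appdata:/data myimage

# Set a memory limit: this requires memory accounting to be enabled,
# which is where the small (few %) overhead comes from
docker run -m 512m myimage
```

If you don't need memory limits, leaving accounting disabled keeps memory performance fully native.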
Status monitoring should be exactly the same as for regular apps.
Network configuration: if your apps expose well-known TCP ports, you will be fine with Docker port-mapping features. If you need large ranges of TCP ports, or dynamic allocation of ports, the above-mentioned Pipework will help.
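For the well-known-port case, the built-in port mapping looks like this (port numbers are illustrative; note that range syntax is only available in more recent Docker versions, so older setups will still want Pipework for large ranges):

```shell
# Publish a single well-known TCP port through Docker's NAT layer
# (host port 80 -> container port 80)
docker run -p 80:80 myimage

# Newer Docker versions also accept a range of ports in one flag
docker run -p 8000-8100:8000-8100 myimage
```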
Don't hesitate if you have other questions! We also have an IRC channel (#docker on Freenode) and a mailing list (docker-user on Google Groups).