Turns out that the HAProxy website already covers this area (my mistake for overlooking it). The answer is basically a long list of low-level optimizations. Copied directly from the HAProxy website:
HAProxy involves several techniques commonly found in operating system architectures to achieve the absolute maximum performance:

- A single-process, event-driven model considerably reduces the cost of context switches and the memory usage. Processing several hundred tasks in a millisecond is possible, and the memory usage is on the order of a few kilobytes per session, while memory consumed in Apache-like models is more on the order of megabytes per process.
- An O(1) event checker on systems that allow it (Linux and FreeBSD) allows instantaneous detection of any event on any connection among tens of thousands.
- Single-buffering without any data copy between reads and writes whenever possible. This saves a lot of CPU cycles and useful memory bandwidth. Often, the bottleneck will be the I/O buses between the CPU and the network interfaces. At 10 Gbps, the memory bandwidth can become a bottleneck too.
- Zero-copy forwarding is possible using the splice() system call under Linux, and results in real zero-copy starting with Linux 3.5. This allows a small sub-3-watt device such as a Seagate Dockstar to forward HTTP traffic at one gigabit/s.
- An MRU memory allocator using fixed-size memory pools for immediate memory allocation, favoring hot cache regions over cold ones. This dramatically reduces the time needed to create a new session.
- Work factoring, such as multiple accept() at once, and the ability to limit the number of accept() calls per iteration when running in multi-process mode, so that the load is evenly distributed among processes.
- Tree-based storage, making heavy use of the Elastic Binary tree I have been developing for several years. This is used to keep timers ordered, to keep the run queue ordered, and to manage round-robin and least-conn queues, with only an O(log(N)) cost.
- Optimized HTTP header analysis: headers are parsed and interpreted on the fly, and the parsing is optimized to avoid re-reading any previously read memory area. Checkpointing is used when the end of the buffer is reached with an incomplete header, so that parsing does not start again from the beginning when more data is read. Parsing an average HTTP request typically takes 2 microseconds on a Pentium-M 1.7 GHz.
- Careful reduction of the number of expensive system calls. Most of the work is done in user space by default, such as time reading, buffer aggregation, and file-descriptor enabling/disabling.