Question

I'm currently choosing which method to use to handle new TCP connections:
- a new process per connection, or
- a fixed number of processes that handle all connections.

The maximum number of connections in my project is 1200. Once a connection is established, it is durable and rarely reestablished.

Can Linux effectively handle 1200 processes running in parallel on a host with two hexa-core Xeon CPUs (24 hardware threads in total)? Where is the threshold?
I'm not talking about ulimit. I'm asking whether performance with "a new process per connection" will be worse than with "a fixed number of processes that handle all connections".
Or is 1200 processes too many for Linux, causing large context-switching overhead?


Solution

Linux can handle a thousand processes without problems, but it will spend a lot of time forking and context switching. Whether your application will stay responsive under those conditions really depends on the kind of work it performs.

On today's hardware, the classic "one client, one process" model usually becomes a bottleneck when the number of clients reaches a few hundred (this is very approximate).
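For reference, the "one client, one process" pattern the answer describes looks roughly like the sketch below (an assumption of mine, not code from the original; it uses Python on Linux, with a socketpair standing in for an accepted TCP connection so the demo is self-contained):

```python
import os
import socket

def handle_client(conn: socket.socket) -> None:
    """Child-process loop: echo back whatever the client sends."""
    while True:
        data = conn.recv(4096)
        if not data:          # empty read means the peer closed
            break
        conn.sendall(data)
    conn.close()

def serve_one(conn: socket.socket) -> int:
    """Fork a dedicated process for one connection; returns the child PID."""
    pid = os.fork()
    if pid == 0:              # child: serve this client, then exit
        handle_client(conn)
        os._exit(0)
    conn.close()              # parent: the child owns the connection now
    return pid

# Demo with a socketpair instead of a real accepted TCP socket.
parent_side, child_side = socket.socketpair()
pid = serve_one(child_side)
parent_side.sendall(b"ping")
reply = parent_side.recv(4096)
parent_side.close()           # EOF makes the child's loop exit
os.waitpid(pid, 0)
print(reply)                  # b'ping'
```

Every accepted connection costs one `fork()` plus a kernel task that competes for the scheduler, which is exactly the per-client overhead being discussed.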

To serve thousands of clients, techniques were developed to overcome the forking and context-switching overhead. They usually involve non-blocking IO, threads, or lightweight processes (managed inside the user-space process).
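A minimal sketch of the non-blocking-IO alternative, assuming Python's standard `selectors` module (which uses epoll on Linux): one process multiplexes all connections through a readiness loop instead of forking per client. The socketpairs here are stand-ins for real TCP sockets.

```python
import selectors
import socket

# Single-process event loop multiplexing several connections.
sel = selectors.DefaultSelector()
echoed = []

def on_readable(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if data:
        conn.sendall(data)    # echo back to the client
        echoed.append(data)
    else:
        sel.unregister(conn)  # peer closed: drop the connection
        conn.close()

# Simulate two clients with socketpairs instead of accept()ed sockets.
clients = []
for _ in range(2):
    client_end, server_end = socket.socketpair()
    server_end.setblocking(False)
    sel.register(server_end, selectors.EVENT_READ, on_readable)
    clients.append(client_end)

clients[0].sendall(b"hello")
clients[1].sendall(b"world")

while len(echoed) < 2:        # run the loop until both echoes happen
    for key, _events in sel.select(timeout=1):
        key.data(key.fileobj) # dispatch to the registered callback

print(sorted(echoed))         # [b'hello', b'world']
```

The same loop scales to 1200 durable connections in one process (or one process per core), since idle connections cost no scheduling at all until the kernel reports them readable.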

Since your number of clients is higher than what blocking IO usually handles well, you should really run some benchmarks and consider using another approach.

There is a classic article about this problem to get you started: http://www.kegel.com/c10k.html

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow