Question

We have a server receiving data from 500-1500 GPS devices. Each device sends a packet containing around 1-4 GPS coordinates every 10-30 seconds. The server is designed asynchronously, with a listener handling connections using BeginAccept/EndAccept and communication using BeginReceive/EndReceive. Once a packet is received, the data is processed and stored in a database.

With few devices (500-700), this takes barely 50 ms, we have fewer than 50 concurrent threads running, and CPU usage is reasonable (20-40%). However, when the server is pressured with connections (1000+), the number of threads explodes to 500-600 while CPU usage drops to a few percent. Processing time also increases to several minutes.

Is the asynchronous design a bad fit for this particular scenario, with many small packets being sent at this rate, or might there be a problem in the code?

We have currently had to distribute the load across three servers to accommodate all the devices; they are all VMs with 6 CPUs and 4 GB of memory hosted on a Hyper-V server.

SOLUTION:

The solution I found, based on the answers, was to immediately schedule the received packet as a task using the .NET Task Parallel Library, as its scheduler is much smarter about distributing work across multiple cores:

void EndReceive(IAsyncResult res)
{
    // Hand the work off to the TPL scheduler instead of processing it inline
    // on the IO callback thread.
    Task.Factory.StartNew((object o) => HandleReceive(o as IAsyncResult),
                          res, TaskCreationOptions.PreferFairness);
}

Now the threads rarely exceed 50.


Solution

It sounds like somewhere in your application you're using non-asynchronous IO and blocking on the results of the operation. You may be using proper asynchrony in many places, such as the primary connection between client and server, but perhaps not when connecting to the database or something similar. This mixing of async and non-async code is likely why so many threads are being created.

By ensuring you have no blocking IO, you avoid having lots of thread-pool threads sitting around doing nothing, which appears to be the situation you're in.
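To make this concrete, here is a minimal sketch of what the fix could look like for the database step. The connection string, SQL text, and the shape of the original processing code are assumptions; the point is replacing a blocking `ExecuteNonQuery` with its Begin/End counterpart, matching the APM style the server already uses. (On .NET versions before 4.5, `BeginExecuteNonQuery` also requires `Asynchronous Processing=true` in the connection string.)

```csharp
using System.Data.SqlClient;

class PositionStore
{
    // Hypothetical helper: store one decoded packet without blocking a
    // thread-pool thread on the database round trip.
    public void StorePosition(string connectionString, string insertSql)
    {
        var conn = new SqlConnection(connectionString);
        conn.Open();
        var cmd = new SqlCommand(insertSql, conn);

        // Blocking version -- parks a thread for the whole round trip:
        // cmd.ExecuteNonQuery();

        // Non-blocking APM version -- the callback runs on an IO
        // completion thread when the database responds:
        cmd.BeginExecuteNonQuery(ar =>
        {
            cmd.EndExecuteNonQuery(ar);
            cmd.Dispose();
            conn.Dispose();
        }, null);
    }
}
```

With this shape, the receive callback returns immediately after issuing the insert, so a burst of 1000+ connections no longer translates into hundreds of threads waiting on the database.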

OTHER TIPS

What kind of operations are you doing on the server?

If they are CPU-bound, it's useless to have more threads than cores, and adding more may clutter your server with a bunch of threads fighting like dogs ;)

In this case you'd have better luck with simple processing loops, one per core.

I have never worked with this many requests at the same time, but what you could try is creating as many threads as you have cores on your CPU and then implementing a queueing system. Your threads would consume the queue, one device's coordinates at a time. This way your CPU would be used at full throttle.
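The queueing scheme above could be sketched with `BlockingCollection<T>`, which handles the producer/consumer synchronization. The `Coordinate` payload type and the `Store` method are placeholders, not part of the original code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class CoordinateQueue
{
    private readonly BlockingCollection<string> queue =
        new BlockingCollection<string>();

    // Start one consumer thread per core, as suggested above.
    public void Start()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            var worker = new Thread(() =>
            {
                // GetConsumingEnumerable blocks until an item arrives and
                // exits cleanly once CompleteAdding() has been called.
                foreach (var coordinate in queue.GetConsumingEnumerable())
                    Store(coordinate);
            });
            worker.IsBackground = true;
            worker.Start();
        }
    }

    // Called from the receive callbacks: cheap, never blocks on the DB.
    public void Enqueue(string coordinate) => queue.Add(coordinate);

    public void Shutdown() => queue.CompleteAdding();

    private void Store(string coordinate)
    {
        // Placeholder for the real database insert.
    }
}
```

The receive callbacks then only parse the packet and call `Enqueue`, so the thread count stays bounded by the core count regardless of how many devices connect.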

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow