Question

Recently I was reviewing my team leader's pull request in our Web API hosted by Kestrel. There is a place in our code that is something of a hot path, where we process over a hundred items sequentially on each request. So I suggested to him that we process these items in parallel instead of sequentially, to speed up our API's response time, but he replied with:

In web apps in general we shouldn't do computational stuff in parallel, because the web app is inherently parallel (handles more than one request at a time). If this was a desktop app, or the work was IO - yes for sure.

I know I haven't written a lot of web services, and as such I am more interested in runtimes and operating systems, but is he right?

His main argument was that if we do parallel processing, we will take resources that Kestrel might otherwise use for serving requests. I am not sure whether Kestrel uses user-level threads or kernel-level threads (if that matters).
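To make the suggestion concrete, here is a hypothetical sketch of the two approaches under discussion. `Process` and `items` are placeholders for the real code, which isn't shown in the question; any CPU-bound per-item function would do.

```csharp
using System;
using System.Linq;

class Demo
{
    // Stand-in for the real per-item work (assumed CPU-bound).
    static int Process(int item) => item * 2;

    static void Main()
    {
        int[] items = Enumerable.Range(0, 100).ToArray();

        // Sequential, as in the pull request:
        int seqSum = items.Select(Process).Sum();

        // Parallel, as suggested in the review; PLINQ fans the work
        // out across the shared .NET thread pool:
        int parSum = items.AsParallel().Select(Process).Sum();

        Console.WriteLine(seqSum); // same result either way
        Console.WriteLine(parSum);
    }
}
```

The key point for the discussion below is that the parallel version borrows threads from the same pool Kestrel uses to serve requests.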


Solution

He's right.

Presumably, all of the processor cores on the web server are already devoted to handling user requests. If you process your hot path in parallel on the web server, the most likely outcome is that performance will not improve unless the web server is lightly loaded. You might even see a slight performance degradation under heavy load.
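A back-of-the-envelope sketch of that argument, using assumed numbers (8 cores, 20 concurrent requests, 4-way parallelism per request; the real figures depend on your hardware and traffic):

```csharp
using System;

class Demo
{
    static void Main()
    {
        int cores = 8;                 // assumed core count
        int concurrentRequests = 20;   // assumed concurrent load
        int perRequestParallelism = 4; // assumed per-request fan-out

        // Processing sequentially, the server wants roughly one busy
        // thread per in-flight request:
        Console.WriteLine(concurrentRequests);

        // With per-request parallelism, the thread demand multiplies:
        Console.WriteLine(concurrentRequests * perRequestParallelism);

        // Either way, the cores are the real limit; the extra threads
        // beyond that only add scheduling and synchronization overhead.
        Console.WriteLine(cores);
    }
}
```

Under light load the multiplication doesn't matter, which is why the parallel version can still win on a mostly idle server.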

One way to solve the problem is to offload the parallel processing onto another computer.
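A minimal sketch of that idea, with an in-process `Channel` standing in for a real queue; in production the consumer would typically run on a separate worker machine behind a message broker, and would report its result back if the caller needs it. All names here are illustrative, not from the original code.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class Demo
{
    // Stand-in for the real per-item work.
    static int Process(int item) => item * 2;

    static async Task Main()
    {
        var queue = Channel.CreateUnbounded<int>();

        // Consumer: in a real system this loop runs on a dedicated
        // worker machine, which is free to parallelize as aggressively
        // as it likes without starving the web server.
        var worker = Task.Run(async () =>
        {
            int sum = 0;
            await foreach (var item in queue.Reader.ReadAllAsync())
                sum += Process(item);
            Console.WriteLine(sum);
        });

        // Web handler's side: just enqueue the items and return quickly.
        for (int i = 0; i < 100; i++)
            await queue.Writer.WriteAsync(i);
        queue.Writer.Complete();

        await worker;
    }
}
```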

Licensed under: CC-BY-SA with attribution