Question

I implemented a simple multi-threaded HTTP server for a programming assignment a couple of weeks ago. In the assignment, users specify a document size via the browser (e.g. localhost:port/250 for a 250-byte document) and my program generates an HTML page of that size.

I tested its performance with the benchmarking tool ApacheBench, and the results were mostly what I expected from a multi-threaded program.

However, one test result has been bugging me for a while: the transfer rate dramatically decreases as the request size increases. Here are my sample ApacheBench results, using this command line:

ab -n 1000 http://localhost:port/RequestSize

RequestSize      TX 
(bytes)         (KB/s)
70              212.21
150             268.75
250             364.58
350             447.23
500             497.36
650             496.73
800             491.48
1000            432.63
1250            405.17
1750            347.19
3000            241.95
5000            161.46
10000           84.58
20000           44.05

Raising the level of concurrency only makes this distribution less steep.

What is the cause of this behavior?

Thanks,


Solution

You could fairly easily narrow down where the delay is with some micro-benchmarks in the server. Beware, though, that micro-benchmarks may lead you down the wrong optimization path(s). You should at least be able to get a decent idea of where your time is spent, though.
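As a rough illustration (the server's language isn't stated, so this is a Python sketch with invented stage names like generate_payload), one way to micro-benchmark a request handler is to time each stage separately and accumulate the totals over many requests, so the dominant stage stands out:

```python
import time

def generate_payload(size):
    # Hypothetical stand-in for the server's document generator:
    # builds an HTML page padded to roughly `size` bytes.
    return ("<html><body>" + "x" * size + "</body></html>").encode()

def handle_request(size, timings):
    # Wrap each stage in a timer and accumulate the elapsed times.
    t0 = time.perf_counter()
    payload = generate_payload(size)
    t1 = time.perf_counter()
    # ... the socket send would go here ...
    t2 = time.perf_counter()
    timings["generate"] += t1 - t0
    timings["send"] += t2 - t1
    return payload

timings = {"generate": 0.0, "send": 0.0}
for _ in range(1000):
    handle_request(10000, timings)
print(timings)
```

Comparing the accumulated totals for small versus large request sizes should show whether the slowdown comes from generating the document or from writing it to the socket.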

If you are generating the payload on the fly, the data generation itself may become the bottleneck at larger sizes. The MTU on your loopback interface is typically much higher than the standard 1500 bytes of LAN/WAN links, so fragmentation shouldn't be a factor.
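For instance, if the page body is built up byte by byte, each append may re-copy the growing buffer, so cost can grow faster than linearly with document size. A quick comparison (a Python sketch; the generator shapes are assumptions, not the asker's actual code):

```python
import time

def build_naive(size):
    # Appending one character at a time may re-copy the buffer
    # repeatedly, which can scale badly as `size` grows.
    body = ""
    for _ in range(size):
        body += "x"
    return body

def build_bulk(size):
    # Allocating the whole padding in one step is linear in size.
    return "x" * size

for size in (1000, 20000):
    t0 = time.perf_counter()
    build_naive(size)
    t_naive = time.perf_counter() - t0
    t0 = time.perf_counter()
    build_bulk(size)
    t_bulk = time.perf_counter() - t0
    print(size, t_naive, t_bulk)
```

If the naive variant's time grows much faster than the document size between the two runs, pre-building or bulk-allocating the padding is the easy fix.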

Licensed under: CC-BY-SA with attribution