Question

I'm testing my Apache & PHP setup (default configuration on Ubuntu) with the `ab` tool. With 2 concurrent connections I get fairly satisfactory results:

ab -k -n 1000 -c 2 http://localserver/page.php

Requests per second:    184.81 [#/sec] (mean)
Time per request:       10.822 [ms] (mean)
Time per request:       5.411 [ms] (mean, across all concurrent requests)

Given it's a virtual machine with low memory, it's okay. Now I want to test a more realistic scenario: Requests spread among 100 users (read: connections) connected at the same time:

ab -k -n 1000 -c 100 http://localserver/page.php

Requests per second:    60.22 [#/sec] (mean)
Time per request:       1660.678 [ms] (mean)
Time per request:       16.607 [ms] (mean, across all concurrent requests)

This is much worse. While the overall requests per second only fell by a factor of three (184 to 60 #/sec), the time per request from a user's perspective rose by two orders of magnitude (from about 10 ms to over 1.6 seconds on average). The longest request took over 8 seconds, and manually connecting to the local server with a web browser took almost 10 seconds during the tests.

What could be the cause and how can I optimize the concurrency performance to an acceptable level?

(I'm using the default configuration as shipped with Ubuntu Linux Server.)

The solution

As a start, you need to look at the amount of memory each script may consume (the PHP `memory_limit`) and then divide the VM's memory by that value. The result is roughly the number of connections you can handle at the same time without running out of memory and pushing the server into thrashing.

You will likely end up with a very low number of connections. So you need to either:

  • increase memory
  • decrease memory_limit
  • make each connection finish faster
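As a rough sketch of that calculation (the numbers below are hypothetical; read the real values from `free -m` and from `memory_limit` in your php.ini):

```shell
# Hypothetical figures; substitute your own.
vm_ram_mb=512        # total VM RAM, e.g. from `free -m`
memory_limit_mb=128  # PHP memory_limit from php.ini

# Worst-case number of PHP requests that fit in RAM at once:
echo $(( vm_ram_mb / memory_limit_mb ))   # prints 4
```

With Apache's prefork MPM you would then cap MaxClients (MaxRequestWorkers in Apache 2.4) near that number, so that excess requests queue briefly instead of driving the machine into swap.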

The next step would be to see if any database queries take longer than expected. I usually start with the MySQL slow query log (mysql-slow.log), looking at queries that take longer than 0.5 s. Also eliminate queries that don't use indexes if you can.
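To capture such queries, the slow query log can be enabled in my.cnf; a sketch assuming MySQL 5.1 or later (the log file path is an example):

```
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
long_query_time               = 0.5   # seconds; matches the 0.5 s threshold above
log_queries_not_using_indexes = 1
```

After restarting MySQL, re-run the ab benchmark and inspect the log for the offending queries.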

After that, install a monitoring tool such as collectd and check whether enough CPU is available.
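Before setting up collectd, a quick one-off check works too. The sketch below samples Linux's aggregate CPU counters in /proc/stat over one second while the ab benchmark runs:

```shell
# Read the first line of /proc/stat (fields: user, nice, system, idle, ...)
# twice, one second apart, and compute the busy percentage in between.
read _ u1 n1 s1 i1 _rest < /proc/stat
sleep 1
read _ u2 n2 s2 i2 _rest < /proc/stat

busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
echo "CPU busy: $(( 100 * busy / total ))%"
```

If the figure stays well under 100% while latency is high, the bottleneck is more likely memory/swap or the database than raw CPU.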

From a business perspective, it depends on whether this is a new website/system or something existing. If it is new and growth is dramatic, you need to overspend on hardware for a while; a system that doesn't work or crashes under traffic erodes trust in a business very fast. On top of that, it's usually not worth optimising heavily while the hosting bill is under $1000 per month. If that is not affordable, you may need to revisit your business model.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow