Question

I am testing my new site on the following setup:

* 2 m1.large EC2 instances as web servers behind an Elastic Load Balancer
* both web servers have memcached/APC/nginx/PHP-FPM installed
* 1 m1.large EC2 instance for MongoDB

When I run this from a remote server:

ab -n 100 http://beta.domain.com/

I get the following results:

Server Software:        nginx/1.1.19
Server Hostname:        beta.domain.com
Server Port:            80

Document Path:          /
Document Length:        50817 bytes

Concurrency Level:      1
Time taken for tests:   127.032 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      5117100 bytes
HTML transferred:       5081700 bytes
Requests per second:    0.79 [#/sec] (mean)
Time per request:       1270.322 [ms] (mean)
Time per request:       1270.322 [ms] (mean, across all concurrent requests)
Transfer rate:          39.34 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       21   42 100.4     26    1018
Processing:  1119 1228  69.4   1218    1488
Waiting:      941 1016  41.8   1015    1159
Total:       1144 1270 121.6   1246    2199

Percentage of the requests served within a certain time (ms)
  50%   1246
  66%   1271
  75%   1281
  80%   1295
  90%   1364
  95%   1483
  98%   1547
  99%   2199
 100%   2199 (longest request)

APC hit rate is around 98%. I am also watching the memcached log file while I run this test, and I can see that ab is hitting both servers and getting the values from memcached (all hits, no misses). But the RPS value is still 0.79. Isn't this very low? Am I missing the point here?

EDIT

Also, all static content (CSS, JS, images) is served from Amazon S3 in gzipped form with a 1-year expiration date.
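For reference, the assets are pushed up roughly like the following sketch (the bucket, file names, and the use of the AWS CLI here are just placeholders for whatever upload tooling is in play); the point is the gzip Content-Encoding plus the long Cache-Control header:

# pre-compress the asset, then upload it with gzip encoding and a 1-year cache header
gzip -9 -c app.js > app.js.gz
aws s3 cp app.js.gz s3://beta-domain-assets/js/app.js \
    --content-encoding gzip \
    --content-type application/javascript \
    --cache-control "public, max-age=31536000"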

EDIT 2

I ran the same test with the -c 50 parameter and here is the result:

Concurrency Level:      50
Time taken for tests:   49.332 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      5118200 bytes
HTML transferred:       5082800 bytes
Requests per second:    2.03 [#/sec] (mean)
Time per request:       24666.145 [ms] (mean)
Time per request:       493.323 [ms] (mean, across all concurrent requests)
Transfer rate:          101.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       20   33  10.0     30      58
Processing:  7656 20521 6139.1  21495   29328
Waiting:     7538 20415 6131.9  21407   29243
Total:       7687 20554 6140.3  21540   29380

Percentage of the requests served within a certain time (ms)
  50%  21540
  66%  23255
  75%  25744
  80%  26204
  90%  27414
  95%  28098
  98%  29259
  99%  29380
 100%  29380 (longest request)

Load Generation

I guess ab does that, doesn't it? Sorry, I am quite new to benchmarking :) I also added -c 50 and ran the test again. See the results above.

Testing page

This page lists 20 products with images, descriptions, etc. It does some backend calculations, but the results are all cached in memcached, so it never actually hits the database (MongoDB). I can see this from the memcached log file.
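Besides the log file, a quick way to double-check the hit/miss behaviour is to query memcached's stats counters over its text protocol (host and port below are placeholders for each web server's memcached instance):

# dump memcached's get counters to confirm hits vs. misses
echo stats | nc -w 1 localhost 11211 | egrep 'cmd_get|get_hits|get_misses'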

What else

Here is the result of vmstat on one of the servers during the ab test

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 7161224  21028 199896    0    0    26    16   29   30  1  0 99  0

and iostat

Linux 3.2.0-29-virtual  10/02/2012  _x86_64_    (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.67    0.00    0.31    0.15    0.34   98.54

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvdap1            4.25        48.00        32.40     178001     120160
xvdb              0.17         0.54         0.00       1993          4

Solution

Whether or not that is low depends on a number of factors.

Load Generation

How are you generating load? If you send one request at a time, that value may not be unreasonable. How many concurrent requests are you sending?
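For example, with ab you control concurrency with the -c flag; running the same URL at increasing concurrency levels shows how throughput scales (the request counts below are just examples):

# 1000 requests at 10, then 50, concurrent connections
ab -n 1000 -c 10 http://beta.domain.com/
ab -n 1000 -c 50 http://beta.domain.com/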

What does the tested page do?

How heavy is the processing on that page? How long does it take to fully load the page without any load on the system?
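A rough way to measure that unloaded baseline is a single timed request, for example with curl's timing variables (URL as in your test):

# time one request with no other load on the system
curl -s -o /dev/null \
    -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
    http://beta.domain.com/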

What else to check

Assuming both of the above don't turn up issues, look and see what resources are being taxed on the servers.

Use vmstat and iostat to figure out where you are losing performance. Is the CPU pegged? How long is the disk queue? Are you using all available memory?
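For example, sample both once per second in a separate terminal while ab is running (the -x flag adds per-device queue depth and utilization columns such as avgqu-sz and %util):

# sample system and disk activity every second during the benchmark
vmstat 1
iostat -x 1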

AWS Specific

If you have any significant IO you will find that EBS storage is not exactly fast. The usual solution to that problem has been to stripe multiple EBS volumes into a software RAID array:

http://www.mysqlperformanceblog.com/2009/08/06/ec2ebs-single-and-raid-volumes-io-bencmark/
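A minimal sketch of that approach with mdadm, assuming four extra EBS volumes are already attached (device names and the mount point are placeholders; note that RAID 0 striping has no redundancy, so back up accordingly):

# stripe four EBS volumes into a RAID 0 array, then create a filesystem and mount it
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
mkfs.ext4 /dev/md0
mkdir -p /data
mount /dev/md0 /data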

Recently AWS released a new feature called Provisioned IOPS that guarantees a certain amount of disk IO from an EBS volume. It is more expensive, but requires no special configuration beyond selecting the appropriate option when creating your EBS volume.
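A sketch of creating and attaching such a volume, assuming the AWS CLI (the zone, size, IOPS value, and IDs are placeholders to adjust for your account):

# create a Provisioned IOPS (io1) volume and attach it to an instance
aws ec2 create-volume --availability-zone us-east-1a --size 100 \
    --volume-type io1 --iops 1000
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf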

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow