I have a 64-bit Linux machine (CentOS 5.5) with a 2.83 GHz Q9550, 6 GB of RAM and a single 500 GB SATA drive.

From this machine I only serve thumbnails, most around 10 KB in size, and at this point there are about 7 million thumbnails on the server. They are laid out in a /25/25/25/25 nested folder structure, which was recommended to me.
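A bucketed layout like that is typically derived by hashing the file name into nested directories so no single directory holds millions of entries. A minimal sketch of the idea (the hash choice and helper name here are illustrative assumptions, not necessarily the scheme actually in use):

```python
import hashlib

def thumb_path(name, buckets=25, depth=4):
    """Map a file name to nested bucket directories, e.g. '3/17/24/9/cat.jpg'.

    The bucket count (25) and depth (4) mirror the /25/25/25/25 layout;
    using MD5 as the hash is an assumption for illustration.
    """
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    parts = []
    for _ in range(depth):
        h, bucket = divmod(h, buckets)  # peel off one bucket index per level
        parts.append(str(bucket))
    return "/".join(parts + [name])

print(thumb_path("cat.jpg"))
```

Because the path is a pure function of the name, the web server can locate any thumbnail without a database lookup.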

On average, the nginx status report shows that I'm serving about 300 to 400 active connections.

EXAMPLE:

Active connections: 297 
server accepts handled requests
 1975808 1975808 3457352 
Reading: 39 Writing: 8 Waiting: 250 

Now the problem is that this machine is struggling, and it is getting slower as my site gets busier. The load average is always around 8 to 9.

I noticed iostat showing the disk at essentially 100% utilization:

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.20     1.40 99.80 31.14  1221.56   255.49    11.28   114.14  831.81   7.62  99.84

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.20     0.60 100.80 24.00  1192.00   203.20    11.18   113.77  775.42   8.02 100.04

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.20   314.80 44.80 130.00   598.40  3547.20    23.72   113.76  937.18   5.72 100.02

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     5.40 56.20 110.80   660.80   937.60     9.57   112.37  518.01   5.99 100.04

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.60    12.40 82.80 41.60  1008.00   432.00    11.58   113.66  852.51   8.04 100.04

Below you can see some of my nginx config settings:

worker_processes  6;
worker_connections  4096;

http {
        include                 mime.types;
        default_type            application/octet-stream;
        #access_log             logs/access.log  main;
        sendfile                on;
        #tcp_nopush             on;
        keepalive_timeout       4;
        gzip                    on;
        gzip_http_version       1.1;
        gzip_vary               on;
        gzip_comp_level         2;
        gzip_proxied            any;
        gzip_types              text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers            16 8k;
}

My question is: apart from moving to a RAID setup, and possibly SSDs, is there anything I can tweak or tune to get more out of this machine? I have a feeling a server like mine should be able to handle far more than 300 to 400 active nginx connections.


Solution

Along with the noatime option @nos mentioned, you might want to consider the following:

  • in nginx, set access_log off; -- commenting the directive out doesn't disable logging; you need to turn it off explicitly.
  • reduce the number of worker processes. nginx doesn't benefit from more than one worker per CPU core.
  • tcp_nodelay on; will help nginx serve files quicker on "live" connections.
  • try playing with tcp_nopush. I've found it best to switch it on, but YMMV.
  • set if_modified_since to before; this lets nginx answer conditional requests with 304 Not Modified instead of re-sending the content.
  • experiment with the open_file_cache settings
  • reduce send_timeout so nginx can free up stale client connections sooner.
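Taken together, the nginx-side suggestions above might look something like this in the http block (the open_file_cache numbers and timeouts are illustrative starting points, not tuned values):

```nginx
http {
    access_log        off;         # disable entirely, don't just comment out
    sendfile          on;
    tcp_nopush        on;          # fill packets before sending
    tcp_nodelay       on;
    keepalive_timeout 4;
    send_timeout      10s;         # drop stale clients sooner
    if_modified_since before;      # answer conditional requests with 304

    # cache open file descriptors for hot thumbnails; sizes are guesses
    open_file_cache          max=10000 inactive=30s;
    open_file_cache_valid    60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
}
```

With 7 million small files, the open_file_cache in particular avoids repeated open()/stat() calls for the most popular thumbnails.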

As for the rest of your system:

  • hdparm settings. there are lots of tutorials online to help; hdparm tweaks will get the best out of your disks.
  • tweak your socket performance settings
  • recompile the kernel with a reduced timer frequency. the default is 1000 Hz, which is great for desktop machines playing video but isn't all that good for servers, where a value of 100-250 Hz might be more appropriate
  • disable services like cups and bluetooth
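For the noatime change and the socket tuning, the relevant configuration might look like this (the device, mountpoint, and sysctl values below are placeholder assumptions to be verified against your own setup):

```
# /etc/fstab -- remount the thumbnail filesystem without access-time updates
# (device and mountpoint are placeholders for your actual layout)
/dev/sda3  /var/www  ext3  defaults,noatime,nodiratime  0 2

# /etc/sysctl.conf -- example socket-tuning knobs; these are starting
# points, not recommendations; apply with `sysctl -p` and measure
net.core.somaxconn = 4096
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_syn_backlog = 8192
```

noatime alone can matter a lot here: without it, every thumbnail read also triggers a metadata write to an already saturated disk.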

However, I believe the best performance boost would be putting Varnish in front of your nginx server and using it rather than nginx for serving static files. It will keep "hot" files in memory better than nginx can, so that there's little/no disk use for your most-served content.
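A minimal sketch of that arrangement, assuming Varnish listens on port 80 and nginx is moved to 8080 (ports, TTL, and the Varnish 2.x-era VCL syntax are all assumptions to adapt):

```vcl
backend nginx {
    .host = "127.0.0.1";
    .port = "8080";    # nginx moved off port 80
}

sub vcl_fetch {
    # keep thumbnails in Varnish's memory cache; the 1h TTL is illustrative
    if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
        set beresp.ttl = 1h;
    }
}
```

Once the hot thumbnails live in Varnish's cache, repeat requests never touch the disk at all.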

The main thing however is to monitor EVERYTHING -- don't go with your gut, know what your server is doing and where your bottlenecks are.

Additional tips

Of the 7 million files, how many are frequently accessed? At 10 KB apiece, you'll only be able to hold at most about 500,000 files in the file system cache, leaving roughly 1 GB of RAM for the running programs and for file system buffers (which store directory information).

If you can't increase the RAM enough to hold your frequently accessed files, then you'll need a faster disk setup with lower latency. Moving to a 15K RPM drive will roughly double your disk I/O capacity, but moving to an SSD is the best bet for your situation.
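The sizing argument above can be checked with quick arithmetic (the 5 GB available for page cache out of 6 GB total is the answer's assumption):

```python
files = 7_000_000        # thumbnails on disk
size = 10 * 1024         # ~10 KB each
cache_ram = 5 * 1024**3  # ~5 GB of the 6 GB assumed free for page cache

working_set = files * size      # total thumbnail data on disk
cached_files = cache_ram // size  # files that fit in the cache

print(f"total thumbnail data: {working_set / 1024**3:.1f} GB")
print(f"files that fit in cache: {cached_files:,}")
```

So only about one in fourteen thumbnails can be cache-resident at once; unless access is heavily skewed toward a small hot set, the single SATA drive has to absorb the misses.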

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow