The number you should consider is the Avg error rate. Your WP + Nginx + Memcached configuration looks quite reasonable, so in my opinion it is a good choice.
You could also increase the -m parameter of memcached to match half of your RAM.
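As a minimal sketch of the sizing suggestion above (assuming Linux, where total RAM can be read from /proc/meminfo; the function name half_ram_mb is mine, not part of any tool):

```python
# Sketch: compute a memcached -m value equal to half of total RAM (Linux only).
# -m takes a value in megabytes; /proc/meminfo reports MemTotal in kB.

def half_ram_mb(meminfo_path="/proc/meminfo"):
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                total_kb = int(line.split()[1])  # second field is the kB value
                return total_kb // 1024 // 2     # convert to MB, then halve
    raise RuntimeError("MemTotal not found in /proc/meminfo")

print(f"memcached -m {half_ram_mb()}")
```

You would then put the printed value in your memcached startup options (or init script) rather than computing it at runtime.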
BUT: memcached does not guarantee that the data will stay in memory, so you have to be prepared for a cache-miss storm. One interesting approach to avoid a miss storm is to set the expiration time with a random offset, say 10 + [0..10] minutes, meaning some items are stored for 10 minutes and others for up to 20 (the goal is that not all items expire at the same time).
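The randomized-expiry idea above can be sketched in a few lines (the jittered_ttl helper is a name I made up; you would pass its result as the expiry argument of whatever memcached client you use):

```python
import random

BASE_TTL = 10 * 60    # 10 minutes, in seconds
JITTER_MAX = 10 * 60  # up to 10 extra minutes

def jittered_ttl():
    """Expiry between 10 and 20 minutes, so cached items don't all expire at once."""
    return BASE_TTL + random.randint(0, JITTER_MAX)

# usage with a hypothetical client: client.set(key, value, expire=jittered_ttl())
print(jittered_ttl())
```

Spreading expirations this way turns one big stampede of cache misses into a trickle over the jitter window.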
Also, no matter how much memory you allocate for memcached, it will only use the amount it needs, i.e. it allocates only the memory actually in use. With the -k option, however (which is disabled in your config), the entire memory is reserved when memcached starts, so it always allocates the whole amount, whether it needs it or not.
This number of 451 connections can vary; it depends. It is always a good idea to look at the averages when benchmarking: it is better to have a 0% Avg error rate with 451 served clients than a 65% Avg error rate with 8200+ served clients.
However, to offload some more resources, you can add an extra caching layer to WordPress itself; there are plenty of plugins for that (I personally wrote one for this purpose).
Regarding the nginx configuration, you can also tune a few parameters there:
worker_rlimit_nofile 100000;
worker_connections 4000;
# optimized to serve many clients with each thread, essential for linux
use epoll;
# accept as many connections as possible, may flood worker connections if set too low
multi_accept on;
# cache information about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# to boost IO on HDD we can disable access logs
access_log off;
# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;
# send headers in one piece, it's better than sending them one by one
tcp_nopush on;
# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;
# number of requests client can make over keep-alive -- for testing
keepalive_requests 100000;
# allow the server to close connections on non-responding clients, this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 10;
# if client stops responding, free up memory -- default 60
send_timeout 2;
# reduce the data that needs to be sent over network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";