Question

TL;DR - should a simple cache cluster for session storage (using memcache or redis) live on the app's servers (i.e. along with nginx and php) or on its own separate ec2 instance (like elasticache or a customized ec2 instance)?

I'm in the process of using Amazon OpsWorks to set up my web app's infrastructure. I am leaning toward implementing the session cache through memcache instances installed on the app layer itself rather than as its own ec2 instance. For instance:

                 [ Load Balancer ]
                /        |        \
[ App Layer 1 ] – [ App Layer 2 ] – [ App Layer 3 ]  * /w memcache or redis

vs.

                 [ Load Balancer ]
               /         |         \
[ App Layer 1 ]   [ App Layer 2 ]   [ App Layer 3 ]
               \         |         /
                [ Cache Server(s) ]   * ElastiCache or custom ec2 /w memcache or redis

What are the pros and cons here? To me the latter solution seems unnecessary, though I can see how a high-traffic website with a really large session cache might need this.

Is there a reason I may not want to run redis or memcache alongside my nginx/php app server stack? Does it perhaps make auto-scaling or performance monitoring more difficult?


Solution 2

The main reason to run your cache on your app servers is cost, but the same reasoning applies here as to why you don't put your DB on the same server as your web or app server.

If you have a small-scale application you can probably squeeze all your resources onto the same machine, but this hurts your ability to recover from any type of failure (and "everything fails"): you will either lose data, or a failure will take part of your service down for some of your users.

Once you have enough app servers, the per-server cost of a separate cache cluster becomes small.

From an architectural point of view, when scale and high availability are important, you are better off with many small components than with a few complex ones.

For example, if you want to add another app server to your fleet as you gain more users, it will be faster to bring up, as there are fewer software components to install, and it can immediately serve existing users because their sessions are stored centrally. If you want to remove an app server (or when you lose one), the users that were served by that server can easily migrate to the other servers, as their session/state lives in the cache cluster.
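To make sessions follow users across app servers like this, PHP can be pointed at a central cache instead of local files. A minimal sketch, assuming the phpredis extension is installed and using a placeholder ElastiCache endpoint:

```ini
; php.ini — store PHP sessions in a central Redis rather than on local disk
; (requires the phpredis extension; the hostname below is a placeholder,
;  not a real endpoint)
session.save_handler = redis
session.save_path = "tcp://my-session-cache.example.cache.amazonaws.com:6379"
```

With this in place, any app server behind the load balancer can pick up any user's session, so adding or removing instances doesn't strand session data.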

OTHER TIPS

The two main disadvantages of the 1st solution are:

  • You'll be forced into session stickiness.
  • You're coupling the app's and the cache's scaling events.

While these may be a non-issue in your case, I generally try to avoid them whenever possible, because they tend to complicate matters in the long run.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow