Question

I have a web service running on 3 servers. It works as follows:

- Server 1 receives user requests, stores them in a local DB, and does some work on them.
- Servers 2 and 3 are identical and are the main workhorses: based on the request, they fetch information from the internet and return it.
- Server 1 calls server 2/3 via an HTTP request over the LAN (there is a PHP script on servers 2 and 3, so server 1 calls it as http://localip/script.php).
- Server 1 alternately calls server 2 or 3 (this is done to distribute the load).

Every query on server 2/3 takes approximately 8 seconds to process. When I installed a monitoring tool on all the servers, it reported that server 1 is under too much load (it showed number of processes > critical limit).

Is this not the right way to balance load? How can I reduce the load on server 1?

Solution

Server 1 is responding to incoming requests from your users. So if you have determined that it is the bottleneck, then you need to focus your attention there.

If servers 2 and 3 are only performing queries to the internet, then they probably aren't doing much work processor- or disk-wise.

If you have two servers configured to do a reasonably easy task, and your primary user-facing server is being overloaded, it might be better simply to run all three servers as user-facing.

From what you described, it sounds like server 1 is blocking on the responses from servers 2 and 3, which means it potentially has to hold many connections open. If servers 2 and 3 aren't really doing that much work, because their task is just internet queries, you would probably be better served by merging the user-facing code and the query engine onto each server and distributing the user load across all three servers (see the sketch below).

In this way each server has fewer open user connections. And the query engine still isn't hogging resources (if it is just web requests), so it's not negatively impacting user performance.
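
To make that concrete, here is a minimal sketch of what the merged entry point could look like on each server. The file names and the helpers store_request() and fetch_from_internet() are hypothetical stand-ins for your existing DB code and the logic currently in script.php:

    <?php
    // Hypothetical merged entry point, run on all three servers.
    // Instead of forwarding the request to server 2/3 over HTTP and blocking
    // for ~8 seconds on the response, each server runs the query code locally.

    require_once 'query_engine.php'; // hypothetical: the logic currently in script.php
    require_once 'storage.php';      // hypothetical: the local DB code from server 1

    $query = isset($_GET['q']) ? $_GET['q'] : '';

    store_request($query);                 // store the request locally, as server 1 does today
    $result = fetch_from_internet($query); // the slow (~8 s) internet lookup, done in-process

    echo $result;

User requests would then be spread across all three servers (by DNS round robin or by the load balancer discussed below) rather than funnelled through server 1.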

Just some thoughts for you. It's not possible to be totally definitive without knowing more about your application.

Additional comments

As mentioned in the comments, you should look into a proper load balancer; a good hardware load balancer is the first recommendation. Where are you running your servers? If they are in a cloud datacenter such as Amazon EC2, Rackspace, etc., then load-balancing services are readily available to you.

You can also use a software load balancer; Apache even provides this functionality:

http://httpd.apache.org/docs/current/mod/mod_proxy_balancer.html
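
As a rough illustration of what that looks like (the backend addresses below are placeholders, not your actual LAN IPs), a balancer definition on the front-end Apache could be something like:

    # Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and mod_lbmethod_byrequests
    <Proxy "balancer://workers">
        # Hypothetical LAN addresses of servers 2 and 3
        BalancerMember "http://192.168.0.2"
        BalancerMember "http://192.168.0.3"
        ProxySet lbmethod=byrequests
    </Proxy>

    # Forward the query script to whichever worker is chosen
    ProxyPass        "/script.php" "balancer://workers/script.php"
    ProxyPassReverse "/script.php" "balancer://workers/script.php"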

Even with your current configuration it would be possible to put Apache in front of one of the servers and then tune it so that the server running Apache (doing the load balancing) gets a lower percentage of the traffic, to offset the expense of the balancing work. In this case there is no hardware change. You do have an obvious single point of failure, but that is not dissimilar to what you have now; a proper load balancer generally alleviates the single-point-of-failure problem your current architecture has.
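
A sketch of that weighting, again with placeholder addresses, where the machine hosting Apache gives itself a smaller share via loadfactor (this assumes the local backend instance listens on a separate port such as 8080 so it doesn't clash with the front end on port 80):

    <Proxy "balancer://workers">
        # The server running Apache itself takes half as many requests
        BalancerMember "http://127.0.0.1:8080" loadfactor=1
        BalancerMember "http://192.168.0.2"    loadfactor=2
        BalancerMember "http://192.168.0.3"    loadfactor=2
        ProxySet lbmethod=byrequests
    </Proxy>
    ProxyPass        "/" "balancer://workers/"
    ProxyPassReverse "/" "balancer://workers/"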

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow