Question

Most solutions I've read here for supporting subdomain-per-user at the DNS level point everything to one IP using a wildcard *.domain.com record.

It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA, and the next 1000 registered users to serverB? This is our preferred approach for keeping software and hardware clustering costs down.

(Diagram: http://learn.iis.net/file.axd?i=1101, quoted from the MS IIS site.)

The most logical solution seems to be one A record per subdomain in the zone data files. BIND doesn't seem to impose a size limit on zone data files; they are restricted only by available memory.
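As a sketch, a per-user zone data file would look something like this (names, IPs, and TTLs here are purely illustrative, not taken from the question):

```
; One A record per registered user, grouped by server.
$TTL 3600
$ORIGIN domain.com.
user0001    IN  A   192.0.2.10   ; serverA
user0002    IN  A   192.0.2.10   ; serverA
; ... one line per user ...
user1001    IN  A   192.0.2.20   ; serverB
```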

However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A record and restarting the DNS server.

Is the performance of restarting the DNS server something we should worry about?
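For what it's worth, BIND doesn't require a full restart to pick up a new record: `rndc reload` re-reads zone files, and if dynamic updates are enabled on the zone, a record can be added live with `nsupdate` and no reload at all. A rough sketch, assuming a TSIG key and `allow-update` are already configured (key path, hostnames, and IP are illustrative):

```shell
# Add an A record to a running BIND via dynamic update -- no restart needed.
nsupdate -k /etc/bind/ddns.key <<'EOF'
server ns1.domain.com
zone domain.com
update add user1001.domain.com. 3600 A 192.0.2.20
send
EOF
```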

Thank you in advance.

UPDATE:

It seems most of you suggest that I use a reverse-proxy setup instead:

(Diagram: http://learn.iis.net/file.axd?i=1102.)

(ARR is IIS7's reverse proxy solution)

However, here are the CONS I can see:

  1. Single point of failure.
  2. Cannot strategically set up servers in different locations based on IP geolocation.

Solution

The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.

Note that this is not just a TCP-layer load balancer: there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the request to. You can easily do it with Apache running on a low-spec server with a suitable configuration.
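One possible Apache sketch of such host-based routing, using mod_rewrite with mod_proxy (the map file path, domain, and back-end hostnames are assumptions for illustration):

```apache
# Route each subdomain to its assigned back-end server.
# /etc/apache2/backends.map is a hypothetical text file with lines like:
#   user0001  serverA.internal
#   user1001  serverB.internal
RewriteEngine On
RewriteMap backends txt:/etc/apache2/backends.map
RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
RewriteRule ^/(.*)$ http://${backends:%1|serverA.internal}/$1 [P]
ProxyPassReverse / http://serverA.internal/
```

The `|serverA.internal` part is the map's default, so unknown subdomains still land somewhere rather than erroring out.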

The proxy ensures that each user's session always goes to the right back-end server, so most session-handling methods will just keep working.

Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).

I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.

If using Apache for the back-end, see this post too for info about how to configure it.

OTHER TIPS

Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of what client they are.

While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
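A minimal sketch of that idea, determining X from the Host header rather than rewriting URLs (the function name and base domain are illustrative, not from the question):

```python
from typing import Optional

def account_from_host(host: str, base_domain: str = "domain.com") -> Optional[str]:
    """Return the subdomain part of X.domain.com, or None if there isn't one."""
    host = host.split(":", 1)[0].lower()   # strip any :port suffix
    suffix = "." + base_domain
    if host.endswith(suffix):
        sub = host[: -len(suffix)]
        # Reject empty, "www", and nested subdomains.
        if sub and sub != "www" and "." not in sub:
            return sub
    return None
```

Any web framework exposes the Host header, so this drops in wherever the request is first handled.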

EDIT: Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client stored with the broker. Your front end can be load balanced, and you can then pull from the file/db servers based on who the client is.
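A toy sketch of that broker, filling servers in blocks of 1000 as described in the question (in practice the mapping would live in a shared database; the dict and hostnames here are illustrative):

```python
ASSIGNMENTS = {}          # subdomain -> assigned back-end host
BACKENDS = ["serverA.internal", "serverB.internal"]
USERS_PER_SERVER = 1000   # first 1000 users -> serverA, next 1000 -> serverB

def assign(subdomain: str) -> str:
    """Return the back end for a client, assigning new clients in blocks."""
    if subdomain not in ASSIGNMENTS:
        block = len(ASSIGNMENTS) // USERS_PER_SERVER
        ASSIGNMENTS[subdomain] = BACKENDS[min(block, len(BACKENDS) - 1)]
    return ASSIGNMENTS[subdomain]
```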

If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10,000+ entries, though (it would surprise me if it didn't).
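For illustration, the tinydns data file uses one `+fqdn:ip:ttl` line per name; after editing it you run `tinydns-data` to rebuild the binary `data.cdb`, which the running daemon picks up without a restart (names and IPs below are illustrative):

```
+user0001.domain.com:192.0.2.10:3600
+user0002.domain.com:192.0.2.10:3600
+user1001.domain.com:192.0.2.20:3600
```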

http://cr.yp.to/djbdns.html

Licensed under: CC-BY-SA with attribution