Question

I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.

Solution

I would suggest that what you are looking for is a reverse proxy, which typically includes, among its features, the ability to understand portions of the request at layer 7 and to direct the incoming traffic to the appropriate back-end IP/port combination (or set of them) based on what's observed in the request headers or other aspects of the request.

Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
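
For instance, a rough Nginx sketch of the same idea, assuming a hypothetical hostname app1.example.com and an app listening locally on port 3001, could look like this:

server {
    listen 80;
    server_name app1.example.com;

    location / {
        # relay the request to the app and preserve the original Host header
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}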

Here's one way (among several alternatives) that this might look in an haproxy config file:

frontend main
    bind *:80
    mode http
    # route on the Host header (layer 7); -i makes the match case-insensitive
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    mode http
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    mode http
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1

This says: listen on port 80 of all local IP addresses; if the "Host" header matches "app1.example.com" (-i means case-insensitive), use the "app1svr" backend configuration and send the request to that server; do the same for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, with no match, HAProxy returns "503 Service Unavailable," which is also what it returns if the matched back-end isn't currently running.
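
As a sketch, the fallback is a single extra line in the frontend (the choice of app1svr as the default here is arbitrary):

frontend main
    # ... bind and use_backend lines as above ...
    default_backend app1svr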

You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
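
A minimal sketch of such a stats listener, with a placeholder port and placeholder credentials:

listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /haproxy-stats     # the stats page is served at this path
    stats auth admin:changeme    # basic-auth credentials; change these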

Since the browser is no longer connecting "directly" to the web server, you have to configure and rely on the X-Forwarded-For header inserted into the request to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account; but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
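
In HAProxy, for instance, that header is added with a single directive, which can live in the frontend, the backends, or a defaults section:

    # insert an X-Forwarded-For header carrying the client's IP into each request
    option forwardfor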

Note these examples do use "Anonymous ACLs," of which the documentation says:

It is generally not recommended to use this construct because it's a lot easier to leave errors in the configuration when written that way. However, for very simple rules matching only one source IP address for instance, it can make more sense to use them than to declare ACLs with random names.

http://cbonte.github.io/haproxy-dconv/configuration-1.4.html

For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
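
For comparison, the first routing rule above written with an explicitly named ACL would look something like this (the ACL name is arbitrary):

    acl host_app1 hdr(host) -i app1.example.com
    use_backend app1svr if host_app1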

I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but otherwise the approach is identical.

OTHER TIPS

Rerouting the way you describe depends on the OS your server is hosted on: on Linux you would use iptables, and on Windows you could use the Windows firewall. You would set all incoming connections on port 80 to be redirected to the desired port, e.g. 3000.
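
On Linux, for example, a single NAT rule is enough, assuming the app listens locally on port 3000:

# redirect incoming TCP connections on port 80 to local port 3000
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000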

But instead of different ports, you could use a different hostname for each service, like
app1.apps.com
app2.apps.com
and so on. You can configure this with DNS records at your DNS hosting for apps.com, pointing each name at the same server. IMHO this is the best solution, if I understood you correctly.
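
A hypothetical zone-file fragment for apps.com might look like this, with 203.0.113.10 standing in for the instance's public IP:

app1    IN  A   203.0.113.10
app2    IN  A   203.0.113.10

Both names resolve to the same machine; something on that machine (a reverse proxy or name-based virtual hosts) still has to tell the requests apart by hostname.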


Also, you can configure a single host to reroute to all other sites, like
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that, in this case, all traffic will pass through app1.com.

You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.

Each virtual host entry should look something like:

<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>
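
If an app actually runs its own server on a separate port, as in the question, the same virtual host can proxy to it instead of serving files directly. The following is only a sketch: it assumes mod_proxy and mod_proxy_http are enabled and that the app listens locally on port 3000.

<VirtualHost *:80>
    ServerName app1.example.com
    ProxyPreserveHost On
    # hand requests for this hostname to the app listening locally on port 3000
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
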
Licensed under: CC-BY-SA with attribution