Question

Background:

I run a two-sided platform with small businesses on one side and users on the other. I plan to introduce "websites" as a new part of our product.

The idea: a local business has a domain, and I want to "turn that into a website" based on one of our templates.

My setup:

Today we have a .NET application (ASP.NET MVC) running on an Azure Web App. We also have a working API.

This sits on top of a standard SQL database (hosted in Azure).

My question:

Imagine you have 100 customers (low traffic websites), and you have 100 custom domains (.com).

We want to give these 100 customers "the same" default website but with their own texts. These texts come from our API.

How would you architect and make this setup? Would you spin up 100 web apps? Run bigger pools in same web app? How would you map the domain to the correct web app?

My proposed solution

Being a bit clueless, I am thinking of this approach:

Make a new .NET application running as a single-instance web app. Then I would point their DNS (an A record) towards my application and somehow look at the requested domain. Based on that domain, I would fetch the data from the API.

Then I would probably make a new web app when this one starts to hit performance problems.

Does my solution make sense? Am I insane? Would this break?

(An answer with links to article(s) or even a book is of course a fine answer!)


Solution

First off, whether the customer website provider runs on the same server/app as your existing application or on a distinct one is a performance implementation detail, and the appropriate underlying software architecture should easily support either. A good software architecture supports a wide array of different system architectures.

A good software architecture is determined by the problem you are trying to solve; a good system architecture is determined by how that solution, and the hardware it runs on, are used in the real world. Worrying about whether it should be on the same server or different servers, without usage statistics, is in my opinion premature optimization.

Let's start off by modeling your problem domain for this particular feature. This belongs in its own independent class library, not in your web app:

public class ClientWebSite
{
    public string Url { get; set; }
    public string CustomerId { get; set; }
    public string TextBlurb1 { get; set; }
    public string TextBlurb2 { get; set; }
    // maybe a reference to your already existing customer object?
    // etc.
}

And, also in its own class library:

public interface IClientWebSiteRepository
{
    // NOTE: Just because we are using natural/string keys in our classes
    // does not mean we should not use surrogate keys in our DB!
    ClientWebSite GetWebsiteByUrl(string url);
    ClientWebSite GetWebsiteByClientId(string clientId);
    void SaveSite(ClientWebSite site);
}
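
If it helps to picture it, here is a minimal ADO.NET sketch of one possible implementation living in that class library. The SqlClientWebSiteRepository name, the ClientWebSites table and its columns are assumptions for illustration only; SiteId stands in for the surrogate key mentioned in the note above.

using System;
using System.Data.SqlClient;

public class SqlClientWebSiteRepository : IClientWebSiteRepository
{
    private readonly string _connectionString;

    public SqlClientWebSiteRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public ClientWebSite GetWebsiteByUrl(string url)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Url, CustomerId, TextBlurb1, TextBlurb2 FROM ClientWebSites WHERE Url = @Url",
            connection))
        {
            command.Parameters.AddWithValue("@Url", url);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return null; // no site registered for this domain

                return new ClientWebSite
                {
                    Url = reader.GetString(0),
                    CustomerId = reader.GetString(1),
                    TextBlurb1 = reader.GetString(2),
                    TextBlurb2 = reader.GetString(3)
                };
            }
        }
    }

    public ClientWebSite GetWebsiteByClientId(string clientId)
    {
        throw new NotImplementedException(); // analogous query keyed on CustomerId
    }

    public void SaveSite(ClientWebSite site)
    {
        throw new NotImplementedException(); // INSERT/UPDATE keyed on the surrogate SiteId
    }
}

An implementation that calls your existing API instead of SQL would satisfy the same interface, which is exactly what keeps the choice of system architecture open.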

With this structure, we can easily use any one of these three system architectures by simply having our client app reference our DAL and BLL projects:

  1. Same application
  2. Different Application, same server
  3. Different Application, different server

I would not fault you for using any of these system architectures, but I think the safest bet in your situation is to start with #2, then move to #3 only when usage demands it. But again, your usage statistics are the only thing that can determine this for sure.
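
To make "just reference the DAL and BLL projects" concrete, here is one possible composition root for the client-facing app. The SqlClientWebSiteRepository and the "ClientWebSites" connection string name are carried over from the sketch above and remain assumptions; an implementation that calls your existing API would slot in the same way.

using System.Configuration;

public static class CompositionRoot
{
    // Called from wherever you wire up controllers (Global.asax, a DI container
    // registration, or a custom controller factory).
    public static IClientWebSiteRepository CreateSiteRepository()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["ClientWebSites"].ConnectionString;
        return new SqlClientWebSiteRepository(connectionString);
    }
}

However the repository is instantiated, the controllers only ever see IClientWebSiteRepository, so moving between architectures #1, #2 and #3 never touches them.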

The client-facing application, with all the client DNS records pointing to it, would look something like this:

using System.Web.Mvc;

public class ClientWebSiteController : Controller
{
    // Resolved by your DI container or a custom controller factory.
    private readonly IClientWebSiteRepository siteRepo;
    private readonly ICustomerRepository customerRepo; // your existing customer repository

    public ClientWebSiteController(IClientWebSiteRepository siteRepo, ICustomerRepository customerRepo)
    {
        this.siteRepo = siteRepo;
        this.customerRepo = customerRepo;
    }

    public ActionResult Index()
    {
        // The requested host name identifies the client site.
        var model = new ClientWebsiteRequestModel();
        model.Url = Request.Url.Host;
        return Index(model);
    }

    // Private overload, so it is not exposed as a second action.
    private ActionResult Index(ClientWebsiteRequestModel model)
    {
        var website = siteRepo.GetWebsiteByUrl(model.Url);
        var viewModel = new ClientWebsiteViewModel();
        viewModel.Blurb1 = website.TextBlurb1;
        viewModel.CompanyName = customerRepo.Get(website.CustomerId).CompanyName;
        return View("ClientWebsite", viewModel);
    }
}
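
The ClientWebsiteRequestModel and ClientWebsiteViewModel types are referenced above but never shown; a minimal sketch of what they could contain:

public class ClientWebsiteRequestModel
{
    public string Url { get; set; }
}

public class ClientWebsiteViewModel
{
    public string CompanyName { get; set; }
    public string Blurb1 { get; set; }
    public string Blurb2 { get; set; }
}

On the Azure side, each of the 100 custom domains is added as a host name binding on this one Web App, so every request lands in this controller and Request.Url.Host determines which client's content to render.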
Licensed under: CC-BY-SA with attribution