Question

At work, we have a large internal application which has been under development for close to 2 years now; I've just recently joined the project and some of the architecture has me slightly perplexed, so I'm hoping someone here can provide some advice before I go out to ask the architects these same questions (so I can have an informed discussion with them).

My apologies if the below is a little long, I just want to try to paint a good picture of what the system is before I ask my question :)

  • The way the system is set up is that we have one main web application (asp.net, AngularJS) which mostly just aggregates data from various other services. So basically it is a host for an AngularJS application; there is literally one MVC controller that bootstraps the client side, and every other controller is a WebAPI controller.

  • Calls from the client side are handled by these controllers, which are always deployed to boxes that do nothing but host the web application. We currently have 4 such boxes.

  • However, the calls are then ultimately routed through to yet another set of WebAPI applications (typically these are per business area, such as security, customer data, product data, etc). All of these WebAPIs get deployed together to dedicated boxes as well; we also have 4 of these boxes.

  • With a single exception, these WebAPIs are not used by any other parts of our organisation.

  • Finally, these WebAPIs make yet another set of calls to the "back end" services, which are typically legacy ASMX or WCF services slapped on top of various ERP systems and data stores (over which we have no control).

  • Most of our application's business logic is in these WebAPIs, such as transforming legacy data, aggregating it, executing business rules, the usual type of thing.

What has me confused is what possible benefit there is in having such a separation between the web application and the WebAPIs that serve it. Since nobody else is using them, I don't see any scalability benefit (i.e. there's no point in adding another 4 API boxes to handle increased load, since increased load on the API servers must mean there is increased load on the web servers - therefore there has to be a 1:1 ratio of web server to API server).

  • I also don't see any benefit at all in having to make an extra HTTP call: Browser=>HTTP=>WebApp=>HTTP=>WebAPI=>HTTP=>Backend services. (That HTTP call between WebApp and WebAPI is my problem.)

  • So I am currently looking to push to have the current WebAPIs moved from separate solutions to separate projects within the WebApplication solution, with simple project references between them and a single deployment model. So they would ultimately just become class libraries.

  • Deployment-wise, this means we would have 8 "full stack" web boxes, as opposed to 4+4.
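The proposed change can be sketched roughly as below; the names (`ICustomerService`, `Customer`, and so on) are hypothetical stand-ins, not types from the actual system. The idea is that if the web application's controllers depend on an interface, it makes no difference to them whether the implementation behind it is an HTTP proxy to a WebAPI box or a class library referenced in-process:

```csharp
// Hypothetical sketch: the web app depends on an interface, so the
// WebAPI logic can move from a separate HTTP-hosted service to a
// referenced project without touching the controllers that consume it.
using System;

public record Customer(int Id, string Name);

public interface ICustomerService
{
    Customer GetCustomer(int id);
}

// Before: an HTTP proxy that serialised the request, called the WebAPI
// box, and deserialised the response (details elided here).
// After: the same business logic referenced as a class library and
// invoked in-process -- no HTTP hop, no DTO mapping at the boundary.
public class InProcessCustomerService : ICustomerService
{
    public Customer GetCustomer(int id)
    {
        // Business logic formerly hosted on the WebAPI boxes.
        return new Customer(id, "Example Customer");
    }
}

public static class Program
{
    public static void Main()
    {
        // A controller would normally receive this via DI;
        // a console Main stands in for it in this sketch.
        ICustomerService service = new InProcessCustomerService();
        Console.WriteLine(service.GetCustomer(42).Name);
    }
}
```

Keeping the interface around also leaves the door open to reintroducing a remote implementation later, should a genuine scaling or reuse need appear.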

The benefits I see of the new approach are

  • Increase in performance because there is one less cycle of serialisation/deserialisation between the Web application and the WebAPI servers
  • Tons of code that can be deleted (i.e. no need to maintain/test) in terms of DTOs and mappers at the outgoing and incoming boundaries of the Web Application and WebApi servers respectively.
  • Better ability to create meaningful automated integration tests, because I can simply mock the back-end services and avoid the messiness around the mid-tier HTTP jumps.
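The integration-testing benefit can be illustrated with a minimal sketch (all names here are hypothetical, not from the real system): once the WebAPI logic is a referenced library, a test only needs a fake at the legacy back-end boundary, with no mid-tier HTTP to stub out:

```csharp
// Hypothetical sketch: with the business logic in-process, an
// integration test fakes only the legacy ERP boundary.
using System;
using System.Diagnostics;

public interface ILegacyErpService          // wraps the ASMX/WCF back end
{
    decimal GetRawPrice(string sku);
}

public class PricingLogic                   // business rules under test
{
    private readonly ILegacyErpService _erp;
    public PricingLogic(ILegacyErpService erp) => _erp = erp;

    // Example rule (invented for this sketch): 10% internal markup.
    public decimal GetQuotedPrice(string sku) => _erp.GetRawPrice(sku) * 1.10m;
}

public class FakeErpService : ILegacyErpService
{
    public decimal GetRawPrice(string sku) => 100m;   // canned test data
}

public static class Program
{
    public static void Main()
    {
        var logic = new PricingLogic(new FakeErpService());
        Debug.Assert(logic.GetQuotedPrice("SKU-1") == 110m);
        Console.WriteLine("pricing rule verified");
    }
}
```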

So the question is: am I wrong? Have I missed some fundamental "magic" of having separated WebApplication and WebAPI boxes?

I have researched some N-tier architecture material but can't seem to find anything in it that gives a concrete benefit for our situation (since scalability isn't an issue as far as I can tell, and this is an internal app, so security in terms of the WebAPI applications isn't an issue).

And also, what would I be losing in terms of benefits if I were to re-organise the system to my proposed setup?

The solution

One reason is security - if (haha! when) a hacker gains access to your front-end web server, he gets access to everything it has access to. If you've placed your middle tier in the web server, then he has access to everything it has - i.e. your DB - and the next thing you know, he's run "select * from users" on your DB and taken the result away for offline password cracking.

Another reason is scaling - the web tier, where the pages are constructed and XML is processed and all that, takes a lot more resources than the middle tier, which is often just an efficient means of getting data from the DB to the web tier. Not to mention serving all that static data that resides (or is cached) on the web server. Adding more web servers is a simple task once you've got past 1. There shouldn't be a 1:1 ratio between web and logic tiers - I've seen 8:1 before now (and a 4:1 ratio between logic tier and DB). It depends what your tiers do, however, and how much caching goes on in them.

Websites don't really care about single-user performance, as they're built to scale; it doesn't matter that an extra call slows things down a little if it means you can serve more users.

Another reason it can be good to have these layers is that it forces more discipline in development, where an API is developed (and easily tested, as it is standalone) and then the UI developed to consume it. I worked at a place that did this - different teams developed different layers, and it worked well as they had specialists for each tier who could crank out changes really quickly because they didn't have to worry about the other tiers - i.e. a UI JavaScript dev could add a new section to the site by simply consuming a new web service someone else had developed.

Other tips

I think there is no right/wrong answer here. Have you asked your colleagues about the purpose of this architecture?

From how I understand your description, the "WebAPI" tier in your architecture serves as a kind of self-made middleware. Now you can research what advantages there are in using middleware. Basically, your web app would never need to be adapted if you changed your back-office system (as long as the WebAPI interface stays the same).

To go further: imagine you have a customer database (backend service) and you have 5 web apps communicating with that database. If you replace the customer database system with a new one (say from another vendor, where you can't influence the web service interfaces), you would most likely need to change the communication layer of all 5 web applications. If you communicate via your WebAPI, you'd just have to change the communication layer of the WebAPI.
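That argument can be sketched as an adapter seam (the names below are invented for illustration, not taken from the poster's system): the web apps depend only on the interface, so a vendor swap is confined to one adapter rather than five communication layers:

```csharp
// Hypothetical sketch of the middleware argument: callers depend only
// on ICustomerDirectory, so replacing the vendor behind it means
// writing one new adapter, not changing every consuming web app.
using System;

public interface ICustomerDirectory
{
    string LookupName(int customerId);
}

// Adapter over the current vendor's customer database web service
// (the real version would call out over HTTP/WCF; elided here).
public class VendorACustomerDirectory : ICustomerDirectory
{
    public string LookupName(int customerId) => $"A-{customerId}";
}

// Adapter written once when the system is replaced; callers untouched.
public class VendorBCustomerDirectory : ICustomerDirectory
{
    public string LookupName(int customerId) => $"B-{customerId}";
}

public static class Program
{
    public static void Main()
    {
        // Swapping vendors is a one-line change at the composition root.
        ICustomerDirectory directory = new VendorBCustomerDirectory();
        Console.WriteLine(directory.LookupName(7));   // prints "B-7"
    }
}
```

Note that this isolation benefit comes from the interface boundary itself, not from the HTTP hop; a referenced class library behind the same interface provides it just as well.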

Basically, layered architecture is nowadays considered both a pattern and an anti-pattern (see: Lasagna Code). If you have just 1 system, with no plans to expand it further in the next few years, I would rather consider this an anti-pattern. But that would be an unrealistically tough judgement, since I don't know the exact circumstances/situation :)

Licensed under: CC-BY-SA with attribution