Question

I have several uncommon questions related to the design of REST services hosted on a server communicating with local or remote applications.

As an example, let's say that I have two machines ("A" and "B") located in two different countries.

Machine "A" hosts a server that exposes a REST endpoint (an IP address with an associated port) for interacting with REST resources.

From the client side, I have an application that interacts with the REST resources. I was considering running the application on machine "B", which makes sense since REST is a distributed architecture with remote communication.

However, I wonder if it makes sense to host the application locally on machine "A". The main advantage I see is reduced network latency: if the application has latency constraints, hosting it locally could be the right choice.

However, I'm not sure whether that's true (regarding this link: https://stackoverflow.com/questions/4002545/restful-communication-between-local-applications-is-a-good-idea), and I would like some feedback about this.

Also, I'm not sure what to do technically if running applications locally does make sense. For example, I could imagine having two endpoints for the same resource: one for local applications, with the resource served on the localhost interface, and one for remote applications, with the resource served on the Ethernet interface (with an IP address and port). But again, I don't know whether that makes sense.

To summarize my questions:

  • Will local applications running on the same machine that hosts the server have faster REST communication?
  • For faster "local" REST communication, does a server need to expose two endpoints (one for localhost, one for remote clients)?

Thanks!


Solution

You asked:

Will local applications running on the same machine that hosts the server have faster REST communication?

Absolutely. Your data never has to go out onto a physical wire: requests to localhost stay on the loopback interface, so round trips avoid the physical network entirely.
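To make the difference concrete, here is a rough sketch (in Python, using only the standard library) that times a TCP round trip over loopback. Loopback round trips are typically well under a millisecond, whereas a cross-country link usually adds tens to hundreds of milliseconds of RTT on top of that:

```python
import socket
import time

# Set up a trivial echo-style server on the loopback interface.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()

# Time one request/response round trip over loopback.
start = time.perf_counter()
cli.sendall(b"ping")
conn.recv(4)           # "server" side reads the request...
conn.sendall(b"pong")  # ...and replies
cli.recv(4)
rtt_ms = (time.perf_counter() - start) * 1000
print(f"loopback round trip: {rtt_ms:.3f} ms")

cli.close(); conn.close(); srv.close()
```

This measures raw socket latency, not full HTTP request handling, but the same gap carries over: whatever time your REST stack spends parsing and serialising, the network portion is near zero on loopback and dominated by geography on a remote link.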

For faster "local" REST communication, does a server need to expose two endpoints (one for localhost, one for remote clients)?

No. Your server will expose on a particular address, and your client can either talk to localhost (127.0.0.1) or the actual server IP address. In the latter case your client can work on a remote host without reconfiguration, but fundamentally your server won't care whether your client is local or remote.
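A minimal sketch of that single-endpoint approach, in Python with only the standard library (the `/ping` resource and port handling are hypothetical, just for illustration): the server binds to `0.0.0.0`, which makes it reachable on both the loopback interface and the machine's external IP, and each client simply picks the base URL that suits where it runs.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """Minimal REST-ish resource: GET on any path returns 'pong'."""
    def do_GET(self):
        body = b"pong"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# One endpoint, bound to all interfaces: reachable via 127.0.0.1
# *and* via the machine's external IP. No second endpoint needed.
server = HTTPServer(("0.0.0.0", 0), PingHandler)  # port 0: OS picks a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client on machine "A" uses localhost; a client on machine "B"
# would use A's real IP address. The server code is identical either way.
local_url = f"http://127.0.0.1:{port}/ping"
with urllib.request.urlopen(local_url) as resp:
    print(resp.read().decode())  # -> pong
```

The base URL is the only thing that differs between the local and remote client, which is why it is usually kept in client-side configuration rather than baked into the server.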

(there are perhaps subtleties in the above relating to multiple network interfaces, network stack optimisations etc., but they don't affect the underlying principles)

If you can colocate your two services, that may make sense. As soon as you have two machines and a bit of wire in between, you're likely introducing complexity and a degree of unreliability. If you can colocate, and you accept that taking that one machine down will knock out both services, that's likely to be a more manageable solution for you. Note that colocating can have adverse effects if (say) your machine doesn't have sufficient CPU/memory resources to run both processes.

Note that distributing across a network leaves you open to the Fallacies of Distributed Computing. Some of these fallacies will still apply in your case, but colocating will reduce your exposure.

Licensed under: CC-BY-SA with attribution