Question

I have been working on a project built around a microservices architecture. We are considering using a message broker, such as RabbitMQ, for both synchronous (via RPC) and asynchronous communication between services.

The team consists of Java developers, and all of the services are Spring Boot applications.

Suppose the following scenario:

  1. ServiceA needs to know the address of the RabbitMQ server and its credentials. ServiceA posts a message to an exchange in the message broker. The payload is ClassA serialized as JSON.
  2. ServiceB also needs to know the address and credentials of the same RabbitMQ server. ServiceB is subscribed to the same exchange as ServiceA, as a consumer. ServiceB also needs to have ClassA within its project so that it can deserialize the payload sent by ServiceA.
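The flow we have in mind, sketched below with an in-memory stand-in for the broker (real services would use Spring AMQP's RabbitTemplate and @RabbitListener, and Jackson instead of the hand-rolled JSON; the fields on ClassA are made up for illustration):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Shared contract: both services must agree on this exact shape.
class ClassA {
    final String orderId;
    final int quantity;
    ClassA(String orderId, int quantity) { this.orderId = orderId; this.quantity = quantity; }

    // Hand-rolled JSON only to keep the sketch dependency-free;
    // real services would delegate this to Jackson via Spring's message converters.
    String toJson() {
        return "{\"orderId\":\"" + orderId + "\",\"quantity\":" + quantity + "}";
    }
    static ClassA fromJson(String json) {
        String id = json.replaceAll(".*\"orderId\":\"([^\"]*)\".*", "$1");
        int qty = Integer.parseInt(json.replaceAll(".*\"quantity\":(\\d+).*", "$1"));
        return new ClassA(id, qty);
    }
}

public class Sketch {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the exchange in the broker.
        BlockingQueue<String> exchange = new LinkedBlockingQueue<>();

        // ServiceA: serializes ClassA and publishes the JSON payload.
        exchange.put(new ClassA("A-42", 3).toJson());

        // ServiceB: consumes the payload and must also know ClassA to deserialize it.
        ClassA received = ClassA.fromJson(exchange.take());
        System.out.println(received.orderId + " x" + received.quantity);
    }
}
```

The duplication is the point of the sketch: ClassA (and the broker coordinates) must exist identically on both sides.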

We don't want to couple the services or duplicate data/functionality, so that we can reuse as much as possible.

With the current approach we are duplicating the classes between services so that each can serialize/deserialize the payload. We are also duplicating the information necessary to connect to the RabbitMQ server and to publish/consume messages.

What is the standard approach to this? I feel this is a common problem in microservice projects, but I have not found a pragmatic answer yet.

  1. Should we extract these serialization/deserialization classes into a common library? Is there a standard name for such a library in microservices architecture?
  2. Should we also create a library to share the RabbitMQ server connection info and credentials between the services?
  3. Where should the pub/sub logic live? Should each microservice import the RabbitMQ client dependency directly, or should we create a library that "teaches" the microservices how to publish/consume certain messages?

Solution

The reason people tend to recommend against shared code between microservices is that it creates organizational interdependencies between services and teams. It is OK to use external libraries, of course. It is also OK to use libraries created inside your organization, as long as there is no pressure to always switch to the current/latest version of such libraries, and as long as any team is permitted to fork at any time and make changes without having to coordinate with other teams/services.

It can also make sense to consolidate commonly used libraries if and when the teams using them agree to do so. What must be avoided is changes made by or for one team being forced upon others, effectively recreating the organizational nightmare of the monolithic monster project that microservices are an attempt to escape in the first place 😅

OTHER TIPS

Code shared between projects should indeed be kept in a library, just like all the other shared code you use. You don't want to reimplement persistence layers or TLS connection handling for each project, so why would you want to redundantly implement the shared data model and functionality of your own business?
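A common convention is to publish the message classes as a "contracts" (sometimes "shared kernel" or "schema") artifact that each service depends on at a version it chooses, which also preserves the no-forced-upgrades rule above. A hypothetical Maven fragment for one service (group and artifact names are made up):

```xml
<!-- Each service pins its own version of the shared contracts artifact,
     so upgrades never have to be coordinated across teams. -->
<dependency>
    <groupId>com.example.platform</groupId>
    <artifactId>messaging-contracts</artifactId>
    <version>1.3.0</version>
</dependency>
```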

Configuration data, however, does not belong in libraries. It may be reasonable to share it between microservices, but you should use the mechanisms available in your deployment system. For example, in a Docker environment you can mount directories containing endpoint configuration and credentials into each container that needs them; in Kubernetes you would use Secrets and ConfigMaps.
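For instance, in Kubernetes the broker endpoint could live in one ConfigMap, with credentials in a Secret, injected as environment variables that Spring Boot's relaxed binding maps onto `spring.rabbitmq.*` automatically (a sketch; all resource names are illustrative):

```yaml
# Shared broker endpoint, maintained once per environment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-endpoint
data:
  SPRING_RABBITMQ_HOST: rabbitmq.infra.svc.cluster.local
  SPRING_RABBITMQ_PORT: "5672"
---
# Excerpt from a service's Deployment (spec.template.spec.containers[0]):
# the container gets the endpoint from the ConfigMap and the
# username/password from a separately managed Secret.
envFrom:
  - configMapRef:
      name: rabbitmq-endpoint
  - secretRef:
      name: rabbitmq-credentials
```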

Licensed under: CC-BY-SA with attribution