Question

Imagine a scenario with two different microservices: one handles Authentication, the other takes care of User Management. They both have a concept of a User, and they talk about Users through calls to each other.

Where does the domain model of a "User" belong, though? Would they each have a different representation of what a User is at the database level? And what about a UserDTO used in API calls? Would they each have one for their respective APIs?

What is the generally accepted solution for this kind of architectural issue?

Solution

In a microservices architecture, each service is independent of the others and must hide the details of its internal implementation.

If you share the model, you couple the microservices and lose one of the greatest advantages: each team can develop its microservice without restrictions and without needing to know how the other microservices evolve. Remember that you can even use a different language in each one; that becomes difficult once you start to couple microservices.

If they are too closely related, maybe they are really one service, as @soru says.

OTHER TIPS

If two services are sufficiently intertwined that it would be a pain to implement them without sharing DTOs and other model objects, that's a strong sign you shouldn't have two services.

Certainly the example makes little sense as two services; it's hard to imagine a specification for 'User management' so complicated it would keep a whole team so busy they don't have time to do authentication.

If for some reason they were separate, then they would communicate using what are basically arbitrary strings, as in OAuth 2.0.
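As a rough sketch of that "arbitrary strings" idea (the class and method names below are invented for illustration and not taken from any OAuth library), the Authentication service could hand out opaque tokens, and the other service would treat them as nothing more than strings to be resolved back through the issuer:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the Authentication service issues opaque tokens.
class TokenIssuer {
    private final Map<String, UUID> tokens = new ConcurrentHashMap<>();

    // Returns an arbitrary string; callers must not parse or interpret it.
    String issueToken(UUID userId) {
        String token = UUID.randomUUID().toString();
        tokens.put(token, userId);
        return token;
    }

    // Other services hand the string back to resolve it (introspection-style).
    UUID resolve(String token) {
        return tokens.get(token);   // null if unknown
    }
}
```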

You can think of them as two separate Bounded Contexts (in Domain-Driven Design parlance). They should not share any data between them, aside from an ID used for correlating the Authentication context's "User" with the other context's "User". They can each have their own representation of what a "User" is, and their own domain model, which contains just the information needed to perform its business responsibility.

Remember that a domain model doesn't try to model a real world "thing," but what that thing is in a particular context (such as Identity/Authorization Management, or Human Resources, etc).
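A minimal sketch of this "two Users, one shared ID" idea, assuming Java services (the class and field names are illustrative, not prescribed by the answer above):

```java
import java.util.UUID;

// Authentication context: knows only what it needs to authenticate someone.
class AuthUser {
    final UUID userId;          // the only value shared with other contexts
    final String passwordHash;
    final boolean locked;

    AuthUser(UUID userId, String passwordHash, boolean locked) {
        this.userId = userId;
        this.passwordHash = passwordHash;
        this.locked = locked;
    }
}

// User Management context: its own notion of a User, with no password details.
class ManagedUser {
    final UUID userId;          // same identifier, different model
    final String displayName;
    final String email;

    ManagedUser(UUID userId, String displayName, String email) {
        this.userId = userId;
        this.displayName = displayName;
        this.email = email;
    }
}
```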

"They both have a concept of a User, and will talk about Users through calls to each other."

I also agree with what @soru said. If one service needs another service's data, then their boundaries are wrong.

A nice solution is what @pnschofield came up with -- treating your services as Bounded Contexts.

Put shortly: shared domain models kill service autonomy, turning your microservice system into a distributed monolith, which is arguably even worse than a plain monolith.

So a general question remains unsolved -- how to define service or context boundaries so that they end up highly cohesive and loosely coupled.

The solution I came up with is to treat each context as a business capability: a higher-level business responsibility or business function that contributes to the overall business goal. You can think of capabilities as the steps your organisation needs to walk through in order to deliver business value.

The typical sequence of steps I take when identifying service boundaries is the following:

  1. Identify the higher-level business capabilities. They are usually similar among organisations from the same domain. You can get a feeling for what this looks like by checking out Porter's value chain model.
  2. Within each capability, delve deeper and identify sub-capabilities.
  3. Note the communication between the capabilities. Look at what the organisation does: usually communication is concentrated within a capability, which then notifies the rest about the result of its work. So when implementing the technical architecture, your services should communicate via events as well (see the sketch after this list). This has multiple positive consequences: your services are autonomous and cohesive, and they don't need synchronous communication or distributed transactions.
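
A minimal sketch of that event-based style, assuming Java (the event and class names are hypothetical, and the message transport is left out entirely):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Event published by the capability that owns user registration.
record UserRegistered(UUID userId, String email) {}

// Another capability consumes the event and keeps its own local view,
// so it never needs a synchronous call back to the publisher.
class BillingAccountProjection {
    private final Map<UUID, String> billingEmails = new ConcurrentHashMap<>();

    void on(UserRegistered event) {
        billingEmails.put(event.userId(), event.email());
    }
}
```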

An example of this technique would probably be of interest to you. Don't hesitate to let me know what you think, since I've found this approach really profitable and I'm sure it can work out for you as well.

Microservices are not about "share nothing", but "share as little as possible". In most cases "User" really is a common entity, simply because a User is identified by some shared identifier (userId/email/phone). Such entities are shared by definition. The User model is out of the scope of any one microservice, so you must have some global schema where User (just its most common fields) is placed. In the strictest case that is the id only.
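A minimal sketch of such a deliberately small shared "User" contract, assuming Java (the type name is invented for illustration): only the fields every service agrees on, nothing context-specific.

```java
import java.util.UUID;

// Deliberately minimal shared contract: only the commonly agreed fields.
// In the strictest version this would carry the id alone.
record UserRef(UUID userId, String email) {}
```

Anything beyond this stays inside the owning service's own model.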

This seems to be a very good microservices guide: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/. It suggests that microservices follow domain-driven design (DDD) and the Bounded Context pattern (https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice), which means they should not share data between them. However, I could not find a recommendation on how you manage your contracts (DTOs or events).

One option is for every service to have its own copy of the contract classes. This might seem a good idea, since every service can evolve on its own, but in practice it is not. In most cases, communication between microservices physically boils down to exchanging JSON messages over REST or message queues. JSON is a case-sensitive format, which means that "userID" is in practice different from "userId" and "UserId", even though semantically they are one and the same. I have seen numerous issues just because of that, and keeping a large landscape in sync is really tough.
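A tiny illustration of that failure mode (no serialisation framework involved, just a key lookup standing in for JSON field access): if one copy of the contract writes the field as "userID" and the other expects "userId", the value is silently lost.

```java
import java.util.Map;

public class CaseMismatchDemo {
    public static void main(String[] args) {
        // Payload produced by a service whose copy of the DTO uses "userID".
        Map<String, String> payload = Map.of("userID", "42");

        // Consumer whose copy of the DTO expects "userId": JSON keys are
        // case sensitive, so the lookup finds nothing.
        String userId = payload.get("userId");
        System.out.println(userId);   // prints: null
    }
}
```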

What I really like is having a separate library (NuGet, Maven, etc.) that holds the data contracts so they can be easily reused. In a big landscape it might be split by domain. It can evolve and will have different versions: for non-breaking changes, microservices can continue working with old versions; for new API versions, new library versions are released and whoever wants to will migrate.
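A hedged sketch of what such a shared-contracts library could expose, assuming Java (the package and type names are invented for illustration), with the version carried in the package name so non-breaking evolution stays possible:

```java
// Published as its own artifact (e.g. a Maven module) that services depend on.
package com.example.contracts.user.v1;   // hypothetical package; bump to v2 only for breaking changes

import java.util.UUID;

// The contract evolves by adding optional fields (non-breaking) or by
// releasing a new versioned package (breaking), never by editing v1 in place.
public record UserDto(UUID userId, String displayName, String email) {}
```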

I have heard of an approach where you define contracts in a semantic language, which is then compiled into concrete classes for different programming languages. The idea is that the data/contract definition comes first.

Licensed under: CC-BY-SA with attribution