Question

According to Fowler, a repository "mediates between the domain and data mapping layers, acting like an in-memory domain object collection." So, for example, in my Courier Service application, when a new run is submitted, my application service creates a new Run aggregate root object, populates it with values from the request, then adds it to the RunRepository before calling the Unit of Work to save the changes to the database. When a user wants to view the list of current runs, I query the same repository and return a denormalized DTO representing the information.

However, when looking at CQRS, the query would not hit the same repository. Instead, it would perhaps go directly against the data store and always return denormalized data. My command side would evolve into a NewRunCommand and handler that would create and populate a NewRun domain object, then persist the information to the data store.

So the first question is where do repositories fit into the CQRS model if we aren't maintaining an in-memory collection (cache, if you will) of domain objects?

Consider the case where the information submitted to my application service contains nothing but a series of ID values that the service must resolve in order to build the domain object. For example, the request contains the ID # of the courier assigned to the run. The service must look up the actual Courier object based on the ID value and assign the object to the NewRun using the AssignCourier method (which validates the courier and performs other business logic).

The other question is, given the separation of queries and the potential absence of repositories, how does the application service perform the lookup to find the Courier domain object?

UPDATE

Based on some additional reading and thought after Dennis' comment, I'll rephrase my questions.

It appears to me that CQRS encourages repositories that are merely facades over the data access and data storage mechanisms. They give the "appearance" of a collection (like Fowler describes) but are not managing the entities in-memory (as Dennis pointed out). This means that every operation on the repository is pass-through, yes?
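To make the pass-through idea concrete, here is a minimal sketch (the class and member names are my own, not from any particular framework, and a dictionary stands in for the real persistence mechanism) of a repository that keeps Fowler's collection-like interface while delegating every call straight to the backing store:

```csharp
using System;
using System.Collections.Generic;

public class Run
{
    public Guid Id { get; } = Guid.NewGuid();
}

// A "facade" repository: no identity map, no in-memory entity management.
// Every operation passes straight through to the backing store.
public class PassThroughRunRepository
{
    private readonly IDictionary<Guid, Run> _store;

    public PassThroughRunRepository(IDictionary<Guid, Run> store) => _store = store;

    public Run Get(Guid id) => _store[id];            // read passes straight through
    public void Add(Run run) => _store[run.Id] = run; // write passes straight through
}
```

The repository itself holds no entity state; a Unit of Work, if one is used at all, would sit underneath it and batch the writes.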

How does a Unit of Work fit into this approach? Typically a UoW is used to commit changes made to a repository (right?) but if the repository isn't maintaining the entities in-memory, then what role does a UoW have?

With regards to a 'write' operation, would the command handler have a reference to the same repository, a different repository or perhaps a UoW instead of a repository?

Solution

I've read about CQRS systems that maintain a simple key-value store on the command side to represent an application's state, and others that merely correlate messages (using some sort of saga) and utilise the query store to represent an application's state instead. Either way there'll no doubt be a persistence technology involved with these approaches, but the repository pattern in these cases would be an unnecessary abstraction over the top of it.

My experience with CQRS has only ever been with event sourcing though, where we've replayed past events to rebuild aggregates that encapsulate and enforce business logic and invariants. In this case the repository pattern is a familiar abstraction that can provide a simpler way of retrieving any of these aggregates.

With regards to the query side, I'd recommend getting as close to the data store as possible: by this I mean avoiding any repositories, services, facades, etc. between your UI (whatever that may be) and your data store.

It might help to see an example of these approaches in use, for instance in the NES project.

In the case of NES the repository merely provides a familiar interface for adding and reading aggregates directly to and from the unit of work.

OTHER TIPS

I'm not sure how orthodox this is, but in a current project I have a repository for my aggregate root entity. This repository has only two methods: Get and ApplyEvents.

All events implement a common interface for their type; for orders there's OrderEvents, etc. I personally put the business logic of each event into a polymorphic method, so that adding new types of events becomes very easy.

For Get, the repository goes to the event store and gets all events in scope for the type (for example, a single store location's orders). It then replays those events to arrive at the current state of the entity. It can also work from a snapshot, so you're not replaying every event each time you load. You can also have a general Events repository to abstract away how you store events and retrieve them based upon specifications.

ApplyEvents takes a list of events, changes the state of the entity based upon them, and returns it. Note that you're giving the repository the option to recreate the entity, not just alter it! This works well with a functional style of programming, but means it's best to avoid object equality (obj1 == obj2) in C# or Java. I'd argue only ValueObjects, and not Entities, should ever have equality anyway.
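A minimal sketch of this two-method repository, assuming an in-memory list as the event store (all type names here are illustrative, not from the project being described). Each event carries its own polymorphic Apply, so the repository never needs to know about concrete event types, and ApplyEvents returns a new entity rather than mutating the old one:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public IReadOnlyList<int> Items { get; }
    public Order(IReadOnlyList<int> items) => Items = items;
    public static Order Empty { get; } = new Order(Array.Empty<int>());
}

public interface IOrderEvent
{
    Order Apply(Order current); // the event's business logic lives here
}

public class OrderItemAdded : IOrderEvent
{
    private readonly int _itemId;
    public OrderItemAdded(int itemId) => _itemId = itemId;
    public Order Apply(Order current) =>
        new Order(current.Items.Append(_itemId).ToList());
}

public class OrderRepository
{
    private readonly List<IOrderEvent> _eventStore = new List<IOrderEvent>();

    // Get: fold every stored event over an initial state. A snapshot could be
    // used as the seed instead of Order.Empty to avoid a full replay.
    public Order Get() =>
        _eventStore.Aggregate(Order.Empty, (state, e) => e.Apply(state));

    // ApplyEvents: persist the new events and return the recreated entity.
    public Order ApplyEvents(Order current, params IOrderEvent[] events)
    {
        _eventStore.AddRange(events);
        return events.Aggregate(current, (state, e) => e.Apply(state));
    }
}
```

Applying an OrderItemAdded event to an empty order yields a fresh Order with one item, and a later Get replays the stored event to reach the same state.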

Here's how it works in practice (C#): I've got Orders, and I want to add an item. currentOrder.Items currently returns an empty list. Then I do:

Assert.IsFalse(currentOrder.Items.Any());
IOrderEvent newEvent = eventFactory.CreateOrderItemEvent(myItemID);
currentOrder = orderRepository.ApplyEvents(currentOrder, newEvent);
Assert.IsTrue(currentOrder.Items.Any());

I should now see currentOrder.Items have one entry.

The downside here is that all my processing is done through the events, rather than having my business logic in the Entity. However, in my case, where almost all my objects need to be serializable (basically POCOs) and work on multiple systems, this actually works out well.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow