Question

When snapshots of aggregates become out of sync with the event log, I can simply replay my events from an earlier snapshot (which is supposed to be in sync). I can do the same when I add or remove fields, or when I modify the logic of existing handlers.
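To make the replay idea concrete, here is a minimal sketch of rehydrating an aggregate from a snapshot plus the events recorded after it. All names (`rehydrate`, `apply_event`, the event shapes) are illustrative assumptions, not from any particular framework:

```python
def apply_event(state, event):
    """Pure state transition: one branch per event type.

    Adding or changing a field just means extending these handlers
    and replaying from an earlier, still-valid snapshot.
    """
    if event["type"] == "Deposited":
        state["balance"] = state.get("balance", 0) + event["amount"]
    elif event["type"] == "Withdrawn":
        state["balance"] = state.get("balance", 0) - event["amount"]
    return state

def rehydrate(snapshot_state, snapshot_version, events):
    """Fold the events recorded after the snapshot onto the snapshot state."""
    state = dict(snapshot_state)  # copy so the stored snapshot stays untouched
    version = snapshot_version
    for event in events:
        state = apply_event(state, event)
        version += 1
    return state, version
```

For example, `rehydrate({"balance": 100}, 5, [{"type": "Deposited", "amount": 50}])` yields the state `{"balance": 150}` at version 6.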

In case I need to add a new read model (e.g. a new report view), I can do the same again: replay my events.

But how should I handle the situation when a read model becomes out of sync with the event log? Storing and publishing events happen in one transaction, but updating the read model occurs in another transaction, which can fail. Replaying events from the very beginning would help, but it could take an eternity. Do I need a concept of snapshots for the whole read model?

How do you solve this problem? Thank you.


Solution

What would be the reason for a failure in an event handler? And how long is "eternity", really?

Read model updates rarely fail (unlike command handlers), since the logic inside is extremely simple. Failures are likely to be caused by transient problems (IO/network outage) and would be handled automatically by the message bus.

However, if a read model becomes corrupted for some reason, the easiest fix is to reset it and stream the events through again. Even millions of events take a reasonably small amount of time. Plus, you can always use a Map-Reduce approach.
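The reset-and-replay step can be sketched in a few lines; the names (`rebuild_read_model`, `project`, the event shapes) are hypothetical, and a real rebuild would stream from the event store in commit order rather than from an in-memory list:

```python
def rebuild_read_model(event_stream, project, read_model):
    """Drop the corrupted read model and re-project every event from position 0."""
    read_model.clear()                # reset: throw away the stale/corrupted state
    for event in event_stream:       # stream events in commit order
        project(read_model, event)   # the same projection logic used in normal operation
    return read_model

# Illustrative projection for a report view counting orders and revenue.
def project_orders(model, event):
    if event["type"] == "OrderPlaced":
        model["order_count"] = model.get("order_count", 0) + 1
        model["revenue"] = model.get("revenue", 0) + event["total"]
```

Because the projection logic is the same code that runs in normal operation, a rebuild cannot drift from day-to-day behaviour; the only cost is the time to stream the log once.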

I would recommend against introducing snapshots to read models. I think this just complicates the architecture without any significant gains.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow