Firstly, the in-memory persistence implementation, whose primary purpose is testing, is not transaction-aware. In your original example, client 2 will simply append its event to the stream. Try running the above against a persistence store that supports transactions (SQL and Raven, but not Mongo).
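To see the conflict, something like the following can be run against a SQL-backed store (a sketch only: the connection name "EventStore" is a placeholder, and the wireup/serialization choices are assumptions):

```csharp
// Sketch: assumes NEventStore with a SQL persistence engine.
var store = Wireup.Init()
    .UsingSqlPersistence("EventStore") // placeholder connection name
    .WithDialect(new MsSqlDialect())
    .UsingJsonSerialization()
    .Build();

var streamId = Guid.NewGuid().ToString();

// Both clients open the stream at the same revision...
using (var client1 = store.OpenStream(streamId, 0, int.MaxValue))
using (var client2 = store.OpenStream(streamId, 0, int.MaxValue))
{
    client1.Add(new EventMessage { Body = "event from client 1" });
    client1.CommitChanges(Guid.NewGuid()); // succeeds

    client2.Add(new EventMessage { Body = "event from client 2" });
    client2.CommitChanges(Guid.NewGuid()); // throws ConcurrencyException
}
```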
Secondly, specifying the min/max revision when opening a stream is used for different purposes:
- When re-hydrating an aggregate, and no snapshots are available, you would specify (min:0, max:int.MaxValue), as you are interested in retrieving all of the events.
- When re-hydrating an aggregate and a snapshot is available, you would specify (min:snapshot.Version, max:int.MaxValue) to get all events that have occurred since the snapshot.
- When saving an aggregate, you would specify (min:0, max:Aggregate.Version). Aggregate.Version is derived during re-hydration. If the same aggregate is re-hydrated and saved somewhere else at the same time, you have a race condition and a ConcurrencyException will occur.
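In NEventStore terms, the three cases above look roughly like this (a sketch; `aggregate`, `ApplyEvent`, and `GetUncommittedEvents` stand in for your own domain code):

```csharp
// 1. Re-hydrate with no snapshot: read every event.
using (var stream = store.OpenStream(streamId, 0, int.MaxValue))
{
    foreach (var committed in stream.CommittedEvents)
        aggregate.ApplyEvent(committed.Body); // hypothetical apply method
}

// 2. Re-hydrate from a snapshot: read only the events since it.
var snapshot = store.Advanced.GetSnapshot(streamId, int.MaxValue);
using (var stream = store.OpenStream(snapshot, int.MaxValue))
{
    // stream.CommittedEvents now holds only post-snapshot events
}

// 3. Save: open at the revision the aggregate was loaded at, so a
//    concurrent writer triggers a ConcurrencyException on commit.
using (var stream = store.OpenStream(streamId, 0, aggregate.Version))
{
    foreach (var e in aggregate.GetUncommittedEvents()) // hypothetical
        stream.Add(new EventMessage { Body = e });
    stream.CommitChanges(Guid.NewGuid());
}
```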
Support for most of this would be encapsulated in a domain framework; see AggregateBase and EventStoreRepository in CommonDomain.
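With CommonDomain's repository, the load/save ceremony collapses to something like the following (a sketch; `MyAggregate` and `DoSomething` are placeholders for an aggregate deriving from AggregateBase):

```csharp
// Sketch: repository is CommonDomain's EventStoreRepository (IRepository).
var aggregate = repository.GetById<MyAggregate>(aggregateId);
aggregate.DoSomething(); // raises domain events via AggregateBase
repository.Save(aggregate, Guid.NewGuid(), headers => { });
```

The repository handles snapshot lookup, re-hydration, and the min/max revision bookkeeping described above, so a ConcurrencyException surfaces from Save when another writer got there first.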
Thirdly, and most importantly, updating more than one stream in a single transaction is a code smell. If you are doing DDD/ES, the stream represents a single aggregate root which, by definition, is a consistency boundary. Creating or updating more than one AR in a transaction breaks this. NEventStore's transaction support was (reluctantly) added so it could work with other tools, i.e. transactionally read a command from MSMQ/NServiceBus/whatever and handle it, or transactionally dispatch a commit message to a queue and mark it as such. Personally, I'd recommend that you do your best to avoid 2PC.