Today I realized that, under certain circumstances, it will still drop messages and create a race condition. For example, a normal workflow might be:
1. User (via the UI) sends a command based on read model data
2. The command is processed, generating events
3. The events are saved to the event store
4. The events are published to listeners (read models, integrations, etc.)
If a power failure occurs between steps 3 and 4, the read model doesn't get updated even though the event store has recorded the events. After a restart, the read model no longer shows the last change, but that change is in the event store. The user, seeing stale read model data, will likely try to make the change again, and either (a) nothing will happen because no changes are made to the aggregate, or (b) a concurrency exception will be thrown because the command was issued against an old version of the aggregate.
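A minimal sketch of outcome (b), using an illustrative in-memory store with an optimistic concurrency check (all names here are hypothetical, not from any real library):

```python
class ConcurrencyError(Exception):
    pass

class EventStore:
    """Minimal in-memory event store with an optimistic concurrency check."""
    def __init__(self):
        self.streams = {}  # stream id -> list of events

    def append(self, stream_id, events, expected_version):
        stream = self.streams.setdefault(stream_id, [])
        if len(stream) != expected_version:
            raise ConcurrencyError(
                f"expected version {expected_version}, stream is at {len(stream)}")
        stream.extend(events)

store = EventStore()
store.append("order-1", ["ItemAdded"], expected_version=0)  # step 3 succeeds
# ...power failure before step 4: the event is stored but never published,
# so the read model (and the user's screen) still shows version 0.
# The user retries the same command against the stale read model:
try:
    store.append("order-1", ["ItemAdded"], expected_version=0)
except ConcurrencyError as exc:
    print(exc)  # outcome (b): a concurrency exception
```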
The easiest way to work around this would be to rebuild the read models on a dirty startup, but rebuilding could take a while. Or I suppose I could write some sort of Sync function that compares the read model version with the event store version and replays the difference. But that could get complicated, and it creates a dependency between the read model and the event store.
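The Sync-function idea could be sketched like this, assuming the read model tracks the version of the last event it applied (all names hypothetical):

```python
class ReadModel:
    """Illustrative read model that remembers the version of the last event it applied."""
    def __init__(self):
        self.version = 0
        self.state = []

    def apply(self, event):
        self.state.append(event)
        self.version += 1

def sync_read_model(read_model, stream):
    """Replay only the events the read model missed; returns how many were replayed."""
    missed = stream[read_model.version:]
    for event in missed:
        read_model.apply(event)
    return len(missed)

model = ReadModel()
stream = ["OrderPlaced", "ItemAdded", "ItemAdded"]
sync_read_model(model, stream)  # first call replays all 3 events
sync_read_model(model, stream)  # already caught up, replays 0
```

The catch is exactly the dependency mentioned above: `sync_read_model` has to reach into the event store's stream directly, rather than going through the message bus.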
I could solve the problem by putting steps 3 and 4 in a transaction, but that is not at all ideal, performance-wise. The other alternative that I've been trying to avoid is for the listeners to keep track of the last message they have seen and be able to request catch-up messages.
The latter introduces storage dependencies, since each handler has to remember the last message it saw across restarts. And each handler becomes a bit more complicated, since saving the last-seen message pointer has to be done transactionally with handling the event. At that point, the handler might as well read straight from the event stream rather than try to use the in-memory message bus.
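To make the extra per-handler bookkeeping concrete, here is a sketch of that transactional pointer update using SQLite (the table names and event shape are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE position (handler TEXT PRIMARY KEY, last_seen INTEGER)")
conn.execute("CREATE TABLE order_count (id TEXT PRIMARY KEY, n INTEGER)")

def handle(handler, seq, order_id):
    """Apply the projection and advance the last-seen pointer in one transaction,
    so a crash can never separate 'handled' from 'remembered'."""
    with conn:  # a single SQLite transaction
        row = conn.execute(
            "SELECT last_seen FROM position WHERE handler = ?", (handler,)).fetchone()
        if row is not None and seq <= row[0]:
            return  # already processed; redelivery becomes a no-op
        conn.execute(
            "INSERT INTO order_count VALUES (?, 1) "
            "ON CONFLICT(id) DO UPDATE SET n = n + 1", (order_id,))
        conn.execute(
            "INSERT INTO position VALUES (?, ?) "
            "ON CONFLICT(handler) DO UPDATE SET last_seen = excluded.last_seen",
            (handler, seq))

handle("order-counter", 1, "order-1")
handle("order-counter", 1, "order-1")  # duplicate delivery, ignored
```

Because the pointer and the projection commit together, redelivered messages are detected and skipped, which is what makes catch-up requests safe.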
Update: I've ultimately realized that this is a concern for another part of the program, and not the message delivery service itself. The part responsible for feeding events into the in-memory message bus will have to manage its position in the event stream. It can actually just save an event back to the event store (in a different stream) when it updates its stream position. Then on load, it can read its last position from the event store itself.
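That checkpoint-in-the-store idea might look something like this, with a plain dict standing in for the event store and all names hypothetical:

```python
class Feeder:
    """Illustrative: feeds stored events into an in-memory bus, checkpointing
    its position as an event in a separate stream of the same event store."""
    POSITION_STREAM = "bus-feeder-position"

    def __init__(self, store, publish):
        self.store = store
        self.publish = publish
        # on load, recover the last checkpoint from the event store itself
        checkpoints = store.get(self.POSITION_STREAM, [])
        self.position = checkpoints[-1]["position"] if checkpoints else 0

    def pump(self, source_stream):
        for event in self.store.get(source_stream, [])[self.position:]:
            self.publish(event)
            self.position += 1
            self.store.setdefault(self.POSITION_STREAM, []).append(
                {"position": self.position})

store = {"orders": ["OrderPlaced", "ItemAdded"]}  # stand-in for the event store
seen = []
Feeder(store, seen.append).pump("orders")  # publishes both events
Feeder(store, seen.append).pump("orders")  # a "restarted" feeder resumes at 2
```

The second `Feeder` reads its position back from the checkpoint stream, so nothing is republished; the delivery service itself stays dumb, exactly as the update above argues.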