13 November 2013

CQRS Revisited

So, I have a project coming up that could really benefit from Messaging, CQRS, and Event Sourcing. In my first attempt at some of these things, I was going into a brownfield scenario that forced me to make a lot of unfavorable trade-offs. What follows is an attempt to work out the pieces for this new project in a greenfield scenario.

Things I'm settled on:

Command service

An MVC action receives the posted JSON and deserializes it into the appropriate .NET command. I carefully chose MVC action / JSON for its wide applicability. Almost any platform can send an HTTP POST.

I am considering hosting this as a Web API project in a Windows service to mitigate IIS configuration / maintenance. All the domain aggregate logic will live here. In the future, this could be partitioned by installing it on additional servers and using a hashing function on the client, or by some other partitioning scheme.
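To make that concrete, here is a minimal sketch of what the command endpoint might look like, assuming a Web API controller, a JSON body, and the command type name carried on the URL. CommandController, the route shape, and the command-type lookup are illustrative names, not the real thing; the LocalBus it hands commands to is described just below.

```csharp
// Hypothetical command endpoint sketch: the command's type name travels in
// the URL, the JSON body is deserialized into that type and handed to LocalBus.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using Newtonsoft.Json;

public class CommandController : ApiController
{
    private readonly LocalBus _bus;                            // described below
    private readonly IDictionary<string, Type> _commandTypes;  // command name -> .NET type

    public CommandController(LocalBus bus, IDictionary<string, Type> commandTypes)
    {
        _bus = bus;
        _commandTypes = commandTypes;
    }

    // e.g. POST /commands/{commandName} with a JSON body
    [HttpPost]
    public HttpResponseMessage Post(string commandName)
    {
        Type commandType;
        if (!_commandTypes.TryGetValue(commandName, out commandType))
            return Request.CreateResponse(HttpStatusCode.BadRequest, "Unknown command: " + commandName);

        var json = Request.Content.ReadAsStringAsync().Result;
        var command = JsonConvert.DeserializeObject(json, commandType);

        try
        {
            _bus.Send(command);                                 // exactly one handler
            return Request.CreateResponse(HttpStatusCode.OK);   // success returns nothing
        }
        catch (Exception ex)
        {
            // A domain error surfaces here and is returned to the client.
            return Request.CreateResponse(HttpStatusCode.BadRequest, ex.Message);
        }
    }
}
```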

So, once the command arrives to the command service, it needs to be passed to the appropriate handler. I have a convention-based message deliverer that I have written for this purpose and to make handler maintenance less of a chore. I call it LocalBus. Essentially, a handler only has to implement an interface (IMessageHandler). When instantiated, LocalBus looks for any class that implements this interface. Then it looks for any void Handles(? message) methods on the class, creates cached delegates, and maps them to message types. Then all you have to do is call LocalBus.Send(message) for single-handler messages or LocalBus.Publish(message) for multi-handler messages. It uses a thread/queue per handler to prevent concurrency problems and preserve ordering.
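To show the convention, here is a compressed sketch of how a LocalBus like this could be put together. IMessageHandler, Handles, Send, and Publish are from the description above; the reflection-and-Activator details are only illustrative (the real implementation would compile typed delegates and probably resolve handlers from a container).

```csharp
// Sketch of a convention-based in-memory dispatcher: scan for IMessageHandler
// implementations, map each "void Handles(TMessage m)" method to its message
// type, and give each handler its own queue/worker for ordering.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;

public interface IMessageHandler { }

public class LocalBus
{
    // message type -> dispatchers that enqueue work on the owning handler's queue
    private readonly Dictionary<Type, List<Action<object>>> _routes =
        new Dictionary<Type, List<Action<object>>>();

    public LocalBus(Assembly handlerAssembly)
    {
        var handlerTypes = handlerAssembly.GetTypes()
            .Where(t => typeof(IMessageHandler).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);

        foreach (var handlerType in handlerTypes)
        {
            var handler = Activator.CreateInstance(handlerType);

            // One queue + long-running worker per handler: preserves message
            // ordering and avoids concurrent access to handler state.
            var queue = new BlockingCollection<Action>();
            Task.Factory.StartNew(() =>
            {
                foreach (var work in queue.GetConsumingEnumerable()) work();
            }, TaskCreationOptions.LongRunning);

            var handlesMethods = handlerType.GetMethods()
                .Where(m => m.Name == "Handles"
                            && m.ReturnType == typeof(void)
                            && m.GetParameters().Length == 1);

            foreach (var method in handlesMethods)
            {
                var messageType = method.GetParameters()[0].ParameterType;
                var capturedMethod = method;
                Action<object> dispatch =
                    msg => queue.Add(() => capturedMethod.Invoke(handler, new[] { msg }));

                List<Action<object>> list;
                if (!_routes.TryGetValue(messageType, out list))
                    _routes[messageType] = list = new List<Action<object>>();
                list.Add(dispatch);
            }
        }
    }

    // Commands: exactly one handler, or throw.
    public void Send(object message)
    {
        List<Action<object>> handlers;
        if (!_routes.TryGetValue(message.GetType(), out handlers) || handlers.Count != 1)
            throw new InvalidOperationException(
                "Expected exactly 1 handler for " + message.GetType().Name);
        handlers[0](message);
    }

    // Events: zero or more handlers.
    public void Publish(object message)
    {
        List<Action<object>> handlers;
        if (_routes.TryGetValue(message.GetType(), out handlers))
            foreach (var dispatch in handlers) dispatch(message);
    }
}
```

Usage stays as described: bus.Send(command) on the command side, bus.Publish(event) in the listeners.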

For the command side, I will only be using LocalBus.Send obviously. This will throw if there is not exactly 1 handler for a message.

The command handler will load the aggregate and call the appropriate method with the appropriate parameters for the command. At this point the domain can throw an error, which will get caught by the command service and be returned to the client. No events are saved in this case. If there is no error while running the aggregate method, then the handler will save the events, and the command service returns nothing.
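A sketch of a command handler under these rules, with made-up command and aggregate names (DeactivateInventoryItem, InventoryItem) and a simplified repository interface rather than the exact CommonDomain one:

```csharp
// Illustrative command handler: load the aggregate, invoke the domain method,
// and save events only if nothing threw.
using System;

public class DeactivateInventoryItem
{
    public Guid Id { get; set; }
    public int ExpectedVersion { get; set; }
    public string Reason { get; set; }
}

public interface IRepository
{
    T GetById<T>(Guid id) where T : class;
    void Save(object aggregate, int expectedVersion); // persists uncommitted events
}

// Stub aggregate for illustration; the real one would derive from the aggregate base.
public class InventoryItem
{
    public void Deactivate(string reason)
    {
        if (string.IsNullOrWhiteSpace(reason))
            throw new InvalidOperationException("A reason is required to deactivate.");
        // ... raise an ItemDeactivated event via the aggregate base ...
    }
}

public class InventoryItemCommandHandler : IMessageHandler
{
    private readonly IRepository _repository;

    public InventoryItemCommandHandler(IRepository repository)
    {
        _repository = repository;
    }

    public void Handles(DeactivateInventoryItem command)
    {
        var item = _repository.GetById<InventoryItem>(command.Id);

        // The domain may throw here; the exception propagates to the command
        // service, which returns the error to the client, and no events are saved.
        item.Deactivate(command.Reason);

        // No error: persist the events the aggregate produced.
        _repository.Save(item, command.ExpectedVersion);
    }
}
```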

As for the domain infrastructure, I am starting with CommonDomain. I'm using the interfaces almost as-is, but there are several things I am doing differently with base classes. One optimization that I started using with my last project was to have the aggregate's state as a separate class and have all the Apply methods on that state object. So I went ahead and built that into the AggregateBase. This also makes snapshot generation automatic (just return the state object). I've made even more changes to SagaBase, but more on that later.
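Here is a rough cut of the "state as a separate class" idea, assuming a simplified base rather than the actual CommonDomain types: the aggregate raises events, every Apply lives on the state object, and the snapshot is simply that object.

```csharp
// Simplified illustration only, not the CommonDomain classes.
using System;
using System.Collections.Generic;
using System.Reflection;

public abstract class AggregateState
{
    public void Mutate(object @event)
    {
        // Convention: an Apply(SpecificEvent e) method on the concrete state class.
        var apply = GetType().GetMethod("Apply",
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
            null, new[] { @event.GetType() }, null);
        if (apply != null) apply.Invoke(this, new[] { @event });
    }
}

public abstract class AggregateBase<TState> where TState : AggregateState, new()
{
    private readonly List<object> _uncommittedEvents = new List<object>();
    private TState _state = new TState();

    public Guid Id { get; protected set; }
    public int Version { get; private set; }
    protected TState State { get { return _state; } }

    protected void RaiseEvent(object @event)
    {
        _state.Mutate(@event);          // state changes happen only in Apply methods
        _uncommittedEvents.Add(@event); // recorded for the repository to save
        Version++;
    }

    public void LoadFromHistory(IEnumerable<object> events)
    {
        foreach (var e in events) { _state.Mutate(e); Version++; }
    }

    // Snapshots fall out for free: the snapshot *is* the state object.
    public TState GetSnapshot() { return _state; }

    public void RestoreFromSnapshot(TState snapshot, int version)
    {
        _state = snapshot;
        Version = version;
    }

    public IEnumerable<object> GetUncommittedEvents() { return _uncommittedEvents; }
    public void ClearUncommittedEvents() { _uncommittedEvents.Clear(); }
}
```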

Event Sourcing

This project can derive a lot of benefit from event sourcing. I am looking at GetEventStore.com for the event store. It has a built-in subscription queue for event listeners (denormalizers, process managers, and external integrators). Its performance also looks to be quite good. The client is interested in a fail-over configuration, which it supports.

I plan on creating a template Windows Service for listeners that I can just reuse for different purposes. It will be responsible for remembering its queue position by writing it to a file. So, it can be shut down and restarted to pick up where it left off. This also allows the queue position to be reset to the beginning (handy for denormalizers) by changing the file.
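As a sketch of just the position-tracking piece (the names here are illustrative and this isn't tied to any particular GetEventStore API):

```csharp
// Illustrative checkpoint for the listener template: the file holds only the
// last processed stream position, so restarts resume where they left off and
// deleting (or editing) the file resets a denormalizer to the beginning.
using System.IO;

public class CheckpointFile
{
    private readonly string _path;

    public CheckpointFile(string path) { _path = path; }

    // Position to resume from; 0 means "start of stream".
    public long Read()
    {
        return File.Exists(_path) ? long.Parse(File.ReadAllText(_path)) : 0L;
    }

    // Called only after an event has been fully handled (and its read-model
    // write confirmed), so nothing is skipped on restart.
    public void Write(long position)
    {
        // Write to a temp file and swap it in, so a crash mid-write
        // can't leave a corrupt checkpoint behind.
        var temp = _path + ".tmp";
        File.WriteAllText(temp, position.ToString());
        if (File.Exists(_path)) File.Delete(_path);
        File.Move(temp, _path);
    }
}
```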

Denormalizers

I'm thinking of having 2 denormalizers initially, each in its own Windows service. One is for operational data, that is, data used by views to produce commands; the other is for reporting data. That way, I can make different deployment choices for these functions. For example, I can give the operational denormalizer higher priority since the timeliness of its data updates is more important. It gives me choices, anyway.

I will also use the LocalBus here to take the messages received from the event store and locally publish them to all interested handlers inside the denormalizer's app domain. They would then perform whatever steps are needed to update their read models. I will probably have to set up the writes to notify when the data is actually written to disk in order to update the stream position.
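An illustrative handler inside such a denormalizer might look like this; the event, document, and writer types (OrderPlaced, OrderSummary, IDocumentWriter) are hypothetical names, not real APIs.

```csharp
// Hypothetical denormalizer handler: LocalBus.Publish delivers the event to
// every IMessageHandler with a matching Handles method; the handler updates
// its read model document.
using System;

public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

public class OrderSummary
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; }
}

public interface IDocumentWriter
{
    T Get<T>(Guid id) where T : class;
    // Blocks until the document is durably written, so the listener can
    // safely advance its stream position afterward.
    void SaveAndWaitForPersistence(object document);
}

public class OrderSummaryDenormalizer : IMessageHandler
{
    private readonly IDocumentWriter _writer; // wraps the CouchBase writes

    public OrderSummaryDenormalizer(IDocumentWriter writer) { _writer = writer; }

    public void Handles(OrderPlaced e)
    {
        var doc = _writer.Get<OrderSummary>(e.OrderId) ?? new OrderSummary { Id = e.OrderId };
        doc.Total = e.Total;
        doc.Status = "Placed";
        _writer.SaveAndWaitForPersistence(doc);
    }
}
```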

Process Managers

In the case where there is logic that needs to be executed in response to events, I will have another listener service (which subscribes to the event store). In that service, LocalBus will deliver events to PM handlers, which will load the appropriate process manager and run its business process. The common example is a shipping workflow: the PM looks for OrderReceived and PaymentApplied before it sends the ShipOrder command for a given OrderId. For the most part, I'm modeling these process managers as aggregates in case there needs to be more logic than a simple state machine. I have some things I'm still working out here, which I will go over later.
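A minimal sketch of that shipping PM: the event and command names come from the example above, while ShippingProcess and the sendCommand delegate are illustrative; in the real design this would sit on the same aggregate-style base as the domain aggregates.

```csharp
// Waits for OrderReceived and PaymentApplied before issuing ShipOrder, once.
using System;

public class OrderReceived  { public Guid OrderId { get; set; } }
public class PaymentApplied { public Guid OrderId { get; set; } }
public class ShipOrder      { public Guid OrderId { get; set; } }

public class ShippingProcess
{
    private readonly Guid _orderId;
    private readonly Action<object> _sendCommand; // e.g. posts to the command service

    private bool _orderReceived;
    private bool _paymentApplied;
    private bool _shipOrderSent;

    public ShippingProcess(Guid orderId, Action<object> sendCommand)
    {
        _orderId = orderId;
        _sendCommand = sendCommand;
    }

    public void Handle(OrderReceived e)  { _orderReceived = true;  MaybeShip(); }
    public void Handle(PaymentApplied e) { _paymentApplied = true; MaybeShip(); }

    private void MaybeShip()
    {
        // Ship only when both facts have been seen, and only once.
        if (_orderReceived && _paymentApplied && !_shipOrderSent)
        {
            _shipOrderSent = true;
            _sendCommand(new ShipOrder { OrderId = _orderId });
        }
    }
}
```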

External Integrators

For this project, I anticipate there being some external integrators who want to listen to the event stream and construct their own data models. There will have to be a new stream projection created and appropriate security set up for them. Then I can just give them a URL and the listener template that I have already been using for denormalizers and process managers.

UI

So far we're looking at WPF for internal and MVC or WebForms for external (depending on developer familiarity). I would probably do HTML5 if possible. The inputs and outputs of the UI are pretty simple. It takes in read model data and user interaction and converts that into commands to send to the command service. (Not that this translation is easy.)

Read Models / Read Layer

I'm looking at storing the operational models in CouchBase. I really like CouchBase for the way it caches things in memory, making reads and writes fast. Internal programs are likely to directly read from CouchBase for speed. However, I will eventually be setting up a Read Layer for other types of clients. This read layer will also likely be an MVC action. In order to generalize (not have to maintain) this read layer, I am considering having the action take the database and view as part of the URL, so getting the correct data is simply a matter of constructing the right URL. Security would still need to be maintained on the read layer.
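Here's roughly what that generalized read action could look like. The route, controller, and store abstraction are all illustrative; the actual CouchBase query would sit behind the store interface.

```csharp
// Sketch of a generalized read action: the database and view arrive on the URL,
// so new read models need no new read-layer code.
using System.Net;
using System.Net.Http;
using System.Text;
using System.Web.Http;

public interface IReadModelStore
{
    // Raw JSON for the requested view, or null if it doesn't exist.
    string QueryView(string database, string view, string key);
}

public class ReadController : ApiController
{
    private readonly IReadModelStore _store;

    public ReadController(IReadModelStore store) { _store = store; }

    // e.g. GET /read/{database}/{view}?key=...
    [HttpGet]
    public HttpResponseMessage Get(string database, string view, string key = null)
    {
        // Authorization checks belong here, before touching the store.
        var json = _store.QueryView(database, view, key);
        if (json == null)
            return Request.CreateResponse(HttpStatusCode.NotFound);

        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StringContent(json, Encoding.UTF8, "application/json");
        return response;
    }
}
```

A client would then ask for something like /read/operational/OrderSummary?key=... and get the view's JSON back; only the URL changes per read model.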

Eventually, I anticipate needing a SQL database for ad-hoc and BI purposes.

Now for the stuff I'm unsure about:

Process Manager Issues

My initial thought on saving state of a PM is to just save the events that the process manager has seen. However, assuming I saved this in the event store, I would have to save to a separate event store instance, since they are copies of events that were already published by an aggregate stream. Or else, I would need to set up a partition to separate aggregate events from the PM events and have all listeners only listen for the aggregate partition. OR I could save state in a completely different way (different database). None of these options seem very appealing to me.

Then there is the issue of timeout logic (e.g. payment for the order is not received after 24 hrs, cancel the order). My initial thought is that I will have the PM handler listen for a timeout message and call the appropriate method on the PM. This part is no problem, since LocalBus can deliver an arbitrary message inside the AppDomain. One (solvable) problem I haven't yet worked out is how to position the timer in the logic. And there is the issue of storing the Timer's state with the same implications as PM storage. This seems like a good case for storing current state, since there are only 2 possible states of the message (delivered or not), and time stamps can be recorded for both so nothing is lost. So then do I introduce another kind of database (downvote from admins), store to file (downvote from developers), etc.?

I could hook into an external timer. But this is another configuration point (admin downvote). And I would have to host a comm stack of some kind on the PM service in order to receive a callback from the timer service, and secure it from other types of access. And then there's learning the ins and outs of the particular timer framework. Seems like overkill when the timer part would otherwise be pretty simple.

It appears that I am headed toward including some sort of embedded database with the PM service for PM and Timer state storage.

That's all I can think of for now.

23 April 2013

X3:TC X-TREME Trader

I am finding the X-treme Trading achievement to be the worst one so far. It's basically holding me up from starting on the Dead is Dead playthroughs. What follows are my spoilers for getting through that achievement.

Firstly, I tried stations + CAGs and some CLS routes. These are very profitable, and this got me up to Tycoon at a slow but reasonable clip. Then I hit Tycoon, and everything slowed down drastically. As in 5-6 hrs of game time (about a half hour total in SETA) to get 1%. This is far too slow. The big problem with CAGs is that increasing profitability means expanding your stations, which is a pain to me, and it causes you to drop out of SETA a lot. CLSs also require a bit of fiddling to get just right (for me). So UTs ended up coming to the rescue. Setup is a bit quicker and easier. Every once in a while, they turn stupid and stop working, but not too often.

The other key to this is to build, build, build... for the Yaki. Repair your rep with them to the point where you can take missions. Then go to Senator's Badlands or Weaver's Tempest and look for build missions (green plus icon). (In my game, there were no stations in Ocracoke's Storm by the time I went there.) Start doing a SETA cycle -- see previous post about using a timer and SETA to avoid rank loss -- with the local map open and keep an eye out for more build missions. If you have bad rep with any of the races, make sure you can buy the factory they are requesting. :) Do this continually as you train up UTs (more on UTs below). Over time, you will end up building lots of stations in Yaki space... and you are the only one who can trade with them!

For training UTs (one at a time), I typically start the ship as a Local Trader in Ore Belt. This has been a pretty consistent training ground for me. It seems to get the trader up to UT-capable in 5 or so cycles. After the trader hits level 8 or higher, I'll send them to one of two places. If Yaki space is not very built up with stations, I send them to Power Circle. They will gradually level up and eventually won't return to Power Circle much. However, after Yaki space is populated with stations, I send the Level 8s to Empire's Edge. From there they are in range of the Yaki sectors, and they train up to max quickly. With about 30 UTs, it's taking about 2 SETA cycles to get 1%, down from 5 or 6 cycles with just the CAGs (for food and secondaries) and CLSs.

As for which ships to use for UTs, I have mainly been using Split Caiman SF XLs (10k cargo, 89 m/s). Mistrals are bigger, but are also a lot slower. In case I need to retask them to do something else, I prefer the caimans. I buy them 10 at a time, and send them through the outfitting gauntlet.
  1. Purchase Large versions from Zyarth's Dominion
    I have the hub connected to the neighboring system, so it's convenient to buy from there.
    I purchase the L version with shields already equipped if possible.
    Caution must be exercised because there are frequent Xenon attacks in Zyarth's Dominion.
  2. Send to Terracorp HQ in Home of Light for Jump drive (and all other upgrades)
  3. One at a time, set all ships to Autojump: Yes, Minimum jumps: 0, Refuel amount: 50 jumps
  4. Send them to the nearest SPP or one of my factories to get fueled
  5. Send them to an equipment dock, and get all upgrades (Split if you want ALL the upgrades, including turbo, carrier command, spacefly collector, but not needed)
  6. Send them to OTAS HQ in Legend's Home for Docking Computer and Triplex
  7. Park them in a protected system until they are needed
Last time, I bought 80 of them and did this all in a row. Since 80 is too many to put in a station, I parked them in space in my home system. Then I would send them 5 at a time to Ore Belt so I always had some in place to start the Local Trader command when the trainee graduated to UT.

This takes a while, and it would be nice to find faster ways to train UTs. But it works, and the UTs + Yaki station building has drastically upped my trade rank earnings. I don't claim this is the best way to do it, but it's working for me.

09 April 2013

Searching for the perfect small business server

One of the big challenges for small businesses, especially service-based organizations, is a server infrastructure that is both resilient to failures and inexpensive. Small businesses can't typically afford to shell out the cash for a SAN and high-availability servers. Yet they still need their servers to operate with a high level of reliability.

This post will attempt to describe one solution that I have been designing, and why each choice was made.

Base Computer: Mac Mini (quad-core)
System Drive: External storage in Hardware RAID-1
Backup Drives: Internal Hard Drive, 2x USB Drives (swapped each day, one taken home)
Virtual Machine Software: ???
Virtual Server: Windows Server

So what does this complicated setup gain me?

Mac server + System Drive on external storage
In my testing months ago, it was possible to install OS X onto an external drive and boot from it. Then, I could actually power down the computer, take the external drive and plug it into another (different!) Mac and boot that from the external drive. The original system came up on the different hardware like nothing happened.

This is a great hardware failure recovery story. Say the power supply burns out in your server... Just grab any other (Intel-based) Mac, plug in the external storage, hold Option while booting to pick the external drive, and the server is back up! No sophisticated expertise required.

This is just not possible in Windows or Linux.

The RAID inclusion is to address the fact that hard drives fail pretty often compared to other parts of the system. With a mirror, you can lose a drive without taking the server down. Otherwise, this common failure would result in a "restore from backup" situation. Hardware RAID-1, 2-bay enclosures are not all that expensive (~$200) compared to the cost of unexpected downtime during critical business hours.

Backup drives
Backing up using the built-in Time Machine software. The simple reason for using the internal hard drive as a backup is because it's already there anyway. The 2 USB drive setup allows you to swap out the backup each day so that you can take a backup offsite after hours. Technically, both backups are actually onsite most of the time: one plugged in, and one in your car or on your desk so you can remember to swap it. So if it really bothers you that your backups are onsite most of the time, then you can even go to 3 USB backup drives. You can never have too many, really.

Virtual Server
Let's face it; OS X Server has had some really mixed reviews. On top of that, OS X server might not support the apps that you typically run (e.g. ASP.NET). So why choose? Use Virtual Machine software to run the server you need on top of OS X.

To make this setup work, all data files should actually be shared from the host OS (Mac OS X) so they are backed up by the host OS, and are not internal to the VM. Then the VM simply uses the shared folder from the host OS for server functions (file sharing, web serving, database backups). So the data is automatically backed up by time machine, and not a backup within a VM backup situation.

What about the VM itself? Since the data is hosted external to the VM, the VM ends up just being valuable for its server configuration. Since the VM file will change very often while running, it should be excluded from backup. (Copying a multi-gigabyte file every hour will eat up your Time Machine backup space quickly.) When there is a configuration change, the VM should be copied (probably offline) to a folder that is backed up, so the server configuration is backed up.

The complete restore process ends up being: Restore Time Machine backup to new Mac. Copy VM from backup location to correct location so it can be run. Done.

Another advantage to having the server run in a VM is that you have remote administration capabilities (through the host OS) that ordinarily cost a lot of money to implement with real servers. The main things I'm thinking of there are remote power on/off, booting in a recovery mode, inserting CDs (by connecting ISO images as drives), etc. These things are extremely convenient to be able to do remotely.

VM Software
So the main reason this is still a work in progress is that I have not yet picked out VM software to make this work. My primary low/no-cost candidates are VMWare Fusion and VirtualBox, both of which can be run headless and scripted to start on computer startup. But there are still wrinkles to iron out. For instance, I'm not sure if folders that come from the host OS can be reshared as Windows file shares. I'll probably have to adjust the design based on experimentation.

Conclusion
This is a work in progress. I still have things to figure out. However, I believe this kind of setup would create a really compelling story for small businesses that require low-effort, remote-capable maintenance and a server that is relatively inexpensive for its level of resilience (including data backups).

25 March 2013

X3:TC The lazy way to X-TREME

So, being the OCD person that I am about games, I decided to go for all X3:TC Steam achievements. A couple of the hardest achievements to get are the two X-TREME achievements, one for trading and the other for combat. After spending hours clearing out Xenon and Kha'ak sectors (talk about boring!), I figured there had to be a better way. So I'm going to share my super lazy way to raise rank. (If you also want to work on X-TREME trading at the same time, see bottom note.)

Things you will need:
  • Out of game
    • A timer of some sort, set to 5.5 minutes. I used my cell phone. Use an alarm sound that is not annoying, because you will be hearing it a lot.
    • Something to do for hours. A good TV series, some movies, a blog to write, some homework to do, etc.
  • In-game
    • Osaka
      • Max shields (6x 2GJ)
      • All SSCs, except a few PSPs (I used 4 in front.)
      • 2000 or so Wraith missiles (optional)
    • A Xenon sector with a shipyard. In my game, it was Xenon Core 023.
The process:
  1. Park yourself just outside of SSC range from the docking exit on the station. (It looks like a little shoe, sticking off the spherical station.) I parked sideways to the station.
  2. Set turrets that are able to directly attack the station to Attack Fighters so they don't attack it. Set all others to Attack Enemies.
  3. Save and start your timer.
  4. Turn on SETA.
  5. AFK or whatever you want to do.
  6. When alarm goes off, turn off SETA, and start over at Step 3.
Here's what happens:

The shipyard will continually spawn fighters (L's, M's, and N's). Your Osaka/SSCs will chew them up and raise your rank. SETA will speed up the process dramatically. However, after 60 minutes (or 6 minutes on 10x SETA), your combat rank will start to decay. So you must do something in-game every 6 minutes (the timer is set to 5:30 to give you time to respond). This will raise rank very quickly at first, but at Hero rank, it takes over a cycle (6 minutes) to get 1% rank increase. It might not be the quickest way, but it was the most acceptable way for me.

Sometimes, a fighter will get stuck trying to undock (autopilot lol), and he will keep hitting the station until he dies. If you are close enough to be able to shoot at him while he's stuck in the dock, you will just hit the station over and over until you blow it up. Being as the station is key to this lazy method, avoid this!

Sometimes, a Q patrol will spawn in sector or jump in through the gate and head straight for you. The Osaka with 4 PSPs fitted should be able to take care of that even if you are AFK without getting damaged into hull. If you are watching, all the better; fire a couple of volleys of Wraiths at it to help.

Sometimes, the shipyard will seem to stop spawning fighters. Then if you look at the landed ships in the shipyard, there may be 70+ ships waiting in station. You can either wait it out, and the shipyard will eventually spew out a ton of fighters at once. Or you can jump out of the system, which will cause most of the ships to launch, then jump back in. With possibly multiple M2s, M1s, and at least half-a-dozen Qs being typically backed up in station, it makes for an interesting fight. Then just reposition yourself back near the station and continue as normal.

Trading

There's no reason not to work on X-TREME trading at the same time. To do this, make sure you set up some factories with CAGs or CLS2 routes beforehand. The key to making progress toward the trading achievement is to make sure that your SHIPS are making the profit. Your station selling illicit goods to visiting patrons isn't going to do anything toward the achievement, no matter how profitable. Your ships have to buy below average and sell above average to gain trade rank. The more difference off average, the more progress. So using CLS2/CAG to both buy below and sell above is double-dipping. :) If you neglected to set up some stations beforehand, you can remotely buy, outfit, and set up some CLS2 freighters while sitting outside the Xenon shipyard.

15 March 2013

X3:TC Best Player Ship

So I admit that I like to min-max things a bit when I play games. This tends to involve some research to find the "best" thing for a particular purpose. In my quest to discover the best player ship, the first ship I came across is the Springblossom. It is an amazing ship. What's not to love about 360m/s top speed?

However, after completing a play-through with the Springblossom and doing a little more research, I have discovered a better ship: The overtuned Hyperion, only available from the Poisoned Paranid start (which is only available after completing the Tormented Teladi start mission (which is only available after achieving a certain trade rank with another start)). You can also get the Hyperion through boarding with any type of start, but it's the slower variety with only 169m/s max speed. The overtuned variant can have 230+m/s max speed, which is Good Enough(tm) speed for a player ship, especially with bonus pack turbo boost.

While it gives up speed compared to the Aldrin SB, the Hyperion gains a lot of utility for the player. Here are some pain points that I came across while using the SB and how the Hyperion solves them.

Station Building
When building stations with mines, I always had to jump into a separate ship to tow mines. This is an extra hassle that the Hype removes, because unlike the SB, it can mount a tractor beam. Goodbye tow-truck Dragon.

Exploration
I typically keep 1 docked Kestrel outfitted for Exploration and Asteroid Scanning. In TC, there's really no faster ship for exploration.

Claimed Ships
In lieu of having a second spare jump drive, I typically leave one of my docking bays empty in case I get a bailed fighter or a Claim My Ship mission for a fighter. Fighters often either can't use a jump drive or don't have enough cargo to jump to the destination, so transporting them directly in my ship is faster.

For non-fighter ships which are claimed, I have a spare Jump Drive on my exploration Kestrel.

Mission Running
Cargo space is also a benefit not to be under-estimated for mission running... especially those missions that want you to transport things like Radioactive Waste or Entertainment Chips.

Long Range Death
The Hype can use Wraith missiles, and an assortment of other missile types to fit the situation. The Springy can only use Poltergeist and Spectre, the latter of which can't be player-produced. I suppose that is not much of a problem, since I have often not been able to kill ships with Spectres, being as they fire one at a time and some ships can shoot them all down. Poltergeists are also pretty slow vs M5s. My particularly disappointing experience was when doing the Balance of Power plot and defending a transport. I tried with the Springy many times, and could never destroy the fighters before they reached and blew up the transport. The Poltergeists would mostly loop around behind their target and trail it for a while before catching up. With no other missile options in the SB, I ended up having to jump in my Cobra and Flail everything to death.

Combat
Just from a personal preference standpoint, the Springy has never felt quite right to me. It behaves kind of like a forklift with a jet engine. The vast majority of my deaths have been collisions in the Springy, both undocking / scraping stations and trying to strafe larger ships. This is why I am okay with giving up some of the SB's speed for the other benefits.

Although the Springy mounts PMAMLs, a pretty powerful weapon, I found them a bit difficult to use. Against smaller ships, the projectile is too slow to land hits consistently while the opponent is not heading in a straight line. Against larger ships, by the time I got into firing range, the SB was taking a pretty good pounding. Approaching at the correct angle and doing strafe runs minimizes the damage, but the ship is so fast that the window of opportunity to fire before having to break off to avoid a collision is pretty small. Slowing down is an option, but I often use Tab and Backspace to control speed. :) The main things I was able to kill with the PMAMLs were other M6s, by strafing from longer distances. I could kill some M2s, as long as their missile defense wasn't that great and I had enough Spectres. The cargo space is a limiting factor there.

The PSSCs on the Springblossom are categorically awesome. Fighters pretty much melt, and I often find that I don't get a chance to fire on them directly because the turrets kill them first. So I will definitely miss that about the Springy.

Another obvious battle advantage for the Hype is the two docked fighters.

Summing Up

So, for future play-throughs, I'm going to do Poisoned Paranid to get that overtuned Hyperion. It seems to me that Egosoft must have designed the Hype to be the player ship. No other ship has its combination of capabilities.

Now if they just had a better version of the Cobra with more cargo space...

07 March 2013

X3:TC Lessons Learned

Wish I'd known about the game sooner
The game is deep and immensely satisfying, and I wish I'd known about it sooner! At first, I was afraid I had made a mistake in buying it because it seemed hopelessly complicated. But I watched some of CmrDave's tutorial videos, then started to discover parts of the game on my own, and eventually found it staggeringly fun. I can also play it like a simulation game and choose to leave it running, letting my empire take care of itself while I sleep or go on a date with the wife.

Download the Bonus Pack
The bonus pack is signed by the X3 developer, Egosoft, so it does not mark your game as modified. It adds some amazing functionality, and I wouldn't play X3 without it. (I tried at first, and I regretted it!) I'll just briefly mention the CLS/CAGs in the Bonus Pack below:

CLS1 = the Trade -> "Start internal commodity logistics" command, from the Commodity Logistics Mk1 software
CLS2 = the Trade -> "Start external commodity logistics" command, from the Commodity Logistics Mk2 software
CAG = the Trade -> "Start commercial representation" command

Like everything in X3, these take some experimentation to set up just right, but their functionality is game-changing. I use CLS2 extensively for resupply and one-off deliveries. I use CAGs for most of my money-making and station/complex maintenance.

Setup stations early
I thought UTs were great when I first started using them, but little did I know that stations/complexes with CAGs are even better! Complexes in high-sec space are solid and safe money-makers, and you can totally set and forget them (with a CAG or CLS freighter). They make great storage dumps for energy or other resources they consume. You can add stations onto a complex later, and deactivate individual stations. Thus you can repurpose your station over time. Complexes are valuable in many dimensions (e.g. docking, resupply from excess resources), not just for profit. I don't use STs/UTs at all now.

Start training marines early
Even if you don't do much ship capturing, you need marines for some plots and special ships. Marine training takes ages, so start early. Every time you find a station with marines, look for any that have 3+ stars in fighting, buy them, and start them training. Everything but fighting is trainable, so the only important factor for purchasing them is fighting. Keep training them until they are 5 stars in everything else. Eventually, you will need them. I would recommend collecting up to 20 5-star marines over time. This happens to be the amount an M7M holds.

Put complex hubs near gates
This makes it quick for CAGs and resupply ships to dock at the complex. My first complex was built 150km+ from the nearest gate to avoid pirate traffic, and I had to assign it a lot of extra CAGs because each ship was spending more time flying to it than making me money. And then the game spawns pirates to attack your station anyway if you use afk SETA.

Notice that I said "near" gates, not on top of gates. And notice that I said the complex hub, which you can place a little bit away from the stations. With the latest (huge) stations that I build, I have the hub within 20-25km of the gate, but the bulk of the stations are out of the field of normal view. I do this by placing 10 or 20 of the stations pretty far apart and in a line going up out of view. Then the rest of the stations are layered above that. This is a more advanced technique that takes some practice, because the complex hubs don't always get placed where you tell them. So make sure you save before you start!

06 January 2013

In Memory Message Bus Issues

So, I've written a small in-memory message bus.

Today I realize that under certain circumstances, it will still drop messages and create a race condition. For example, a normal workflow might be:


  1. User (via UI) sends command based on read model data
  2. Command is processed, generates events
  3. Events are saved to an event store
  4. Events published to listeners (read models, integrations, etc.)
If a power failure occurs between 3 and 4, the read model doesn't get updated even though the event store records the event. After a restart, this creates a situation where the read model doesn't show the last change, but it's in the event store. The user, seeing the read model data, will likely try to make the change again, but either a) nothing will happen because no changes get made to the aggregate, or b) a concurrency exception will get thrown because the command was issued against an old version of the aggregate.

The easiest way to work around this would be to rebuild the read models on a dirty startup, but rebuilding could take a while. Or I suppose I could write some sort of Sync function that compares the read model version with the event store version and replays the difference. But that could get complicated, and creates a dependency between the read model and event store.

I could solve the problem by putting 3 and 4 in a transaction, but that is not at all ideal, performance-wise. The other alternative that I've been trying to avoid is for the listeners to keep track of the last message they have seen and be able to request catch-up messages.

The latter introduces storage dependencies, since the handler has to remember the last message it saw across restarts. And each handler becomes a bit more complicated as saving the last seen message pointer will have to be done transactionally with handling the event. At that point, the handler might as well listen straight from the event stream rather than try to use the in-memory message bus.

Update: I've ultimately realized that this is a concern for another part of the program, and not the message delivery service itself. The part responsible for feeding events into the in-memory message bus will have to manage its position in the event stream. It can actually just save an event back to the event store (in a different stream) when it updates its stream position. Then on load, it can load its last position from the event store itself.
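In sketch form, that idea looks something like the following; the IEventStore wrapper, the stream name, and the PositionCheckpointed event are illustrative stand-ins, not the actual GetEventStore client API.

```csharp
// The feeder checkpoints its position by appending a small event to its own
// stream, and reads the latest one back on startup.
using System;

public class PositionCheckpointed
{
    public long Position { get; set; }
}

public interface IEventStore
{
    void Append(string stream, object @event);
    object ReadLastEvent(string stream); // null if the stream is empty
}

public class EventFeeder
{
    private const string CheckpointStream = "bus-feeder-position"; // illustrative name
    private readonly IEventStore _store;

    public EventFeeder(IEventStore store) { _store = store; }

    // On startup: resume from the last checkpointed position, or the beginning.
    public long LoadLastPosition()
    {
        var last = _store.ReadLastEvent(CheckpointStream) as PositionCheckpointed;
        return last == null ? 0L : last.Position;
    }

    // After publishing a batch to the in-memory bus: record where we are.
    public void SavePosition(long position)
    {
        _store.Append(CheckpointStream, new PositionCheckpointed { Position = position });
    }
}
```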