I have typically been against asynchronous commands for the simple fact that they leave the user hanging. However, when considering responsiveness under load, there is something to be said for making commands asynchronous.
My current line of thinking is to send the command asynchronously with the expectation that the result of that command (success or failure) will eventually be sent back to the client. That way, at least the client knows if there is a problem. This is what I'd call asynchronously synchronous.
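To make that concrete, here is a minimal sketch of the flow in TypeScript. The message shapes and the sendToServer transport are assumptions, not any particular framework: the client fires the command immediately and holds onto a callback keyed by command id until the result message eventually comes back.

```typescript
type CommandResult =
  | { kind: "succeeded"; commandId: string }
  | { kind: "failed"; commandId: string; reason: string };

// Assumed to exist: whatever transport carries messages to the server.
declare function sendToServer(message: unknown): void;

const pendingResults = new Map<string, (result: CommandResult) => void>();

// Fire the command immediately; the returned promise settles only when the
// server eventually pushes the result back to this client.
function sendCommand(name: string, payload: unknown): Promise<CommandResult> {
  const commandId = crypto.randomUUID();
  return new Promise((resolve) => {
    pendingResults.set(commandId, resolve);
    sendToServer({ commandId, name, payload });
  });
}

// Wire this up to the incoming message stream for command results.
function onCommandResult(result: CommandResult): void {
  const resolve = pendingResults.get(result.commandId);
  if (resolve) {
    pendingResults.delete(result.commandId);
    resolve(result);
  }
}
```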
This brings up a couple of interesting issues. One is what to do with the UI while the command is in flight. Do I show a spinner and make the user wait? The response should come back pretty quickly, so I'm leaning toward this initially. Maybe instead I should just keep track of running commands and notify the user if one fails. But how do they recover from the failure in that case? There are interesting opportunities for design here, and some of the right answer depends on the user's workflow.
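As a rough sketch of the non-blocking option: keep a registry of in-flight commands and only interrupt the user when one of them fails. Here notifyUser is a hypothetical UI hook, and sendCommand is the helper from the previous sketch.

```typescript
type CommandResult =
  | { kind: "succeeded"; commandId: string }
  | { kind: "failed"; commandId: string; reason: string };

// Assumed to exist: the helper from the previous sketch and a UI notification hook.
declare function sendCommand(name: string, payload: unknown): Promise<CommandResult>;
declare function notifyUser(message: string): void;

const runningCommands = new Set<string>();

function submitCommand(name: string, payload: unknown): void {
  const label = `${name} @ ${new Date().toISOString()}`;
  runningCommands.add(label);

  sendCommand(name, payload).then((result) => {
    runningCommands.delete(label);
    if (result.kind === "failed") {
      // Surface the failure with enough context for the user to recover.
      notifyUser(`${name} failed: ${result.reason}`);
    }
  });
}
```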
That leads to the other issue: getting a positive command result doesn't mean that the read models have been updated, since that also happens asynchronously. This introduces the idea of the client being able to subscribe to read model changes. Ultimately, the user only wants one of two things to happen when they submit a command: 1) ideally, the command succeeds and their view is updated (so they can verify it), or 2) the command fails and they are given enough information to resolve the problem. Therefore the only two things the client program will be interested in listening for are command failures and read model updates.
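A sketch of what those two subscriptions might look like on the client, assuming a hypothetical push channel (subscribe) and view-refresh hook (refreshView):

```typescript
// The only two server-pushed messages the client cares about.
type ServerMessage =
  | { kind: "commandFailed"; commandId: string; reason: string }
  | { kind: "readModelUpdated"; readModel: string; data: unknown };

// Assumed to exist: a push channel and two UI hooks.
declare function subscribe(handler: (message: ServerMessage) => void): void;
declare function notifyUser(message: string): void;
declare function refreshView(readModel: string, data: unknown): void;

subscribe((message) => {
  switch (message.kind) {
    case "commandFailed":
      // Give the user enough information to resolve the problem.
      notifyUser(`Command ${message.commandId} failed: ${message.reason}`);
      break;
    case "readModelUpdated":
      // Refresh the view so the user can verify the command took effect.
      refreshView(message.readModel, message.data);
      break;
  }
});
```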
That obviously doesn't cover some edge cases like network failure. After all, if the network fails while I am blocking on a call, I get notified about it; but if I simply never receive a message that I was expecting, then I need to account for that myself.
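One simple way to handle that case is to race the expected result against a timeout, so a missing response surfaces as a failure the user can act on. A rough sketch, with illustrative names and numbers:

```typescript
// Race the expected result against a timer so a missing response surfaces
// as an error instead of leaving the client waiting forever.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`No response within ${ms} ms`)), ms)
  );
  return Promise.race([work, timeout]);
}

// Illustrative usage, reusing sendCommand and notifyUser from the earlier sketches:
// withTimeout(sendCommand("RenameItem", { id, newName }), 10_000)
//   .catch((err) => notifyUser(err.message));
```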