22 March 2021

Eleven to Twelve Twenty One

Success sold out.

Unwatchable anyway.

Failure tickets available.

One please.

Popcorn and soda.

A moment's pleasure.

Remain seated for the entire performance.

For Death hangs overhead.

Continuous playback enabled.

Enjoy the show.

23 June 2017

VS 2017 runaway usage

My computer started to feel sluggish yesterday. Little things like the mouse taking a second to respond to my movements (in any app). I finally got around to investigating, and discovered this.

[Screenshot: process monitor showing the Visual Studio Bootstrapper consuming 19% CPU]

On this computer, 19% CPU is ~1.5 cores of compute being used. Even after I closed Visual Studio the Bootstrapper continued to run and eat CPU/Memory.

Based on my other experiences in VS 2017 so far, especially the new inbuilt F# tools, I would say it still needs to bake for a few more updates. Maybe it will stabilize by the time VS 2019 comes out. :-p

But seriously, I do use it now for production work. It just has a few rough edges yet.

03 April 2017

What is the benefit of Functional Programming?

I've seen some posts lately trying to come to grips with or warn against functional programming (FP) patterns in F#. So, I thought I would take a step back and explain how to derive tangible benefits from FP, what those benefits are, and how functional patterns relate to that.

By functional patterns, I mean things like partial application, currying, immutability, union types (e.g. Option, Result), record types, etc.

The most important principle to follow to reap benefits from FP is to write pure functions. "Wait," I hear you say, "you can't make a useful app with only pure functions." And right you are. At some point, you have to perform side effects like asking for the current time or saving something to a database. Part of the learning process for FP (in F#) is getting a sense for where to do side effects. Then the continuing journey from there is how to move side effects as far out to the edges of your program as possible to maximize the number of pure functions on the inside. The goal being that every significant decision is made by pure functions.
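
To illustrate (a tiny hypothetical example of my own, not from any particular codebase):

// Impure: the decision depends on a hidden side effect (reading the clock).
let isExpiredImpure (expiration: System.DateTimeOffset) =
    expiration < System.DateTimeOffset.Now

// Pure: the caller supplies the time, so the decision is deterministic.
let isExpired (now: System.DateTimeOffset) (expiration: System.DateTimeOffset) =
    expiration < now

// At the edge of the program, perform the side effect and pass the result inward.
let checkExpirations (expirations: System.DateTimeOffset list) =
    let now = System.DateTimeOffset.Now   // the side effect happens out here
    expirations |> List.map (isExpired now)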

"What's so great about pure functions?" A commonly touted benefit is test-ability, which comes almost by definition. But in my experience it goes much further than that. Programs which are largely pure functions tend to require less time to maintain. To put it another way, pure functions allow me to fearlessly refactor them. Because of their deterministic nature combined with type safety, the compiler can do most of the hard work in making sure I haven't made a gross error. Pure functions certainly don't prevent logic errors (common one for me: incorrectly negating a boolean). But it does prevent a lot of the hard-to-track-down and tedious problems I've dealt with in the past.

"I hear you, but I am skeptical." Yes, and you should be. First, there's a trade-off. And second, I wouldn't expect you to take my word for it. Probably the largest trade-off is with immutable data structures. Instead of mutating data (e.g. updating a value in an input array, which introduces a side effect to the caller), you return a new copy of the data with different values. As you can imagine, copying data costs more (cpu/memory) than updating in place. Generally, I find there's not a significant enough difference to worry about it. But in certain cases performance can dominate other concerns. One benefit of F# is that you can easily switch to doing mutation if you need to.

As for not taking my word for the maintenance/refactoring benefits, I can tell you how I discovered this and suggest you try it. I have to admit, I did NOT discover these benefits in depth in F#. Since F# is multi-paradigm, including OO, it's easy to skip or only go part-way toward learning how to model something with pure functions. It's also easy to get confused about how to do it with so many avenues available. My path of discovery must be credited to Elm -- a relatively young compile-to-javascript language/platform and the inspiration for Redux. Elm is a functional language in which you can only use immutable types and pure functions (although you can call out to Javascript in a controlled way). At first this seems restrictive and a bit mind-bending.

Because Elm's restrictions were unfamiliar, the first code I wrote in Elm was not the greatest, even after coding in F# for several years. (To be fair most of my F# code is for back-end, including performing side-effects.) My Elm code worked, but it just didn't fit quite right or it seemed fiddly. But as I discovered different ways to model things and tried them, I found Elm was actually very amenable to refactors, even sweeping epic ones. As I learned new ways to model in an environment of pure functions, the constraints placed on my code by Elm actually allowed my code to easily grow with me. Large refactors still take a bit of work, but my experience is that it's no longer an impossible-to-plan-for amount of "risky" work. It's just normal work. And I'm not the only one with that experience.

Eventually, I came to understand that this was not because of some white magic that Evan Czaplicki wove into Elm, but because of the insightful design choice that all user code consists of pure functions. A pure function is a contract which a compiler can verify. The contract doesn't specify everything about the function, but it can help you to resolve the majority of the really boring problems at compile time.

So here is where functional patterns come into the picture. They are tactics which help ensure that the compiler can verify the contract on a pure function. I'll run through some examples, with a short F# sketch after the list.

  • On .NET at least, null has no type and so the compiler cannot verify the contract is fulfilled if you provide a null value. So there's F#'s Option (Maybe in Elm), which not only preserves type, but also lets you know you should handle the "no value" case. Now the contract can be verified again.
  • Throwing exceptions invalidates a pure function's contract, because exceptions are hidden return paths that are not declared. Enter Result***, which provides a way to return either a success value or an error value in a declarative way, and thus the contract can be verified again.
  • Mutation (modifying data in-place) cannot be observed or accounted-for in a function declaration. It's also the source of some really hard-to-find bugs. So enter records which are immutable by default. To "change" data, you have to make a copy of the record with different values. Now the contract can be verified again.
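
Here is that sketch, illustrating all three patterns (the function and type names are my own, hypothetical examples):

// Option: the "no value" case is explicit, and the compiler makes you handle it.
let describeUser (maybeName: string option) =
    match maybeName with
    | Some name -> sprintf "user %s" name
    | None -> "anonymous"

// Result: failure is a declared return path instead of a hidden exception.
let parseAge (input: string) : Result<int, string> =
    match System.Int32.TryParse input with
    | true, age when age >= 0 -> Ok age
    | _ -> Error (sprintf "not a valid age: '%s'" input)

// Records: immutable by default; "changing" one produces a copy.
type Person = { Name: string; Age: int }
let haveBirthday (person: Person) = { person with Age = person.Age + 1 }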

Since the basic primitive to provide a "contract" is a function, partial application and currying follow naturally as conveniences. Types like Option and Result are possible (in a non-awkward way) because of union types, aka algebraic data types. And you can make your own union types. They are great for modeling mutually exclusive cases with their own sets of data, like a PaymentType that is either CreditCard with card data OR Check with a check number. If you model this with a normal class, you'd likely have nullable properties for both cases. That gives the potential for one, both, or none to be present, which requires a little more work on your part to ensure the data is valid. With union types, the compiler can do some of this work for you.
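
In F#, that example might look like this (a sketch; the field names are mine):

// Each case carries only its own data, so "both" or "neither" cannot be represented.
type PaymentType =
    | CreditCard of cardNumber: string * expiration: string
    | Check of checkNumber: int

let describePayment payment =
    match payment with   // the compiler warns if a case is unhandled
    | CreditCard (cardNumber, expiration) -> sprintf "card %s (exp. %s)" cardNumber expiration
    | Check checkNumber -> sprintf "check #%d" checkNumber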


I hope you can see that functional patterns primarily exist to support writing pure functions. You can't write a useful program entirely out of pure functions, so the goal should be to maximize their usage in order to maximize refactorability. Learning how to do this is a process. Don't fret if you need to use an impure solution to get by until you discover another way.

Also realize that writing pure functions in F# requires a little discipline, because the language doesn't provide a way to enforce that. The basic guidelines I follow to produce pure functions are 1) do not mutate input values and 2) result is deterministic... that is: given the same inputs, the output is always the same. I also tend to establish an outer boundary of my programs where side effects (database ops, HTTP calls, current time, etc.) are allowed. From this boundary I can call side-effect functions and pure functions. But pure functions should not call side-effect functions. So far this has worked out really well in my F# code. But I'm sure I have more to learn. :)
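
As a minimal sketch of that boundary idea (the domain and the function names are hypothetical):

// Pure core: every significant decision, deterministic and mutation-free.
let calculateDiscount (now: System.DateTime) (orderTotal: decimal) =
    if orderTotal >= 100m && now.DayOfWeek = System.DayOfWeek.Friday
    then orderTotal * 0.1m
    else 0m

// Impure boundary: side effects live out here and call into the pure core.
let handleOrderRequest (loadOrderTotal: unit -> decimal) (saveDiscount: decimal -> unit) =
    let orderTotal = loadOrderTotal ()                 // e.g. a database read
    let discount = calculateDiscount System.DateTime.UtcNow orderTotal
    saveDiscount discount                              // e.g. a database write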

*** Use Result judiciously. Since exceptions are foundational to .NET, practically all common external libraries throw exceptions. You'll create a lot of work for yourself trying to convert every interaction with an external library into a Result. My rule of thumb is to just let my code throw when I have to execute multiple operations against an external library which will throw. Higher up in the program, I'll convert an exception from my code to a Result error case. I also use Result for business logic failures like validation errors.
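
A hypothetical sketch of that rule of thumb (the file name and parsing logic are illustrative only):

// Inner code calls throwing .NET APIs directly; no Result plumbing between them.
let loadPortSetting (path: string) =
    let text = System.IO.File.ReadAllText path    // may throw
    System.Int32.Parse (text.Trim())              // may throw

// Higher up, one boundary function converts the exception into a Result error case.
let tryLoadPortSetting path : Result<int, string> =
    try Ok (loadPortSetting path)
    with ex -> Error ex.Message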

09 March 2017

Shuffling without side effects

Randomization can be separated from shuffling to produce repeatable results.

A while back, I put some code on GitHub to do shuffling. It uses an unbiased shuffle algorithm and has the capability to shuffle with System.Random or RNGCryptoServiceProvider for strong randomization. I have been using it to shuffle exam questions and answers to generate CBT tests. However, randomization presents a significant problem for automated tests (and functional programming) because by definition it depends on unpredictable side effects. I didn't want to use the seeded randomization from System.Random, because it has very weak randomization. Eventually I realized I could use a cryptographically secure RNG, but still shuffle in a repeatable way.

The first thing to realize is that the purpose of the RNG for shuffling is to pick a value in the list. One easy way to represent a pick is as a percentage (a value between 0 and 1). Whether there are a thousand things in a list or two, a percentage can be mathematically converted into a position on the list. For example, a pick of 0.5 always lands in the middle of the list, whatever its length. For a shuffle, you need as many picks (percentages) as you have items in the list, minus 1. Depending on the algorithm used, either the first or last pick always goes into the same position.

Edit: This is similar to how Consistent Hashing works.

So then, it becomes pretty easy to shuffle using an array of random percentages which were generated ahead of time. Here's an "inside-out" shuffle algorithm which returns a shuffled array. The algorithm uses an imperative style and this code reflects that, but this shuffle function is still pure.

/// Shuffle an array using a fair shuffle algorithm with the provided ratios.
/// ratios is an array of values between 0.0 and 1.0, at least as long as inArray, minus 1.
/// inArray is the array to be shuffled.
/// Returns a shuffled copy of the input array.
let shuffle ( ratios : float array ) ( inArray : 'a array ) =
    let n = Array.length inArray
    if n = 0 then
        Array.empty
    else
        let copy = Array.zeroCreate<'a> n
        copy.[0] <- inArray.[0] // copy first element over
        for endIndex = 1 to n - 1 do
            let randIndex =
                ratios.[endIndex - 1]
                |> Ratio.scaleBetween 0 endIndex
            if randIndex <> endIndex then
                copy.[endIndex] <- copy.[randIndex] // move element at randIndex to end
            copy.[randIndex] <- inArray.[endIndex] // move next item to randIndex
        copy


Converting the percentage (aka ratio) to an index has a deceptive number of edge cases, but you can find my version of `Ratio.scaleBetween` here with inline explanation. The original shuffle algorithm looped from 0, but a percentage was wasted to get a number between 0 and 0 (duh). So I manually code the first iteration.
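
Here is a naive sketch of the idea (a simplification of mine; the linked version is the real one and treats the edge cases carefully):

module Ratio =
    /// Map a ratio in [0.0, 1.0] onto an integer in [lo, hi] inclusive.
    /// Naive version: the clamp handles a ratio of exactly 1.0.
    let scaleBetween (lo: int) (hi: int) (ratio: float) =
        min hi (lo + int (ratio * float (hi - lo + 1)))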

For testing, you can use a static array of percentages. And every time you run the shuffle with the same percentages, you will get the same result. In this particular algorithm, an array of all 1.0 values will produce the same ordering as the original list.
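
For example (the ratio values here are arbitrary):

let ratios = [| 0.9; 0.1; 0.6; 0.3 |]                    // one fewer than the items below
let shuffled = shuffle ratios [| "A"; "B"; "C"; "D"; "E" |]
// Re-running with the same ratios always yields the same ordering,
// and ratios of all 1.0 values reproduce the original order.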

For generating the percentages at run-time, you can use the RNG of your choice, even a cryptographically strong one, and optionally store them for auditing. Here's some code I use (at the edge of my system where I allow side effects) to get randoms for shuffling.

let private fx_generateRandoms count =
    use rngCsp = new RNGCryptoServiceProvider()
    let buffer = new RandomBuffer( rngCsp.GetBytes, count )
    Array.init count ( fun _ -> buffer.GetRandomRatio() )


Pulling percentages from `RNGCryptoServiceProvider` also has some edge cases and optimizations, so I made the helper class `RandomBuffer`, conveniently on the same GitHub repo.

11 November 2016

API edge cases revisited

For pipeline steps, nesting is often better than chaining.

In my last post, I talked about the difficulty of chaining together steps in an API and explored a way to construct an API pipeline that was more flexible to change.

However, it has two problems. 1) There is a lot of boilerplate, especially the error branches. 2) It looks horrible.

So, I experimented and found another way that seems much better. I actually discovered this idea when working with Elm. However, I couldn't implement it in quite the same way. The syntax I was going for looked something like this:

AsyncResult.retn request
|> AsyncResult.bind (fun request -> getHandler request.Path
|> AsyncResult.bind (fun (handlerPath, handle) -> getUser request.User
))


However, this does not compile in F# because the parser doesn't like the spacing. Also the parentheses become a bit bothersome over time. So then I tried using the infix operator for bind (>>=), and eventually I stumbled upon a style that I found amenable. Here is the code for my query API pipeline. Look how fun it is.

let (>>=) x f =
    AsyncResult.bind f x

let run connectionString (request:Request) (readJsonFunc:ReadJsonFunc) =
    let correlationId = Guid.NewGuid()
    let log = Logger.log correlationId
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
 
    AsyncResult.retn request
    >>= fun request ->
        log <| RequestStarted (RequestLogEntry.fromRequest request)
        getHandler request.Path
 
    >>= fun (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        getUser request.User
 
    >>= fun user ->
        log <| UserFound (user.Identity.Name)
        authorize handlerPath user
 
    >>= fun claim ->
        log <| OperationAuthorized claim
        getJson readJson ()
 
    >>= fun json ->
        log <| JsonLoaded json
        let jsonRequest = JsonRequest.mk user json
        handle connectionString jsonRequest
 
    >>= fun (resultJson, count) ->
        log <| QueryFinished count
        toJsonResponse resultJson
 
    |> AsyncResult.teeError (ErrorEncountered >> log)
    |> AsyncResult.either id Responder.toErrorResponse
    |> Async.tee (ResponseCreated >> log)
    |> Async.StartAsTask


This style is a vast improvement in readability as well as (lack of) boilerplate. Now each step is actually nested, but F# lets me write them without nested indentation.

The primary value proposition of nested binds (e.g. x >>= fun x' -> f1 x' >>= f2) instead of chained binds (e.g. x >>= f1 >>= f2) is the easy access to all previous step results. For example, handle is defined in the 2nd step but is used in the 5th step. Notice that I could easily swap steps 2 and 3 without affecting any subsequent steps. (Select/cut from getHandler down through its log statement, and paste it below the UserFound log statement. No other refactoring needed!)

If I were to do chaining, Steps 3 and 4 would have to carry handle through their code into their output so that Step 5 has access to it. This creates coupling between steps, as well as extra data structures (tuple or record passed between steps) that need maintenance when steps change.
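
For contrast, a chained version would have to look something like this (a sketch, assuming an AsyncResult.map alongside bind):

AsyncResult.retn request
>>= fun request ->
        getHandler request.Path
        |> AsyncResult.map (fun (handlerPath, handle) -> request, handlerPath, handle)
>>= fun (request, handlerPath, handle) ->
        getUser request.User
        |> AsyncResult.map (fun user -> handlerPath, handle, user)
>>= fun (handlerPath, handle, user) ->
        authorize handlerPath user
        |> AsyncResult.map (fun claim -> handle, user, claim)
// ...and so on: every intermediate tuple must be maintained when steps change.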

I think the next thing this needs is inlining the logging. But for now, I'm pretty happy with it.

For reference, here is the old version of the code which enumerates every branch explicitly. (Also uses nesting instead of chaining.)

let run connectionString (request:Request) (readJsonFunc:ReadJsonFunc) =
    let correlationId = Guid.NewGuid()
    let log = Logger.log correlationId
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
    let logErr = tee (ErrorEncountered >> log) >> Responder.toErrorResponse
    let logResponse = ResponseCreated >> log
    async {
        log <| RequestStarted (RequestLogEntry.fromRequest request)
        match getHandler request.Path with
        | Error err ->
            return logErr err
 
        | Ok (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        match getUser request.User with
        | Error err ->
            return logErr err
 
        | Ok user ->
        log <| UserFound (user.Identity.Name)
        match authorize handlerPath user with
        | Error err ->
            return logErr err
 
        | Ok claim ->
        log <| OperationAuthorized claim
        let! jsonResult = getJson readJson ()
        match jsonResult with
        | Error err ->
            return logErr err
 
        | Ok json ->
        log <| JsonLoaded json
        let jsonRequest = JsonRequest.mk user json
        let! outEventsResult = handle connectionString jsonRequest
        match outEventsResult with
        | Error err ->
            return logErr err
 
        | Ok (resultJson, count) ->
            log <| QueryFinished count
            return Responder.toJsonResponse resultJson
 
    }
    |> Async.tee logResponse
    |> Async.StartAsTask

17 September 2016

Functional Programming Edge Case: API Edges

Update: See this followup post for a more elegant style that still provides the benefits of nesting vs chaining.

So I've run across a modeling problem where the "proper functional way" is unsatisfactory. I experimented for several days on alternatives. In the end the Pyramid of Doom prevailed. Since F# is whitespace-formatted, I suppose they are more like "Steps of Doom". This is a command handling pipeline for an API.


let run store request (readJsonFunc:ReadJsonFunc) at =
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
    let logErr = tee (ErrorEncountered >> log) >> Responder.toErrorResponse
    let logResponse = flip tuple DateTimeOffset.Now >> ResponseCreated >> log
    async {
        log <| RequestStarted (request, at)
        match getHandler request.Path with
        | Error err ->
            return logErr err
 
        | Ok (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        match getUser request.User with
        | Error err ->
            return logErr err
 
        | Ok user ->
        let tenantId = getClaimOrEmpty Constants.TenantClaimKey user
        let userId = getClaimOrEmpty Constants.UserIdKey user
        log <| UserFound (tenantId, userId)
        match authorize handlerPath user with
        | Error err ->
            return logErr err
 
        | Ok claim ->
        log <| OperationAuthorized claim
        let! jsonResult = getJson readJson ()
        match jsonResult with
        | Error err ->
            return logErr err
 
        | Ok json ->
        log <| JsonLoaded json
        match deserializeMeta json with
        | Error err ->
            return logErr err
 
        | Ok meta ->
        log <| RequestMetaDeserialized meta
        match checkTenancy user meta.TargetId with
        | Error err ->
            return logErr err
 
        | Ok x ->
        log <| TenancyValidated x
        // TODO page result from event store
        let! loadResult = loadEvents store meta.TargetId
        match loadResult with
        | Error err ->
            return logErr err
 
        | Ok slice ->
        log <| EventsLoaded (slice.FromEventNumber, slice.NextEventNumber, slice.LastEventNumber)
        match checkConcurrency meta.Version slice.LastEventNumber with
        | Error err ->
            return logErr err
 
        | Ok version ->
        log <| RequestVersionMatched version
        match deserializeSlice slice with
        | Error err ->
            return logErr err
 
        | Ok inEvents ->
        log <| EventsDeserialized inEvents
        let! outEventsResult = runHandler (meta:RequestMeta) (request:Request) user json handle inEvents
        match outEventsResult with
        | Error err ->
            return logErr err
 
        | Ok outEvents ->
        log <| HandlerFinished outEvents
        match outEvents with
        | [] ->
            log <| NoEventsToSave
            return Responder.noEventResponse ()
 
        | _ ->
        let eventMeta = createEventMeta tenantId userId
        match serializeEvents meta.TargetId meta.Version meta.CommandId request.CorrelationId eventMeta outEvents with
        | Error err ->
            return logErr err
 
        | Ok eventDatas -> // bad grammar for clarity!
        log <| EventsSerialized
        let! eventSaveResult = save store meta.TargetId meta.Version eventDatas
        match eventSaveResult with
        | Error err ->
            return logErr err
 
        | Ok write ->
            log <| EventsSaved (write.LogPosition.PreparePosition, write.LogPosition.CommitPosition, write.NextExpectedVersion)
            return Responder.toEventResponse ()
    }
    |> Async.tee logResponse
    |> Async.StartAsTask


Ok, so the Steps of Doom are invisible here because F# does not require me to indent nested match statements. Only the return expression requires indentation. Maybe it's more of the Cliffs of Insanity.

Now before you reach for your pitchfork and torch, let me explain how I got there and why it may really be the best choice here. (Not in general though.)

Let's talk about how to change this code. I can insert/remove/edit a step at the appropriate place in the chain (above another match expression), then fix or add affected variable references in the same function. That's it. Now let me describe or show some alternatives I've looked at.


The standard functional way to represent "callbacks" is with monads. Inside the above, you can see that I'm already using Result (aka Either). But that one alone is not sufficient, since I need to keep values from many previous successes. I also need to do some logging based on those values. And some of the calls are Async as well. So I would need some combination of Async, Result, and State. Even if I was interested in building such a franken-monad, it still doesn't solve the problem of representing this pipeline as one state object. Consider what the pipeline state object might look like:


type PipelineState =
    { Request: Request
      ReadJson: unit -> Async<string>
      // everything else optional!
      HandlerPath: string option
      ...
      ...
      ...
      ...
      ...
    }


Working with this state object actually makes the pipeline harder to reason about. The user of this object can't be sure which properties are available at which step without looking at the code that updates it. That's quite brittle. Updating the pipeline requires updating this type as well as code using it.

You could eliminate the question of whether properties were available at a given moment by nesting states and/or creating types per pipeline step. But then you have a Pyramid of Doom based on Option (aka Maybe). Updating the code around this is also quite a chore with all the types involved.

Instead of keeping a pipeline state object, you could use an ever-growing tuple as the state. This would make it easier to tell what was available at what step. However, this has a very large downside when you go to change the pipeline. Anytime you modify a pipeline step and its corresponding value, you have to modify it in all subsequent steps. This gets quite tedious.
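
In sketch form (assuming the pipeline's Request, Handler, User, Claim, and a hypothetical PipelineError type), the step signatures might look like:

// Each step's output tuple repeats everything gathered so far, so renaming
// or reordering one element forces edits to every subsequent step.
type Step1 = Request -> Result<Request * Handler, PipelineError>
type Step2 = Request * Handler -> Result<Request * Handler * User, PipelineError>
type Step3 = Request * Handler * User -> Result<Request * Handler * User * Claim, PipelineError>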


I tried something similar to the tuple method, but with a list of events instead. I was basically trying to apply Event Sourcing to make the events both the log and the source of truth for the pipeline. I quickly realized updating state wasn't going to work out, so I used pattern matching on lists to get previous values as needed. However, it suffered from the same problem as the growing tuple method, plus it allowed states that should be unrepresentable (unexpected sequences in the list). This is pulled from an older version with slightly different pipeline steps.


let step events =
    match events with
    | EventsSaved _ :: _ ->
        createResponse Responder.toEventResponse ()
 
    | NoEventsToSave :: _ ->
        createResponse Responder.noEventResponse ()
    
    | ErrorEncountered error :: _ ->
        createResponse Responder.toErrorResponse error
 
    | [ RequestStarted (request, _, _) ] ->
        Async.lift getHandler request.Path
 
    | [ HandlerFound _; RequestStarted (request, _, _) ] ->
        Async.lift getUser request.User
 
    | [ ClaimsLoaded user; HandlerFound (handlerPath, _); _ ] ->
        Async.lift2 authorize handlerPath user
 
    | [ RequestAuthorized _; _; _; RequestStarted (_, readJson, _) ] ->
        getJson readJson ()
 
    | [ JsonLoaded json; _; _; _; _ ] ->
        Async.lift deserializeMeta json
 
    | [ RequestMetaDeserialized meta; _; _; _; _; _ ] ->
        loadEvents meta.TargetId
 
    | [ EventsLoaded es; RequestMetaDeserialized meta; _; _; _; _; _ ] ->
        Async.lift2 checkConcurrency meta.Version <| List.length es
 
    | [ ConcurrencyOkay _; EventsLoaded es; RequestMetaDeserialized meta; JsonLoaded json; _; ClaimsLoaded user; HandlerFound (_, handle); RequestStarted (request, _, _) ] ->
        runHandler meta request user json handle es
 
    | [ HandlerFinished es; _; _; _; _; ClaimsLoaded user; _; RequestStarted (request, _, _) ] ->
        save request user es
 
    | _ ->
        Async.lift unexpectedSequence events


As you can tell, the above has a number of readability issues in addition to the challenges listed.


So probably the most compelling alternative would be to create one large method which takes in all the values and uses Retn + Apply to build up the function until it's ready to run. A sketch of it might look like this:


retn (firstStepGroupFn request readJson at)
|> apply (getHandler request.Path |> Result.tee logHandler)
|> apply (getUser request.User |> Result.tee logUser)
|> apply (authorize handlerPath user |> Result.tee logAuthorize)
|> bind (
    retn (secondStepGroupFn ...)
    >> apply ...
    )


We get into a little bit of Type Tetris here, because some results are Async but others are not. So all of the individual steps will have to be lifted to AsyncResult if they weren't already.
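
The lifting itself is mechanical. A sketch of the helpers (the names are assumed, not from a specific library):

module AsyncResult =
    // Lift a plain Result into the AsyncResult world.
    let ofResult (result: Result<'a, 'e>) : Async<Result<'a, 'e>> =
        async { return result }

    // Lift a plain Async into the AsyncResult world by wrapping its value in Ok.
    let ofAsync (computation: Async<'a>) : Async<Result<'a, 'e>> =
        async {
            let! value = computation
            return Ok value
        }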

This is what it would look like explicitly, but we could make it look much nicer by creating adapter functions: ones that do the lifting and tee off logs, and ones for the groups of steps (firstStepGroupFn, secondStepGroupFn, etc). The step group functions are required because some steps require values computed from the combination of earlier ones.

Changing this kind of structure is a challenge. For instance, we use the request data most of the way down the pipeline. So it would have to be passed into each successive function. If we later changed how we used that data, we would have to touch a number of functions. We may even have to change the way groups of steps are structured. The compiler will aid us there, but it's still tedious.


Reflecting on my Cliffs of Insanity solution... I think it's quite ugly, but it's also the simplest to understand and to change of the alternatives shown. It also makes sense that this solution could be the right answer because the edges of the system are where side effects happen. And indeed most of the code above is explicitly modeling all the ways that IO and interop can fail. For the core of an app where functions should be pure, this kind of solution would be a bad fit. But maybe, just maybe, it's good at the edges.

Of course, there may be a functional pattern that fits here that I just haven't considered. I will continue to learn and be on the lookout for better alternatives.

02 August 2016

React/Redux coming from CQRS/ES

Looking at React + Redux, there is a noticeable similarity to CQRS + ES. Having these front-end and back-end concepts aligned may be helpful for those who cross the boundary between front and back. However, there are some subtle differences that make the concepts "not quite" fit. Let's explore that.


Actions are Events

This can't be any more explicit than what is stated in the Redux documentation: "Actions describe the fact that something happened" (http://redux.js.org/docs/basics/Reducers.html).

Action Creators are Commands... and Their Handlers

The command and the command handler are squashed into the same concept. The "command" doesn't travel outside the application, so there's less need to convert it to a distinct command message. In fact, doing so feels awkward and redundant due to the next point.

An event (aka action) is almost always generated by a handler. One of the main reasons commands can fail on the back-end is because they are not trusted, so the code inside the handler must validate the command (aka protect invariants). On the front end, the command is considered trusted because the components + state are protecting the invariants. For example, you won't be able to issue a command (the button will be disabled) if the invariant is violated. (If you can, it's considered a bug.)

There's also a difference in how errors are handled. On the UI they are typically considered events (the user needs to be notified), whereas back-end command handlers typically just return errors as responses without affecting the domain.

What remains the same about the command handler between front- and back-ends is that the handler manages dependencies for the use case. On the front end that often takes the form of wrangling HTTP API calls.


Store is a View Database, Reducers are View Updaters

This is evident from the note on Reducers: "As your app grows, instead of adding stores, you split the root reducer into smaller reducers independently operating on the different parts of the state tree" (http://redux.js.org/docs/api/Store.html, in "A Note for Flux Users").

Essentially, different properties off of the Store's state represent different views. A reducer is responsible for updating its own view.


But keep in mind...

I'm just getting started with React/Redux. These are mental models based on an understanding of CQRS/ES. "All models are wrong. Some of them are useful." (George E. P. Box) This doesn't map all the odds and ends from CQRS/ES and friends, but hopefully it's useful to you.