11 November 2016

API edge cases revisited

For pipeline steps, nesting is often better than chaining.

In my last post, I talked about the difficulty of chaining together steps in an API and explored a way to construct an API pipeline that was more flexible to change.

However, that approach has two problems. 1) There is a lot of boilerplate, especially in the error branches. 2) It looks horrible.

So, I experimented and found another way that seems much better. I actually discovered this idea when working with Elm. However, I couldn't implement it in quite the same way. The syntax I was going for looked something like this:

AsyncResult.retn request
|> AsyncResult.bind (fun request -> getHandler request.Path
|> AsyncResult.bind (fun (handlerPath, handle) -> getUser request.User
))


However, this does not compile in F# because the parser doesn't like the spacing. Also the parentheses become a bit bothersome over time. So then I tried using the infix operator for bind (>>=), and eventually I stumbled upon a style that I found amenable. Here is the code for my query API pipeline. Look how fun it is.

let (>>=) x f =
    AsyncResult.bind f x

let run connectionString (request:Request) (readJsonFunc:ReadJsonFunc) =
    let correlationId = Guid.NewGuid()
    let log = Logger.log correlationId
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
 
    AsyncResult.retn request
    >>= fun request ->
        log <| RequestStarted (RequestLogEntry.fromRequest request)
        getHandler request.Path
 
    >>= fun (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        getUser request.User
 
    >>= fun user ->
        log <| UserFound (user.Identity.Name)
        authorize handlerPath user
 
    >>= fun claim ->
        log <| OperationAuthorized claim
        getJson readJson ()
 
    >>= fun json ->
        log <| JsonLoaded json
        let jsonRequest = JsonRequest.mk user json
        handle connectionString jsonRequest
 
    >>= fun (resultJson, count) ->
        log <| QueryFinished count
        toJsonResponse resultJson
 
    |> AsyncResult.teeError (ErrorEncountered >> log)
    |> AsyncResult.either id Responder.toErrorResponse
    |> Async.tee (ResponseCreated >> log)
    |> Async.StartAsTask


This style is a vast improvement in readability as well as (lack of) boilerplate. Now each step is actually nested, but F# lets me write them without nested indentation.
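
For reference, here is roughly what the AsyncResult and Async helpers behind this pipeline might look like. This is a minimal sketch, not my actual module, but something close in shape would do:

module AsyncResult =

    let retn x =
        async { return Ok x }

    let bind (f: 'a -> Async<Result<'b, 'e>>) (x: Async<Result<'a, 'e>>) =
        async {
            let! result = x
            match result with
            | Ok value -> return! f value
            | Error err -> return Error err
        }

    let map f x =
        bind (f >> retn) x

    // run a side effect on the Error case, passing the result through unchanged
    let teeError f x =
        async {
            let! result = x
            match result with
            | Error err -> f err
            | Ok _ -> ()
            return result
        }

    // collapse both cases into a single value
    let either onOk onError x =
        async {
            let! result = x
            match result with
            | Ok value -> return onOk value
            | Error err -> return onError err
        }

module Async =
    // run a side effect on the value, passing it through unchanged
    let tee f x =
        async {
            let! value = x
            f value
            return value
        }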

The primary value proposition of nested binds (e.g. x >>= fun x' -> f1 x' >>= f2) instead of chained binds (e.g. x >>= f1 >>= f2) is easy access to all previous step results. For example, handle is defined in the 2nd step but is used in the 5th step. Notice that I could easily swap steps 2 and 3 without affecting any subsequent steps. (Select/cut from getHandler down through its log statement, and paste it below the UserFound log statement. No other refactoring needed!)

If I were to do chaining, Steps 3 and 4 would have to carry handle through their code into their output so that Step 5 has access to it. This creates coupling between steps, as well as extra data structures (tuple or record passed between steps) that need maintenance when steps change.
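
To make the coupling concrete, here is a rough sketch (not code I actually use) of the middle of the pipeline in chained style, using the AsyncResult.map sketched above. Because each lambda is closed off by parentheses before the next >>=, every step has to repack handle (and anything else a later step needs) into its output:

// chained style, logging omitted; note the repacking at every step
getHandler request.Path
>>= (fun (handlerPath, handle) ->
        getUser request.User
        |> AsyncResult.map (fun user -> (handle, handlerPath, user)))   // carry handle along
>>= (fun (handle, handlerPath, user) ->
        authorize handlerPath user
        |> AsyncResult.map (fun _claim -> (handle, user)))              // ...and again
>>= (fun (handle, user) ->
        getJson readJson ()
        |> AsyncResult.map (fun json -> (handle, user, json)))
>>= (fun (handle, user, json) ->
        handle connectionString (JsonRequest.mk user json))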

I think the next thing this needs is inlining the logging. But for now, I'm pretty happy with it.

For reference, here is the old version of the code which enumerates every branch explicitly. (Also uses nesting instead of chaining.)

let run connectionString (request:Request) (readJsonFunc:ReadJsonFunc) =
    let correlationId = Guid.NewGuid()
    let log = Logger.log correlationId
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
    let logErr = tee (ErrorEncountered >> log) >> Responder.toErrorResponse
    let logResponse = ResponseCreated >> log
    async {
        log <| RequestStarted (RequestLogEntry.fromRequest request)
        match getHandler request.Path with
        | Error err ->
            return logErr err
 
        | Ok (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        match getUser request.User with
        | Error err ->
            return logErr err
 
        | Ok user ->
        log <| UserFound (user.Identity.Name)
        match authorize handlerPath user with
        | Error err ->
            return logErr err
 
        | Ok claim ->
        log <| OperationAuthorized claim
        let! jsonResult = getJson readJson ()
        match jsonResult with
        | Error err ->
            return logErr err
 
        | Ok json ->
        log <| JsonLoaded json
        let jsonRequest = JsonRequest.mk user json
        let! outEventsResult = handle connectionString jsonRequest
        match outEventsResult with
        | Error err ->
            return logErr err
 
        | Ok (resultJson, count) ->
            log <| QueryFinished count
            return Responder.toJsonResponse resultJson
 
    }
    |> Async.tee logResponse
    |> Async.StartAsTask

17 September 2016

Functional Programming Edge Case: API Edges

Update: See this followup post for a more elegant style that still provides the benefits of nesting vs chaining.

So I've run across a modeling problem where the "proper functional way" is unsatisfactory. I experimented for several days on alternatives. In the end the Pyramid of Doom prevailed. Since F# is whitespace-formatted, I suppose they are more like "Steps of Doom". This is a command handling pipeline for an API.


let run store request (readJsonFunc:ReadJsonFunc) at =
    let readJson = fun _ -> readJsonFunc.Invoke() |> Async.AwaitTask
    let logErr = tee (ErrorEncountered >> log) >> Responder.toErrorResponse
    let logResponse = flip tuple DateTimeOffset.Now >> ResponseCreated >> log
    async {
        log <| RequestStarted (request, at)
        match getHandler request.Path with
        | Error err ->
            return logErr err
 
        | Ok (handlerPath, handle) ->
        log <| HandlerFound handlerPath
        match getUser request.User with
        | Error err ->
            return logErr err
 
        | Ok user ->
        let tenantId = getClaimOrEmpty Constants.TenantClaimKey user
        let userId = getClaimOrEmpty Constants.UserIdKey user
        log <| UserFound (tenantId, userId)
        match authorize handlerPath user with
        | Error err ->
            return logErr err
 
        | Ok claim ->
        log <| OperationAuthorized claim
        let! jsonResult = getJson readJson ()
        match jsonResult with
        | Error err ->
            return logErr err
 
        | Ok json ->
        log <| JsonLoaded json
        match deserializeMeta json with
        | Error err ->
            return logErr err
 
        | Ok meta ->
        log <| RequestMetaDeserialized meta
        match checkTenancy user meta.TargetId with
        | Error err ->
            return logErr err
 
        | Ok x ->
        log <| TenancyValidated x
        // TODO page result from event store
        let! loadResult = loadEvents store meta.TargetId
        match loadResult with
        | Error err ->
            return logErr err
 
        | Ok slice ->
        log <| EventsLoaded (slice.FromEventNumber, slice.NextEventNumber, slice.LastEventNumber)
        match checkConcurrency meta.Version slice.LastEventNumber with
        | Error err ->
            return logErr err
 
        | Ok version ->
        log <| RequestVersionMatched version
        match deserializeSlice slice with
        | Error err ->
            return logErr err
 
        | Ok inEvents ->
        log <| EventsDeserialized inEvents
        let! outEventsResult = runHandler (meta:RequestMeta) (request:Request) user json handle inEvents
        match outEventsResult with
        | Error err ->
            return logErr err
 
        | Ok outEvents ->
        log <| HandlerFinished outEvents
        match outEvents with
        | [] ->
            log <| NoEventsToSave
            return Responder.noEventResponse ()
 
        | _ ->
        let eventMeta = createEventMeta tenantId userId
        match serializeEvents meta.TargetId meta.Version meta.CommandId request.CorrelationId eventMeta outEvents with
        | Error err ->
            return logErr err
 
        | Ok eventDatas -> // bad grammar for clarity!
        log <| EventsSerialized
        let! eventSaveResult = save store meta.TargetId meta.Version eventDatas
        match eventSaveResult with
        | Error err ->
            return logErr err
 
        | Ok write ->
            log <| EventsSaved (write.LogPosition.PreparePosition, write.LogPosition.CommitPosition, write.NextExpectedVersion)
            return Responder.toEventResponse ()
    }
    |> Async.tee logResponse
    |> Async.StartAsTask


Ok, so the Steps of Doom are invisible here because F# does not require me to indent nested match statements. Only the return expression requires indentation. Maybe it's more of the Cliffs of Insanity.

Now before you reach for your pitchfork and torch, let me explain how I got there and why it may really be the best choice here. (Not in general though.)

Let's talk about how to change this code. I can insert/remove/edit a step at the appropriate place in the chain (above another match expression), then fix or add affected variable references in the same function. That's it. Now let me describe or show some alternatives I've looked at.


The standard functional way to represent "callbacks" is with monads. Inside the above, you can see that I'm already using Result (aka Either). But that one alone is not sufficient, since I need to keep values from many previous successes. I also need to do some logging based on those values. And some of the calls are Async as well. So I would need some combination of Async, Result, and State. Even if I were interested in building such a franken-monad, it still leaves the problem of representing the pipeline's accumulated values as one state object. Consider what the pipeline state object might look like:


type PipelineState =
    { Request: Request
      ReadJson: unit -> Async<string>
      // everything else optional!
      HandlerPath: string option
      ...
      ...
      ...
      ...
      ...
    }


Working with this state object actually makes the pipeline harder to reason about. The user of this object can't be sure which properties are available at which step without looking at the code that updates it. That's quite brittle. Updating the pipeline requires updating this type as well as code using it.

You could eliminate the question of whether properties were available at a given moment by nesting states and/or creating types per pipeline step. But then you have a Pyramid of Doom based on Option (aka Maybe). Updating the code around this is also quite a chore with all the types involved.

Instead of keeping a pipeline state object, you could use an ever-growing tuple as the state. This would make it easier to tell what was available at what step. However, this has a very large downside when you go to change the pipeline. Anytime you modify a pipeline step and its corresponding value, you have to modify it in all subsequent steps. This gets quite tedious.
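
For illustration (this is not code I actually wrote), the growing tuple would look something like this:

// Each step appends its result to the tuple so later steps can reach it.
// Inserting or changing a step means editing the tuple in every step after it.
getHandler request.Path
|> Result.bind (fun (handlerPath, handle) ->
    getUser request.User
    |> Result.map (fun user -> (handlerPath, handle, user)))
|> Result.bind (fun (handlerPath, handle, user) ->
    authorize handlerPath user
    |> Result.map (fun claim -> (handlerPath, handle, user, claim)))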


I tried something similar to the tuple method, but with a list of events instead. I was basically trying to apply Event Sourcing to make the events both the log and the source of truth for the pipeline. I quickly realized updating state wasn't going to work out, so I used pattern matching on lists to get previous values as needed. However, it suffered from the same problem as the growing tuple method, plus it made invalid states representable (unexpected sequences in the list). This is pulled from an older version with slightly different pipeline steps.


let step events =
    match events with
    | EventsSaved _ :: _ ->
        createResponse Responder.toEventResponse ()
 
    | NoEventsToSave :: _ ->
        createResponse Responder.noEventResponse ()
    
    | ErrorEncountered error :: _ ->
        createResponse Responder.toErrorResponse error
 
    | [ RequestStarted (request, _, _) ] ->
        Async.lift getHandler request.Path
 
    | [ HandlerFound _; RequestStarted (request, _, _) ] ->
        Async.lift getUser request.User
 
    | [ ClaimsLoaded user; HandlerFound (handlerPath, _); _ ] ->
        Async.lift2 authorize handlerPath user
 
    | [ RequestAuthorized _; _; _; RequestStarted (_, readJson, _) ] ->
        getJson readJson ()
 
    | [ JsonLoaded json; _; _; _; _ ] ->
        Async.lift deserializeMeta json
 
    | [ RequestMetaDeserialized meta; _; _; _; _; _ ] ->
        loadEvents meta.TargetId
 
    | [ EventsLoaded es; RequestMetaDeserialized meta; _; _; _; _; _ ] ->
        Async.lift2 checkConcurrency meta.Version <| List.length es
 
    | [ ConcurrencyOkay _; EventsLoaded es; RequestMetaDeserialized meta; JsonLoaded json; _; ClaimsLoaded user; HandlerFound (_, handle); RequestStarted (request, _, _) ] ->
        runHandler meta request user json handle es
 
    | [ HandlerFinished es; _; _; _; _; ClaimsLoaded user; _; RequestStarted (request, _, _) ] ->
        save request user es
 
    | _ ->
        Async.lift unexpectedSequence events


As you can tell, the above has a number of readability issues in addition to the challenges listed.


So probably the most compelling alternative would be to create one large method which takes in all the values and uses Retn + Apply to build up the function until it's ready to run. A sketch of it might look like this:


retn (firstStepGroupFn request readJson at)
|> apply (getHandler request.Path |> Result.tee logHandler)
|> apply (getUser request.User |> Result.tee logUser)
|> apply (authorize handlerPath user |> Result.tee logAuthorize)
|> bind (
    retn (secondStepGroupFn ...)
    >> apply ...
    )


We get into a little bit of Type Tetris here, because some results are Async but others are not. So all of the individual steps will have to be lifted to AsyncResult if they weren't already.

This is what it would look like explicitly, but we could make it look much nicer with adapter functions: ones that do the lifting and tee off the logs, and ones for the groups of steps (firstStepGroupFn, secondStepGroupFn, etc). The step group functions are required because some steps need values computed from the combination of earlier ones.
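
As a rough sketch of one such adapter (illustrative only, reusing the Result.tee helper seen above): lift a synchronous Result-returning step into Async and tee off its log entry, so the retn/apply chain stays uncluttered.

// Illustrative sketch: wrap a synchronous step so it logs on success and
// returns an Async<Result<...>> suitable for an AsyncResult-style apply.
let liftLogged logOnSuccess step input =
    async { return step input |> Result.tee logOnSuccess }

// usage (assuming apply now operates on AsyncResult):
// |> apply (liftLogged logHandler getHandler request.Path)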

Changing this kind of structure is a challenge. For instance, we use the request data most of the way down the pipeline. So it would have to be passed into each successive function. If we later changed how we used that data, we would have to touch a number of functions. We may even have to change the way groups of steps are structured. The compiler will aid us there, but it's still tedious.


Reflecting on my Cliffs of Insanity solution... I think it's quite ugly, but it's also the simplest to understand and to change of the alternatives shown. It also makes sense that this solution could be the right answer because the edges of the system are where side effects happen. And indeed most of the code above is explicitly modeling all the ways that IO and interop can fail. For the core of an app where functions should be pure, this kind of solution would be a bad fit. But maybe, just maybe, it's good at the edges.

Of course, there may be a functional pattern that fits here that I just haven't considered. I will continue to learn and be on the lookout for better alternatives.

02 August 2016

React/Redux coming from CQRS/ES

Looking at React + Redux, there is a noticeable similarity to CQRS + ES. These front-end and back-end concepts being aligned may be helpful for those that cross the boundary between front and back. However, there are some subtle differences that make the concepts "not quite" fit. Let's explore that.


Actions are Events

This can't be any more explicit than what is stated in the Redux documentation.
Actions describe the fact that something happened
http://redux.js.org/docs/basics/Reducers.html

Action Creators are Commands... and Their Handlers

The command and the command handler are squashed into the same concept. The "command" doesn't travel outside the application, so there's less need to convert it to a distinct command message. In fact, doing so feels awkward and redundant due to the next point.

An event (aka action) is almost always generated by a handler. One of the main reasons commands can fail on the back-end is that they are not trusted, so the code inside the handler must validate the command (aka protect invariants). On the front end, the command is considered trusted because the components + state are protecting the invariants. For example, you won't be able to issue a command (the button will be disabled) if the invariant is violated. (If you can, it's considered a bug.)

There's also how errors are handled. In the UI they are typically treated as events (the user needs to be notified), whereas back-end command handlers just return errors as responses without affecting the domain.

What remains the same about the command handler between front- and back-ends is that the handler manages dependencies for the use case. On the front end that often takes the form of wrangling HTTP API calls.


Store is a View Database, Reducers are View Updaters

This is evident from the note on Reducers.
As your app grows, instead of adding stores, you split the root reducer into smaller reducers independently operating on the different parts of the state tree.
http://redux.js.org/docs/api/Store.html in A Note for Flux Users
Essentially, different properties off of the Store's state represent different views. A reducer is responsible for updating its own view.


But keep in mind...

I'm just getting started with React/Redux. These are mental models based on an understanding of CQRS/ES. "All models are wrong. Some of them are useful." (George E. P. Box) This doesn't map all the odds and ends from CQRS/ES and friends, but hopefully it's useful to you.

31 May 2016

Re Simplifying Message Handlers

In a previous post, Simplifying Message Handlers, I put forth a way to remove unnecessary interface declarations for message handlers (commonly seen in CQRS examples). I provided the research I used and left the implementation as an exercise to the reader. Today I'm going to revisit that topic. I'll explain my experiences with this process which led me to believe it was a bad idea. I'll also explain my current method of tying messages to code.

So wiring up with reflection and marker interfaces is very clever. (Note: clever is a developer curse word.) So clever, in fact, that it remained a mystery to my co-worker even after I explained it multiple times. She would always ask a perfectly reasonable question like: "Ok, but what causes this code to run?" Then I would show her the API call, which consults the handler collection to figure out who needs to get the message. That would lead to showing the bootstrap code, which called the reflection code to find all the handlers. By then her eyes would glaze over, and she would just label it "magic" and move on. The reflection code was just this side of inscrutable... relying on a small amount of arcane knowledge of .NET CLR internals. So it was hard to understand without actually debugging it or already knowing how reflection works.

The end result was that even after multiple explanations, she was afraid to touch anything in the project for fear of using the wrong incantation (marker interface and method declaration, in this case). Needless to say, this presented some challenges. It got to the point that all items dealing with X were mine and Y were hers... exactly what you are supposed to combat on an Agile team. And that's not even to mention the tooling and production issues. In particular, we discovered the hard way that when an IIS application pool wakes from idle sleep, it only loads assemblies which are directly referenced. This is different from its startup behavior, where it loads all deployed assemblies into the app domain. So we would deploy and everything would work fine; then when we all went home and India started work, the app would wake from sleep and crash with a TypeLoadException. As for tooling issues, I had to become blind to "this code is not referenced anywhere" warnings and get used to not being able to Go To Definition in certain cases.

So what do I do now? The most boring thing possible... I manually wire things up. It's crystal clear and leaves a reference trail that is easily discoverable by other developers and tooling. Once you settle on that, the remaining issue is optimizing for manual wiring. It is ideal to have only one place where you wire your implementations to the handling infrastructure.
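
As a sketch of what that one place can look like (all the names below are made up for illustration), an explicit registration map is about as boring, and as discoverable, as it gets:

// All handler registrations live in this one list. Nothing is discovered by
// reflection, so "what causes this code to run?" is answered by Go To Definition.
// (CommandEnvelope, HandlerError, OrderHandlers, etc. are illustrative names.)
let handlers : Map<string, CommandEnvelope -> Async<Result<unit, HandlerError>>> =
    Map.ofList
        [ "PlaceOrder",   OrderHandlers.placeOrder
          "CancelOrder",  OrderHandlers.cancelOrder
          "RegisterUser", UserHandlers.registerUser ]

let dispatch (envelope: CommandEnvelope) =
    match Map.tryFind envelope.MessageType handlers with
    | Some handle -> handle envelope
    | None -> async { return Error (UnknownMessageType envelope.MessageType) }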

03 February 2016

PC Building - Features I don't need

So I've nearly come to the end of planning my next PC build. (Let's be honest, the only reason I planned this thoroughly was because I was waiting for my tax refund.) And I've come to some conclusions about certain features and why I don't need them.

Thunderbolt

See my previous comments on the topic. It's mostly a mobile-oriented feature. USB 3.1 Gen2 is sufficient.

PEX Chips

These are also known as "PLX" chips (after the name of the manufacturer). My original thought here was to get an ASUS Z170-WS, which includes more electrical PCIe lanes (via a PEX chip), so I could have more PCIe lanes for future expansion. However, after finding the electrical diagram in the manual, I realized that it was little more than a switch with an uplink speed still limited by the CPU's PCIe lanes. So even though you can run "SLI x16/x16", all those connections are still sharing only 16 lanes back to the CPU. One advantage the PEX chip might have (if designed like a network switch) is increased GPU-to-GPU bandwidth. However, I haven't looked into the chip architecture, so I don't know that for sure. I'm also not planning to run SLI anyway.

At the end of the day, you're not actually gaining PCIe bandwidth. You're still doing the same old splitting of the CPU's PCIe lanes, but in a more flexible way and at the cost of dollars and negligible latency.

Multiple M.2 ports

At first, I was scrambling to find motherboards with dual M.2 ports so I could (one day) put two Samsung 950 Pros in RAID0. However, I later realized that this would be bottlenecked by the DMI 3.0 interface. There is a great article on these drives here. Writes do nearly double in RAID0, but reads only increase by a factor of 1.4 or so due to DMI limits. And those results were likely not exercising the other devices which share DMI bandwidth: USB, LAN, other SATA devices... pretty much everything that's not a GPU PCIe slot. Their real-world tests also show little practical difference between RAID0 and a single drive. Even if you had a workload that could notice the difference, M.2 bandwidth is still bottlenecked by DMI.

Now at this point, I realized that to break through this bottleneck, I would need to plug into a PCIe slot that runs straight to the CPU. Even ignoring the fact that I would be stealing 8 lanes from the GPU (and that assumes I have some future GPU which can use more than 8), motherboards have problems booting from those. (Yes, manufacturers can put an HBA chip on there to make it bootable, but that adds to an already-astronomical cost.) So basically, we're going to have to wait for chipset tech to catch up with storage tech and then buy new motherboards. From what I've seen of the upcoming chipsets, I don't believe it'll be this year. Although Optane is a bit of a wildcard. I can see PCIe SSDs being the most immediate step for it. NVRAM DIMMs could be a longer-term proposition requiring OS support and/or motherboard mfg development to work the kinks out over the next few years.

SLI/CrossFire

Like many, I bought a graphics card intending to later buy its twin for SLI or CrossFire and extend the useful gaming life of my rig. I have been meaning to do this for several generations now, but it has never happened. The reason is that graphics cards are advancing rapidly enough that by the time I need to update my rig with better graphics, there is always a sufficient single-card upgrade. Considering that most graphics cards don't lose any performance going from x16 to x8 lanes, we still have a lot of room for single-card upgrades in the future. If a new graphics card can saturate my PCIe bus, then it's probably time for a computer upgrade anyway. (For example, my 6-year-old computer with PCIe 2.0 may be at saturation with current-gen GPUs... and it's time for an upgrade.)

Now even if I were keeping less of an eye on performance per dollar, multi-GPU has some inherent downsides. Here is a great review on SLI perf. Games may not support it. Even if a game will use it, the experience may be sub-par, with reports of graphical issues or even just no real difference in FPS. It's one of those "when it works, it's great" situations. So the value proposition becomes even worse for multi-GPU.

That's not to say that I will never go multi-GPU... I will just no longer buy a less-powerful GPU with the plan of later buying another to make up the performance difference.

Note that I don't do video content creation/rendering, where an SLI/CrossFire setup could be consistently beneficial.

SATA Express

It's a dead spec, but you still see it on motherboards (whether you want it or not) because it was still a thing when the last motherboard design cycle started. There are no drives which use it. One of these connectors can be looked at as just 2 SATA ports. The most ingenious use of this port I have seen so far was by ASRock, which used it as a front-panel USB 3.1 header on its Extreme+ Z170 motherboards.

NIC Teaming

Some people are interested in dual LANs for NIC Teaming. But unless we're talking about a server, NIC Teaming serves no real purpose. And in fact it can cause more problems than using a single NIC, due to having to carefully configure it for your usage. You can't just check a box and it magically works. Obviously, it doesn't increase your internet bandwidth. It also doesn't increase connection speeds to individual computers. Those are still limited by the other computer's link speed. At best it only allows more computers to connect to you at a time using their max speed. Those hoping for a gaming advantage will be sorely disappointed.

Redundancy is also a false advantage for gamers. The chances of your NIC going out are low to start with. And when it happens you probably want to know about it so you can a) unplug it since it can spam your network with garbage packets and b) disable it in the BIOS. To be fault tolerant to network problems and not just NIC failures, you are also going to need a whole redundant infrastructure (each port plugged into a different switch, each switch connected to a different internet provider, etc). I don't know anyone who goes to that expense at their home.

Having 2 NICs is nice in general, but having your sole NIC die (while the rest of the motherboard manages to be fine) is hardly much of a problem. It's a pretty simple matter to grab a PCIe x1 NIC like this one and be back on your merry way.

That's all I can think of for now...

22 January 2016

Building a new PC -- should I care about Thunderbolt 3? No.

I am looking at building a new PC, and I came across this question myself. Recently, some motherboard manufacturers have announced support for Thunderbolt 3. And what's not to love about it? Very high speed and backward- (or is it sideways-?) compatible with USB 3.1.

The issue is that you can't get everything you might want on one motherboard. The Intel Z170 chipset (the platform these Alpine Ridge TB3 boards are built on) has a limited number of PCI Express lanes for IO: 26 to be precise, 6 of which are basically reserved for USB 3.0 and interconnects, leaving 20 usable. Those 20 lanes have to be divvied up between PCIe slots, storage (including SATA and PCIe-based storage like M.2 and U.2), networking, Thunderbolt controllers, USB 3.1 controllers, etc. (Graphics cards still use the separate 16 lanes provided by the CPU.)

The Alpine Ridge Thunderbolt 3 controller takes up 4 PCIe lanes, which only accounts for ~32Gbps of the advertised 40Gbps, but it also hijacks the DisplayPort interconnect from the CPU to make up the difference. This is why I doubt we will ever see full-speed Thunderbolt 3 PCIe x4 cards. To get full speed, it would have to go in an x8 slot and split bandwidth with the GPU, since chipset PCIe lanes only come in x4 groupings.

So to get TB3, you will have to give up something... like an M.2 slot, or 4 SATA ports, or that 3rd or 4th (CrossFire) GPU, or just an extra PCIe x4 slot. The question is whether it's worth the trade. So I looked at the proposed uses for Thunderbolt 3 through the lens of an enthusiast PC user and detailed my observations below. Note that these use cases could go quite differently for mobile or small form factor users. In fact, the use as a docking station port for laptops is very compelling.

External Storage
This is perhaps the most likely enthusiast use of TB3. However, I see this only being used in specific cases where storage is a bottleneck (video content creation, for example). I don't see this being used by an average geek like me. My storage server is (would be) mainly for centralization purposes with large capacity disks. So, it doesn't make sense to build a new PC with TB3 for this purpose. I build the new PC for use as my workstation, then put the old one on storage duty. So my storage server isn't going to have TB3 anytime soon. Even if the old computer had it, I don't need to buy an external TB3 enclosure when my old computer has connections and room for drives internally.

Displays
This use case is not interesting for a desktop enthusiast/gaming PC. Monitors are plugged into dedicated graphics cards, which tend to shove pixels out to the displays themselves for maximum performance. I can't imagine a graphics card mfg wanting to route their output through a Thunderbolt controller (risking performance) when they could send it to the display directly. It could be that graphics cards will eventually have TB3 video ports, but that's not anything I need to have on my motherboard.

Device Charging
Only for mobile scenarios. I don't have a pressing need for this feature when wall sockets still exist and are in more rooms than my computer.

USB3.1 (Gen2) compatibility
This is nice, but USB Type-C connectors are already present on most current-generation motherboards without TB3, at a smaller IO budget.

External graphics
This is really only useful for mobile scenarios. My desktop already has a place set for a video card or two.

Thunderbolt Networking
The "for-free" scenario mentioned in the press release was connecting 2 computers with a TB3 cable. That could be interesting for transferring data in limited scenarios (e.g. support), but it will require software to make it work and TB3 becoming ubiquitous to make it even remotely likely. Using TB3 to hook into a larger network is far from "for-free". You'll likely buy a TB3 to RJ45 converter in that case. Your other option would be a switch with a TB3 connector, which is made less likely by the fact that TB3 cables top out at about 3m or 10ft. Longer-distance optical TB3 cables could be a thing in the future, but they will likely be much more expensive than an adapter + CAT6 cable.


In all, I don't imagine myself ever using Thunderbolt 3 from my desktop PC, so despite my initial inclination, I'm not going to aim for it when building a new PC. However, I will look for it on my wife's next laptop.