Thunderbolt
See my previous comments on the topic. It's mostly a mobile-oriented feature. USB 3.1 Gen2 is sufficient.
PEX Chips
These are also known as "PLX" chips (after the name of the manufacturer). My original thought here was to get an ASUS Z170-WS, which includes more electrical PCIe lanes (via a PEX chip). Then I could have more PCIe lanes for future expansion. However, after finding the electrical diagram in the manual, I realized that it was little more than a switch with an uplink still limited by the CPU's PCIe lanes. So even though you can run "SLI x16/x16", all of those connections are still sharing only 16 lanes back to the CPU. One advantage the PEX chip might have (if it is designed like a network switch) is increased GPU-to-GPU bandwidth. However, I haven't looked into the chip architecture, so I don't know that for sure. I'm not planning to run SLI anyway.
At the end of the day, you're not actually gaining PCIe bandwidth. You're still splitting the same CPU PCIe lanes, just in a more flexible way, at the cost of extra dollars and a negligible amount of added latency.
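To put rough numbers on it, here's a quick back-of-the-envelope sketch. The per-lane figure is an approximate PCIe 3.0 number and the two-GPU scenario is hypothetical; none of this comes from the Z170-WS manual itself.

# Rough back-of-the-envelope numbers (approximations, not benchmarks).
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# which works out to roughly 0.985 GB/s of usable bandwidth per lane.
PCIE3_GBPS_PER_LANE = 0.985

def slot_bandwidth(lanes):
    """Peak one-direction bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * PCIE3_GBPS_PER_LANE

# Two "x16" slots hang off the PEX switch...
slot_a = slot_bandwidth(16)   # ~15.8 GB/s from GPU A to the switch
slot_b = slot_bandwidth(16)   # ~15.8 GB/s from GPU B to the switch
uplink = slot_bandwidth(16)   # ...but the switch still has only 16 lanes to the CPU

# If both GPUs talk to the CPU at the same time, they share the uplink,
# which is no better than a plain x8/x8 split off the CPU.
print(f"per-slot link:             {slot_a:.1f} GB/s")
print(f"shared uplink to the CPU:  {uplink:.1f} GB/s")
print(f"each GPU under contention: {uplink / 2:.1f} GB/s (same as x8)")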
Multiple M.2 ports
At first, I was scrambling to find motherboards with dual M.2 ports so I could (one day) put two Samsung 950 Pros in a RAID0. However, I later realized that this would be bottlenecked by the DMI 3.0 interface. There is a great article on these drives here. Writes do nearly double in RAID0, but reads only improve by a factor of about 1.4 due to the DMI limit. And those results were likely not exercising the other devices which share DMI bandwidth: USB, LAN, SATA devices... pretty much everything that isn't a GPU PCIe slot. Their real-world tests also show little practical difference between RAID0 and a single drive. Even if you had a workload that could notice a difference, M.2 bandwidth is still bottlenecked by DMI.
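For a rough sanity check, here's the arithmetic with approximate spec-sheet figures; the drive numbers are ballpark sequential ratings, not my own measurements.

# Approximate spec-sheet numbers (sanity check, not a benchmark).
# DMI 3.0 is essentially a PCIe 3.0 x4 link: about 3.9 GB/s usable.
DMI3_LIMIT_GBPS = 4 * 0.985           # ~3.94 GB/s

# Ballpark Samsung 950 Pro sequential figures (GB/s):
READ_PER_DRIVE = 2.5
WRITE_PER_DRIVE = 1.5

raid0_read_demand = 2 * READ_PER_DRIVE    # 5.0 GB/s, more than DMI can carry
raid0_write_demand = 2 * WRITE_PER_DRIVE  # 3.0 GB/s, fits under the DMI cap

print(f"RAID0 reads:  drives can supply {raid0_read_demand:.1f} GB/s, "
      f"DMI delivers at most ~{min(raid0_read_demand, DMI3_LIMIT_GBPS):.2f} GB/s")
print(f"RAID0 writes: drives can supply {raid0_write_demand:.1f} GB/s, "
      f"which fits under the ~{DMI3_LIMIT_GBPS:.2f} GB/s DMI cap")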
Now at this point, I realized that to break this barrier, I would need to plug into a PCIe slot that runs straight to the CPU. Even ignoring the fact that I would be stealing 8 lanes from the GPU (and that assumes I have some future GPU which can use more than 8), motherboards have problems booting from drives in those slots. (Yes, manufacturers can put an HBA chip on the card to make it bootable, but that adds to an already-astronomical cost.) So basically, we're going to have to wait for chipset tech to catch up with storage tech and then buy new motherboards. From what I've seen about the upcoming chipsets, I don't believe it'll be this year. Optane is a bit of a wildcard, though. I can see PCIe SSDs being the most immediate step for it. NVRAM DIMMs could be a longer-term proposition, requiring OS support and/or motherboard manufacturer development to work the kinks out over the next few years.
SLI and CrossFire
Like many, I bought a graphics card intending to later buy its twin for SLI or CrossFire and extend the useful gaming life of my rig. I have been meaning to do this for several generations now, but it has never happened. The reason is that graphics cards are advancing rapidly enough that by the time I need to update my rig with better graphics, there is always a sufficient single-card upgrade. Considering that most graphics cards lose essentially no performance going from x16 to x8 lanes, we still have a lot of room for single-card upgrades in the future. If a new graphics card can saturate my PCIe bus, then it's probably time for a computer upgrade anyway. (For example, my 6-year-old computer with PCIe 2.0 may be at saturation with current-gen GPUs... and it's time for an upgrade.)
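For rough context, here's the same kind of back-of-the-envelope math across PCIe generations; the per-lane figures are approximate usable-bandwidth numbers, not measured values.

# Approximate usable PCIe bandwidth per lane (GB/s, one direction):
# PCIe 2.0: 5 GT/s with 8b/10b encoding    -> ~0.5   GB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
PER_LANE_GBPS = {"2.0": 0.5, "3.0": 0.985}

def slot_gbps(gen, lanes):
    """Peak one-direction bandwidth of a slot of the given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

# A PCIe 3.0 x8 slot has roughly the bandwidth of a 2.0 x16 slot, which is
# why current GPUs barely notice the x16 -> x8 drop on a modern board,
# while an old PCIe 2.0 board has far less headroom left.
print(f"PCIe 2.0 x16: {slot_gbps('2.0', 16):.1f} GB/s")
print(f"PCIe 3.0 x8:  {slot_gbps('3.0', 8):.1f} GB/s")
print(f"PCIe 3.0 x16: {slot_gbps('3.0', 16):.1f} GB/s")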
Now even if I were paying less attention to performance per dollar, multi-GPU has some inherent downsides. Here is a great review of SLI performance. Games may not support it. Even if a game will use it, the experience may be sub-par, with reports of graphical issues or simply no real difference in FPS. It's one of those "when it works, it's great" situations. So the value proposition for multi-GPU becomes even worse.
That's not to say that I will never go multi-GPU... I will just no longer buy a less-powerful GPU with the plan of later buying another to make up the performance difference.
Note that I don't do video content creation/rendering, where an SLI/CrossFire setup could be consistently beneficial.
SATA Express
It's a dead spec, but you still see it on motherboards (whether you want it or not) because it was still a thing when the last motherboard design cycle started. There are no drives which use it. One of these connectors can be treated as just two SATA ports. The most ingenious use of the port I have seen so far was by ASRock, which used one as a front-panel USB 3.1 header on its Extreme+ Z170 motherboards.
NIC Teaming
Some people are interested in dual LANs for NIC Teaming. But unless we're talking about a server, NIC Teaming serves no real purpose. In fact, it can cause more problems than a single NIC, because it has to be carefully configured for your usage; you can't just check a box and have it magically work. Obviously, it doesn't increase your internet bandwidth. It also doesn't increase connection speeds to individual computers; those are still limited by the other computer's link speed. At best, it allows more computers to connect to you at their maximum speed at the same time. Those hoping for a gaming advantage will be sorely disappointed.
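To make the single-connection point concrete, here's a minimal toy model. The link speed, the hash policy, and the peer names are all made up for illustration; real teaming modes (such as 802.3ad) use their own hashing, but the effect on any one flow is the same.

# Toy model of link aggregation (illustration only, not a real driver).
# Common teaming modes hash each flow to ONE member NIC, so any single
# connection is still capped at that one link's speed.
LINK_GBPS = 1.0    # two teamed gigabit ports
NUM_NICS = 2

def nic_for_flow(peer):
    """Stand-in for the hash policy that pins a flow to a member NIC."""
    return hash(peer) % NUM_NICS

def per_peer_throughput(peers):
    """Each NIC splits its link speed among the flows hashed onto it."""
    load = [0] * NUM_NICS
    for peer in peers:
        load[nic_for_flow(peer)] += 1
    return {peer: LINK_GBPS / load[nic_for_flow(peer)] for peer in peers}

# One big transfer to a single peer: one flow, one NIC, ~1 Gb/s, not 2.
print(per_peer_throughput(["game-server"]))
# Several peers at once: the aggregate can exceed one link, but no single
# peer ever sees more than 1 Gb/s.
print(per_peer_throughput(["nas", "htpc", "laptop"]))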
Redundancy is also a false advantage for gamers. The chances of your NIC dying are low to start with, and when it happens you probably want to know about it so you can a) unplug it, since it can spam your network with garbage packets, and b) disable it in the BIOS. To be fault-tolerant against network problems, and not just NIC failures, you would also need a fully redundant infrastructure (each port plugged into a different switch, each switch connected to a different internet provider, etc.). I don't know anyone who goes to that expense at home.
Having 2 NICs is nice in general, but having your sole NIC die (while the rest of the motherboard manages to be fine) is hardly much of a problem. It's a pretty simple matter to grab a PCIe x1 NIC like this one and be back on your merry way.
That's all I can think of for now...