
Saturday, August 19, 2023

What I want out of computer hardware reviewers

Since this is apparently becoming an increasingly sporadic and PC building-focused blog, I feel compelled to comment on the recent controversy surrounding LTT and Linus Media Group's hardware reviews and other practices. GamersNexus' video lays it all out nicely.


First, some quick takes on the controversy:

  • Full disclosure: I'm a huge admirer of what LMG has built and, in general, the way they've grown and run their business. Building what Linus and his team have built is no small achievement, and the rising tide they've created in the tech YouTuber space has lifted a lot of boats.
  • While I may not agree with every position he takes or decision he makes, I believe Linus to be a highly ethical person who operates from a strong personal moral compass. Again, his compass and mine don't align 100% of the time, but I'm saying I think he is a scrupulous dude.
  • That being said, I do think LMG's 'quantity over quality' approach is leading to many of the errors and questionable behavior that Steve is talking about. As the LMG team themselves have said, that strategy probably made sense as LMG was growing, but it's not clear that it's necessary or optimal now that the company is worth over $100mm.
  • Being that big creates an obligation for LMG to recognize that its actions and mistakes can have a massive impact on smaller partners, businesses and other creators. This is the focus of GN's criticisms in the second part of the video and the part that resonates most deeply with me.
  • Parenthetically, this sort of takedown piece is very on-brand for GN. There's a lot GN does that I find valuable, but the 'self-appointed guardian of ethics in the PC hardware community' shtick wears thin sometimes.
What I find more interesting is the thread of the discussion (addressed in the GN video, LMG's reply and this one from Hardware Unboxed) about hardware review and testing practices. GN and Hardware Unboxed, among many others, trade on the accuracy and rigor of their testing. LTT Labs is an attempt to do the same thing and bring LMG into that space. These outlets develop elaborate testing practices. They conduct dozens of benchmarks and hundreds of test runs for significant hardware releases. They have strong opinions about testing methodology and boast of their experience.

The LMG controversy has me wondering how valuable that work is, though, even to PC building enthusiasts. It's got me thinking about what I actually care about when, say, a new generation of CPUs or GPUs comes out; or when an interesting new piece of hardware is released.

Specifically, I'm talking about what sort of testing is useful, particularly in the context of day-one reviews. This kind of coverage and testing isn't what I personally gravitate towards in this space: that would be the more entertaining, wacky, oddball stuff that I think nobody covers better than LMG at its best. I'm talking about the kind of coverage that major component categories get around new launches: CPUs, GPUs, cases, CPU coolers and, to a lesser extent, power supplies.

The day-one review context is also important, because it imposes certain constraints on the coverage, some of which limit the possibilities and, frankly, the value of rigorous testing:

  • The reviewer is typically working with only one review sample of the product;
  • That review sample is provided by the manufacturer relatively close to the product launch, limiting the time the reviewer has to test and evaluate the product;
  • The reviewer is under an NDA and embargo (usually lasting until the product launch date), limiting the reviewers' ability to share data and conclusions with each other during the narrow window the day-one reviewers have to test.

First of all, for all these component categories, I'd like to know if the product suffers from a fatal flaw. This might be either a fatal design flaw that is apparent from the spec sheet (e.g. the limited memory bandwidth of lower-end 40-series GPUs) or something that is only uncovered through observation (e.g. 12-volt high-power connectors causing fires on higher-end 40-series GPUs).

The thing is, though, neither of those types of flaws is identified through rigorous day-one testing. The design flaws are sometimes apparent just from the spec sheet. In other cases, the spec sheet might raise a suspicion and some testing -- perhaps a customized regimen designed to confirm or rule out that suspicion -- is needed. And often, some level of expertise is required to explain the flaw. These are all valuable services these tech reviewers provide, but they are, by and large, not about rigorous testing.

Flaws that can only be detected through observation are rarely uncovered through the kind of rigorous testing these outlets do (and I don't think these outlets would claim differently). The typical pattern is that the product hits the market; users buy and use it; and some of them start to notice the flaw (or its effects). Then one or more of these outlets gets wind of it and does a rigorous investigation. This is an extremely valuable service these outlets provide (and also where GN really shines) but, again, you don't typically find it in a day-one review and it's not uncovered through testing.

I'm also looking for what I'll call 'spec sheet contextualization and validation.' I want to know what the manufacturer claims about the product in terms of features and performance. To the extent there's interesting, new, innovative or just unfamiliar stuff on the spec sheet, I'd love for it to be explained and contextualized. And I obviously want to know if the claims are to be believed. (There are also useful derivatives of the contextualization and validation that these reviewers often present and explain, for instance generational improvement, price-to-performance data and comparisons to competing products.)

Some amount of testing is sometimes helpful for that contextualization and more or less required for validation. And particularly in the case of validation, some degree of well-designed-and-executed, standardized benchmarking is required. It makes sense to me, for example, for an individual reviewer to have a standardized test suite for new GPUs that uses a standardized hardware test bench and ~6-8 games that represent different performance scenarios (e.g. GPU-intensive, compute-intensive, etc.).
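To make that concrete, here's a minimal sketch (in Python, with made-up game names and frame times) of the kind of data reduction such a standardized suite implies: each run's frame-time log gets boiled down to the average FPS and '1% low' figures most reviews report.

```python
from statistics import mean

def summarize_run(frame_times_ms):
    """Reduce one benchmark run (per-frame render times in milliseconds)
    to the two numbers most reviews report: average FPS and '1% low' FPS."""
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds

    # '1% lows': the framerate implied by the slowest 1% of frames, a rough
    # proxy for stutter that a plain average hides.
    slowest = sorted(frame_times_ms, reverse=True)
    worst_slice = slowest[: max(1, len(slowest) // 100)]
    low_1pct_fps = 1000.0 / mean(worst_slice)
    return avg_fps, low_1pct_fps

# Hypothetical suite: a handful of games chosen to stress different scenarios.
suite = {
    "gpu_heavy_title": [16.2, 16.8, 17.1, 33.9, 16.5, 16.4],
    "cpu_heavy_title": [8.1, 8.3, 9.0, 8.2, 25.0, 8.4],
}

for game, frame_times in suite.items():
    avg, low = summarize_run(frame_times)
    print(f"{game}: {avg:.0f} FPS average, {low:.0f} FPS 1% low")
```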

Things start to get questionable for me when outlets go much beyond this level, though. The prime example of this is the component type where these outlets tend to emphasize the importance of their testing rigor the most: CPU coolers. To their credit, the reputable outlets recognize that getting accurate, apples-to-apples data about the relative performance of different coolers requires procedures and test setups that accommodate difficult-to-achieve controls for multiple variables: ambient temperature, case airflow, thermal compound application/quality, mount quality, noise levels and both the choice and thermal output of the system's heat-generating components, to name a few.

But the thing is, the multitude of factors to be controlled for under laboratory conditions undermines the applicability of those laboratory test results to actual use under non-laboratory conditions, possibly to the point of irrelevance. Hypothetically, let's assume that Cooler A (the more expensive cooler) keeps a given high-TDP CPU on average 3 degrees C cooler than Cooler B in a noise-normalized, properly controlled and perfectly conducted test. Here are a few of the factors that make it potentially difficult to translate that laboratory result to the real world:

  • Component selection: Though Cooler A outperforms Cooler B on the high-TDP CPUs reviewers typically use for controlled testing, the advantage might disappear with a lower-TDP CPU that both coolers can cool adequately. Alternatively, as we've seen with recent high-TDP CPUs, the limiting factor in the cooling chain tends not to be anything about the cooler (assuming it's rated for that CPU and TDP) but rather the heat transfer capacity of the CPU's IHS. I recently switched from an NH-U12S (with two 120mm fans) to an NH-D15 (with an extra fin stack and two 140mm fans) in my 5800X3D system and saw no improvement in thermals under load with the fans in both setups at 100%, I suspect because of this very issue.
  • Mount quality: CPU coolers vary greatly in ease of installation. So even if Cooler A outperforms Cooler B when mounted properly, if Cooler A's mounting mechanism is significantly more error-prone (especially in the hands of an inexperienced user), that advantage may be lost. In fact, if Cooler B's mounting mechanism is significantly easier to use or less error-prone, it might actually outperform Cooler A for the majority of users because more of them will achieve a good mount. The same applies to...
  • Thermal compound application: Not only might a given user apply too much or too little thermal compound (where a reviewer is more likely to get it right), but, more deeply, the quality of the application and spread pattern can vary substantially between installation attempts, even among experienced builders, including, I would add, professional reviewers. Anyone who has built multiple PCs has had the experience of having poor CPU thermals, changing nothing about their setup other than remounting the CPU cooler (seemingly doing nothing differently) and seeing a multi-degree improvement in thermals. Outlets like GN providing contact heatmaps as part of their rigorous testing is a nod to this issue, but they typically only show the heatmaps for two different mounting attempts (at least in the videos), and that seems like too small a sample size to be meaningful. This brings up the issue of...
  • Manufacturing variance from one unit of the same product to another: At most, these outlets are testing two different physical units of the same product, and frequently just one. I don't know this, but I suspect that because good contact between the CPU heat spreader and cooler coldplate is such a key factor in performance, the quality and smoothness of the coldplate matters a lot, and is exactly the kind of thing that could vary from one unit to another due to manufacturing variance. All other things being equal, a better brand/sku of cooler will have less unit-to-unit variance, but the only way to determine this would be to test with far more than one or two units, which none of these reviewers does (and, indeed, none can do with just one review sample provided by the manufacturer). Absent that data, it's very similar to the silicon lottery with chips: your real-world mileage may vary if you happen to win (or lose) the luck-of-manufacturing draw.
  • Ambient temperature and environmental heat dissipation: Proper laboratory conditions for cooler testing involve controlling the ambient environmental temperature. That means keeping it constant throughout the test, which means that the test environment must have enough capacity to eliminate the heat the test bench is putting out (along with any other heat introduced into the test environment from the outside during the test period, like the sun shining through the windows). If the user's real-world environment also has this capacity, the test results are more likely to be applicable. If, on the other hand, the real-world environment can't eliminate the heat being introduced (say it lacks air conditioning, is poorly ventilated or has lots of heat being introduced from other sources), it changes the whole picture. Fundamentally, ambient temperature is a factor a responsible reviewer must control for in a scientific test. However, it is almost never controlled for in real-world conditions. And, arguably, the impact of uncontrolled ambient temperature is one of the most significant factors affecting quality of life in the real world (the other being noise, on which see below). From a certain point of view, PC cooling is about finding a balance where you get heat away from your components fast enough that they don't thermal throttle (or exhibit other negative effects of heat) but slow enough that you don't overwhelm the surrounding environment's ability to dissipate that heat. If the PC outputs heat faster than the outside environment can dissipate it, the outside environment gets hotter, which sucks for your quality of life if you're also in that environment and trying to keep cool (see the back-of-envelope sketch after this list). This is why, considering only this issue, a custom water cooling loop with lots of fluid volume would yield a higher quality of life for most users than, e.g., a single-tower air cooler. The greater thermal mass of the fluid vs. the air cooler's heat pipes and fin stack allows more heat to get away from the components quickly while remaining internal to the system, to be transferred into the environment gradually over time. That is a better match for the primary ways we cool our environments (like air conditioning), which are better at dissipating relatively even, rather than spiky, heat loads.
  • Case and case airflow: I think this is by far the most significant factor in the real world. Any relative performance difference between Coolers A and B under laboratory conditions can easily be wiped out or reversed when either cooler is placed in a particular setup with particular airflow characteristics. Both coolers might perform great in a case with stellar airflow and perform poorly in one that is starved for airflow. But, more deeply, certain cooler designs perform better under certain case airflow conditions than others. An AIO whose radiator fans can't create enough static pressure to overcome the case's airflow restrictions won't realize its full performance potential. Reviewers (rightly) try to create consistent test conditions that are fair to all the products being tested, but your setup probably looks nothing like theirs.
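To illustrate the heat-dissipation point from the ambient temperature bullet above, here's a back-of-envelope sketch. Every number in it (PC wattage, room size, passive losses) is an assumption for illustration, and it deliberately ignores the thermal mass of walls and furniture, which slows the warm-up considerably in practice.

```python
# Back-of-envelope: how fast a room warms up if the PC dumps heat into it
# faster than the room can shed it. All numbers are illustrative assumptions,
# not measurements.

pc_heat_output_w = 450.0        # assumed steady gaming load (CPU + GPU + rest)
room_dissipation_w = 150.0      # assumed passive losses through walls/door
net_heat_w = pc_heat_output_w - room_dissipation_w

room_volume_m3 = 4.0 * 3.0 * 2.5          # assumed 4m x 3m x 2.5m room
air_density = 1.2                          # kg/m^3, near sea level
air_specific_heat = 1005.0                 # J/(kg*K)
air_heat_capacity = room_volume_m3 * air_density * air_specific_heat  # J/K

# Temperature rise of the room air per hour if nothing else absorbs the heat.
# Walls, furniture and ventilation absorb a lot in reality, so treat this as
# an upper bound on the trend, not a prediction.
deg_c_per_hour = net_heat_w * 3600.0 / air_heat_capacity
print(f"Room air warms roughly {deg_c_per_hour:.1f} C per hour")
```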
For these reasons, I regard relative performance data about different coolers under laboratory conditions as basically worthless, however rigorously it is collected. If I'm evaluating a cooler, what I actually care about are:

  • The compatibility and rated performance of the cooler for a given CPU and case/mobo. This is spec sheet stuff, though some level of testing validation is valuable.
  • How easy and foolproof the mounting mechanism is, which is best surfaced through an on-camera build demonstration, not rigorous testing. Here, I find build videos far more valuable than product reviews, because if you see an experienced YouTuber struggling to mount a dang cooler, it should at least give you pause. I'd also note that build videos are inherently more entertaining than product reviews, because it's compelling to watch people struggle and overcome adversity, and even more fun when they do so in a humorous and good-natured way, which is a big part of the secret sauce of folks like Linus and PC Centric.
  • The noise level of any included fans when run at, say, 30%, 50%, 80% and 100% speed. This might be idiosyncratic to me (though I suspect not), but I'm particularly sensitive to fan noise. Given that the cooler can fit in my case and can handle the output of my CPU, what matters to me is how noisy my whole system is going to be, both at idle and under load. With any cooler, I assume I'm going to have to tune the curves of both its fan(s) and my case fans to find the best balance of noise and cooling across different workloads (e.g. idle, gaming load, full load). I can't possibly know how this will end up in my build in advance, and rigorous testing under laboratory conditions doesn't help me. So the best I can hope for from a reviewer is to give me a sense of how much noise the cooler's fans will contribute to overall system noise at various RPM levels. (This is the primary reason I favor Noctua fans and coolers and am willing to pay a premium for them: they are super quiet relative to virtually all competitors at either a given RPM or thermal dissipation level. And it's the primary advantage of switching to the D15 in my current setup, since the larger fans and dual-tower design mean it can dissipate more heat with less noise than the U12S.)
Stated another way: If a cooler is rated for my CPU, fits in my case and can be mounted properly with a minimum of fuss, I assume it can adequately cool my PC at some noise level. The only question is how noisy, and that's a function of how fast I need to run my system's fans (including the cooler fan(s), but also every other fan) to achieve adequate cooling, and of the noise level of my fans (again, including, but not limited to, the cooler fan) when run at those speeds. No amount of laboratory testing can answer that question, however rigorous (unless it were conducted on a test bench identical to my system, which is unlikely).
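Since total system noise is what matters, it's worth noting how independent noise sources combine: roughly logarithmically, not linearly. Here's a small sketch using the standard incoherent-source formula; the dB(A) figures are invented for illustration, not taken from any datasheet.

```python
import math

def combined_noise_db(levels_db):
    """Combine independent (incoherent) noise sources:
    total = 10 * log10(sum of 10^(level/10))."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Illustrative numbers only: a cooler fan plus three case fans at some RPM.
cooler_fan = 22.0
case_fans = [19.0, 19.0, 21.0]

without_cooler = combined_noise_db(case_fans)
with_cooler = combined_noise_db(case_fans + [cooler_fan])
print(f"Case fans alone: {without_cooler:.1f} dB(A)")
print(f"With cooler fan: {with_cooler:.1f} dB(A)")
# The cooler adds only a couple of dB(A) here because the case fans already
# set the noise floor -- which is why 'cooler X is 2 dB quieter than cooler Y'
# in isolation may or may not be audible in a full system.
```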

I've been throwing the word 'rigorous' around, and it's worth decomposing what it means and why 'rigorous' testing is (or isn't) valuable to the consumer. One aspect of it is just that the test is conducted properly and free of human error (and that the rigor of the process makes it easy to identify and correct human error when it is committed). Another aspect is that the testing methodology itself is well-designed insofar as it provides accurate and useful information to the consumer. My main concern with 'rigorous' testing in many of these product categories (especially CPU coolers) is that the rigorous laboratory testing methodologies don't yield especially useful information that can be applied outside the laboratory to real-world consumer conditions.

Another aspect of rigor is repetition/replicability. Again, there are different dimensions to this. Certainly, a rigorous reviewer ought to conduct multiple trials of the same test to see if their results are consistent. But the thing is, this is more of a check on other aspects of the rigor of the individual reviewer's own methodology and work than anything else. If a reviewer does, say, 50 trials (which is, realistically, way more than any of these outlets are doing for PC components) and finds that 5 trials are significant outliers, it suggests one of three things:

  1. The tester committed human error on the outlier test runs, in which case they should try to track it down and correct it, or else throw out the results of those five trials.
  2. The testing methodology fails to account for some confounding factor that was present in those five cases and not the others, in which case the reviewer ought to track that down and control for it if possible.
  3. The individual unit being tested (remember, these reviewers are typically testing only one unit of the product being evaluated) exhibits weird behavior. Technically, this is an instance of (2) because something must be causing the particular unit to behave oddly, it's just that the reviewer hasn't been able to control for that something. And given time constraints on day-one reviews especially, this is when an individual reviewer is most likely to say 'I don't know... maybe I have a defective unit here, but I can't be sure and don't have the time or resources to investigate further.'
So, again, an individual reviewer doing multiple trials is valuable, but primarily because it helps that reviewer identify problems with their own execution, methodology or the individual unit being tested. A consumer should have more confidence in the data from a reviewer who performs 'rigorous' testing in this sense, but only to the extent their methodology is basically sound, and with the understanding that any one reviewer's results have limited value in extrapolating to how a different unit of that product will perform for you, even under otherwise identical conditions.
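As a sketch of what 'multiple trials as an error check' looks like in practice, here's one simple way a reviewer might flag suspect runs. The z-score cutoff and the numbers are arbitrary assumptions; the point is only that repeated trials let you spot results that deserve a second look before they get averaged into a chart.

```python
from statistics import mean, stdev

def flag_outlier_runs(results, z_threshold=3.0):
    """Flag trial results that sit more than z_threshold standard deviations
    from the mean -- candidates for human error, an uncontrolled variable,
    or a quirky unit. The threshold is an arbitrary choice, not a standard."""
    mu = mean(results)
    sigma = stdev(results)
    if sigma == 0:
        return []
    return [
        (i, value)
        for i, value in enumerate(results)
        if abs(value - mu) / sigma > z_threshold
    ]

# Hypothetical: 10 thermal test runs (CPU temperature in C under a fixed load).
runs = [72.1, 71.8, 72.4, 72.0, 79.6, 71.9, 72.2, 72.3, 71.7, 72.0]
for index, temp in flag_outlier_runs(runs, z_threshold=2.0):
    print(f"Run {index}: {temp} C looks like an outlier -- investigate before averaging")
```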

The kind of rigor that does help a consumer to have confidence that the results will apply to them is the kind that comes from the repetition and replicability achieved by multiple reviewers reviewing different units of the same product using the same (basically) sound methodology. This is the kind of rigor that modern laboratory science provides (e.g. a scientist achieves a certain result, publishes his methodology and findings and then other scientists follow the same methodology with different people and materials and see if the result is replicated). It's also why it's important to consider multiple reviews from multiple reviewers when evaluating a product as a consumer. Consistent results across multiple reviewers make it more likely that you will achieve a similar result if you buy the product. Inconsistent results suggest manufacturing variances, problems with quality control, design flaws that lead to inconsistent behavior across units, etc.

So what would really be of greatest value to consumers in this space isn't more elaborate methodology and more trials (i.e. more rigor) by any one reviewer. It's many reviewers following (basically) the same methodology, where that methodology does the minimum necessary to produce meaningful results, can be easily replicated and incorporates the minimum number of trials an individual reviewer needs to reasonably identify and correct their own errors. If folks are interested in raising the level of the PC hardware review sector (and its discourse), figuring out how to achieve that is what they should be striving for.

Tuesday, September 22, 2020

An overview of PC build performance limitations and their resolutions

This post is the follow-up to the more philosophical discussion of why 'bottleneck' isn't a terribly helpful concept for understanding and fixing PC build issues. It intends to offer a sounder perspective that isn't fixated on the 'bottleneck' concept and is geared towards new builders.

This article isn't intended as a comprehensive guide to tuning or optimizing every aspect of a PC build. It's more of an overview to help new builders get oriented towards building for and optimizing the total play experience. I'll probably follow up with a final post in the series with more practical, step-by-step build development advice based on the framework I'm laying out here.

The total play experience

Focusing just on statistical performance optimization in building or optimizing a PC is, I think, too narrow a view. Instead, the goal should be to optimize the total play experience: this includes not only what is traditionally thought of as 'performance optimization' (i.e. trying to squeeze every single FPS out of a game), but also elements that are more about 'quality of life' and contribute less directly, if at all, to, e.g., a system's scores on a synthetic benchmark.

Broadly, the 'performance optimization' elements have to do with directly optimizing the system's rendering pipeline, so we'll start that topic with a brief discussion of what this is and the three main components that contribute to it -- CPU, GPU (or graphics card) and the monitor (or display) refresh rate.

But first, we'll start with the more indirect, 'quality of life' elements, because they're easier to understand and summarize.

Quality of life elements

There are many aspects of a system that improve the quality of the play experience. For instance, having a keyboard or mouse that is responsive and 'just feels right' may significantly improve a gamer's experience. The right desk or chair might greatly increase comfort. A reliable, low-latency Internet connection greatly improves the experience of online games. Not to take away from any of these factors, but at the highest level, I think there are three build elements that, when properly considered and set up, greatly contribute to quality of life in a modern gaming PC build: mass storage, RAM and monitor factors other than refresh rate (we'll discuss refresh rate in the performance optimization section).

Mass Storage

Mass storage refers to where your system stores large sets of files, such as your operating system, game downloads, documents and save game files. In a typical Windows system, mass storage devices are assigned drive letters like C, D, E and so on.

Physically, mass storage devices can be either mechanical Hard Disk Drives (HDDs), which store data using magnetic platters, or Solid State Drives (SSDs), which store data using electrical signals in memory chips. HDDs are the older technology, are much cheaper per gigabyte of storage and tend to come in larger capacities (e.g. 2TB, 4TB and up). SSDs are newer, are more expensive per gigabyte and tend to come in smaller capacities (e.g. up to 2TB, though larger devices are starting to become available). Both HDDs and SSDs come in a variety of form factors. All HDDs and some SSDs need to be mounted to a mounting point in your PC case and connected to your motherboard with one cable for data and to your system's power supply with a second cable for power. Some SSDs are cards that are installed directly into a slot on your motherboard and don't require any separate cables. Others are installed and connected just like hard drives.

SSDs are able to store and retrieve data significantly (dozens or hundreds of times) faster than HDDs. This means that if you install your operating system on an SSD, your PC will boot noticeably faster than it will from an HDD. And if you store your games and save game data on an SSD, level and save file load times will be faster.

In my 30 years of using computers, switching to a system with an SSD had the single biggest impact on how 'snappy' my system felt of any hardware innovation or evolution.

RAM

RAM (Random Access Memory) is very fast storage that your system uses for programs that are actively running. Most modern (circa 2020) games will run fine on a system with 8GB of available RAM, with 16GB providing reasonable room for future proofing.

Keep in mind that these are recommendations for available RAM. The game you're playing isn't going to be the only thing using RAM on your system. The operating system itself has some overhead. And if, like most people, you have several browser tabs, chat programs and utilities running in the background while you game, each of those consumes RAM, leaving less than 100% of your total RAM available to the game.

Running out of RAM while gaming is not fun. It can lead to the game becoming unstable or crashing in the worst case scenario. Short of this, when a typical Windows system runs low on available memory, it attempts to free it up by writing some data from RAM to one of the system's (much slower) mass storage devices in a process known as paging. This process consumes processor and disk resources as it runs, which can end up stealing those resources from the game itself and leading to lag or hitching.
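If you want to see how close your own system runs to that edge, a quick check of available (not just total) memory is easy. This sketch assumes the third-party psutil package is installed; the 8GB figure is just an example requirement.

```python
# Requires the third-party psutil package (pip install psutil).
import psutil

mem = psutil.virtual_memory()
total_gb = mem.total / 2**30
available_gb = mem.available / 2**30

print(f"Total RAM:     {total_gb:.1f} GiB")
print(f"Available RAM: {available_gb:.1f} GiB")

# Rough sanity check against a game's stated requirement (assumed 8 GiB here).
game_needs_gb = 8
if available_gb < game_needs_gb:
    print("Close other programs or expect paging (and possible hitching).")
```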

There are other considerations that go into fully optimizing a system's RAM beyond capacity. These include the speed at which the memory operates, the number of memory modules, the fine tuning of memory timings and more. Different systems are more or less sensitive to different aspects of memory performance/optimization: AMD Ryzen processors, for example, have been shown to benefit significantly from fast RAM.

Monitor Factors

First of all, I strongly encourage you to think of your monitor as an integral component of your gaming system. It's the thing you're going to spend all your time looking at. If it doesn't deliver a good experience to you, it's going to be a bummer.

Refresh rate (discussed later) is only one factor affecting the quality of the play experience provided by your monitor. Other factors (by no means an exhaustive list) include:
  • How accurately the monitor reproduces color
  • How black its blackest blacks are
  • How big it is physically
  • Its pixel resolution
  • What its pixel response time is (i.e. how fast pixels can change color)

Rendering Pipeline Performance

The elements of a gaming system most traditionally identified with its performance and specs make up its rendering pipeline. Before discussing the components -- CPU, GPU and monitor refresh rate -- let's talk about what the rendering pipeline is.

Frame rendering

As you probably know, moving video and game images are made up of frames: still images that get shown to us very rapidly, one after the other. If the images are presented rapidly enough, this creates the illusion of motion. In games, the rendering pipeline is the process by which these frames get generated in realtime and are presented to the user.

Performance of the rendering pipeline on a given system is traditionally expressed in Frames Per Second (FPS), or framerate. As you know from your experience as a player, a game's framerate can vary from moment to moment based on how demanding the rendering work is at that moment. If a game isn't able to consistently generate enough FPS for a given user to perceive continuous motion, the play experience is compromised by stutter, lag, hitching, etc. The more FPS the system can present to the user, the smoother the play experience will seem (up to certain limits).

In order to ultimately 'deliver' a frame to the player, each component of the rendering pipeline must perform a specific function before 'shipping' the frame on to the next component. For example, the CPU must finish its work on a given frame before the GPU (the next component in the pipeline) can begin its work on it. The GPU needs the results of the CPU's work in order to do its job (this is a slight oversimplification, but it's true enough for our purposes).

For a user to play a game at a steady 60 FPS, the system must present him with a new frame every 16.67 milliseconds. If the CPU takes so long to do its work before handing the frame off to the GPU that the entire rendering process can't complete within those 16.67 milliseconds, the frame rate will go down. This is an example of the proper understanding of the concept 'bottleneck.' If the frames take long enough to generate, the user will experience stutter, hitching and the like.
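The 16.67 millisecond figure is just 1000 divided by the target framerate, and the same arithmetic gives the per-frame budget for any other target:

```python
# Per-frame time budget: at N frames per second, the whole pipeline
# (CPU work + GPU work) has 1000 / N milliseconds per frame.
for target_fps in (30, 60, 144, 240):
    budget_ms = 1000.0 / target_fps
    print(f"{target_fps:>3} FPS -> {budget_ms:.2f} ms per frame")

# If the CPU alone takes 20 ms on a frame, a steady 60 FPS (16.67 ms budget)
# is impossible no matter how fast the GPU is.
```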

CPU

The Central Processing Unit or CPU is the general-purpose computing 'engine' of your system. In the frame rendering cycle, the CPU is responsible for two main things. The first is computing the state of the game in every frame. You can think of the game's state as the complete set of information about every element of the game and every object in the game world.

Computing the game's state requires the CPU to do many things, for instance processing control inputs, updating the game's physics simulation and processing the behavioral instructions of enemy AI. The more complex the game, the more the CPU has to do each frame. A strategy game with hundreds of AI units on the screen at once will require more CPU resources to update its state than a simple 2D platformer.

Once the game's state has been calculated, the CPU uses that data to create the instructions that the GPU will use in the next step of the frame rendering cycle. These instructions are known as draw calls. The more visually complex the frame is, the more draw calls will be required for the GPU to render it and, therefore, the more draw calls the CPU will first need to make. Factors like the number of objects and light sources in the scene and whether any post-processing effects (e.g. fog or blur) are being applied affect the number of draw calls, while settings like resolution and texture quality mostly affect how much work each draw call creates for the GPU.

As far as performance goes, one CPU can be superior to another in its ability to run more instructions per second. When we speak of one CPU being 'faster' than another, this is essentially what we're referring to. A CPU's clock rate, usually expressed in megahertz or gigahertz (e.g. 4.5GHz), is a measurement of how many instruction cycles per second the CPU can perform, with higher numbers indicating higher performance. All things being equal, a faster CPU will be able to drive more FPS than a slower one.

Modern CPUs typically contain more than one CPU core. Each core is capable of running instructions independently of the other cores and at the same time, allowing the CPU as a whole to do more things at once. Historically -- and still in 2020 -- most games are not coded in a way that takes advantage of lots of processing cores, so games generally benefit more from faster CPU clocks than they do from more cores. This is starting to change, though. With the availability of inexpensive, higher core count consumer CPUs like AMD Ryzen and low-level APIs like DirectX 12 and Vulkan that let developers more easily create games that take advantage of and automatically scale to higher core counts, expect more and more games to optimize for multicore performance over pure clock rates in the years to come.

GPU

The GPU is an integrated set of components, including dedicated processors and (usually) memory, that can create visual images very quickly. Because their components are highly optimized for this task, GPUs can create these images much more quickly than general-purpose computing hardware (like a regular CPU) can, which is a necessity for high-FPS gaming: modern CPUs and system RAM can't both process the game state and render the visuals quickly enough, so the workload gets split between the CPU and GPU.

GPUs use raw visual assets (such as the textures used to create objects' appearances) and follow the instructions contained in the draw calls to produce the frame images that the player eventually sees. A draw call might (effectively) contain an instruction that says 'apply this texture to the surface of a triangle with its vertices at the following screen coordinates...' The GPU is the system component that actually executes these instructions and ultimately determines what color each pixel in the frame should be based on that frame's total set of draw call instructions.

As GPUs evolve over time, manufacturers increase their capabilities by making their processing units faster, adding more processing units, adding more and faster memory and more. More powerful GPUs are able to process more FPS than less powerful ones. In some cases, new GPUs support entirely new capabilities. Nvidia's RTX GPUs, for example, have specialized processing cores that allow them to accurately simulate how light travels and interacts with different surfaces, creating much more realistic lighting effects in a process known as realtime ray tracing (in games where the developer supports it).

If a CPU is consistently able to hand the GPU an acceptable number of FPS but the GPU is unable to do its work quickly enough, then the GPU may be bottlenecking the system (in the legitimate sense of that term).

Monitor Refresh Rate

Once the GPU has finished rendering a frame, it sends that frame to the monitor (or other display device like a TV), which then displays it. Monitors are capable of updating themselves a certain number of times per second. This is known as the monitor's maximum refresh rate, or simply refresh rate, and is measured in cycles per second, or hertz (hz). A 60hz monitor can fully replace the image on the screen 60 times per second.

Like other parts of the rendering pipeline, fully refreshing the display takes some (tiny) finite amount of time: it doesn't happen instantaneously. Modern monitors update themselves by replacing the contents of the screen one line (or row) at a time from top to bottom. On a 1920 by 1080 monitor, there are 1080 rows of pixels on the screen. During each update cycle, the first line is updated, then the second, then the third, and so on. After the 1080th line is updated, the process starts over again for the next frame.
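As a rough illustration of how little time each line gets, here's the arithmetic for a 60hz, 1080-line panel (ignoring the blanking interval real displays also spend time in):

```python
# How long each of the 1080 lines gets during one refresh of a 60hz panel.
refresh_hz = 60
lines = 1080

refresh_period_ms = 1000.0 / refresh_hz          # ~16.67 ms per full refresh
time_per_line_us = refresh_period_ms * 1000.0 / lines

print(f"Full refresh: {refresh_period_ms:.2f} ms")
print(f"Per line:     {time_per_line_us:.1f} microseconds")  # roughly 15.4 us
```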

As mentioned earlier, a monitor with a higher refresh rate (paired with other system components capable of driving it) will result in a gaming experience that feels smoother and more immediate, within the limits of what the player can appreciate. Some people can actually perceive the flicker between frames of a 60hz monitor. Others can't. Most gamers will perceive an improvement in smoothness when moving from a 60hz to a 120 or 144hz display. Elite gamers like esports athletes can benefit from 240hz and even 360hz displays, at least under certain circumstances. At an extreme, it's almost certain that no human being would benefit from a hypothetical 1000hz monitor as opposed to a 500hz one.

If your GPU is not capable of driving more FPS than your monitor's refresh rate, you are leaving performance -- in the form of potential smoothness and immediacy -- on the table, assuming you can personally perceive the difference between the current and potentially higher refresh rates. For me -- a 40+ year-old whose vision and reflexes have started to deteriorate with age and who doesn't play a lot of twitch-heavy titles -- the difference between 60 and 144hz is noticeable in the games I like to play, but I can't perceive any difference between 144 and 240hz or higher. A younger, elite player of twitch-heavy games might have a different experience, but a system capable of doing 240hz (as opposed to 144) would be wasted on me.

The opposite situation can also be true: your GPU may be capable of providing more FPS than your monitor's refresh rate is capable of displaying. In practice, this usually results in one of two conditions:

  1. If you do nothing else, the system will continue to deliver frames to the monitor as it generates them. This means that the monitor may receive more than one frame per refresh cycle. At whatever point during the refresh cycle the new frame is received, the monitor will continue refreshing from the next display line using the newer frame's data. This results in what players experience as tearing: a visible line on the screen that marks the border between the part of the image drawn from the older frame data and the part drawn from the newer frame data (see the sketch after this list for a concrete illustration).
  2. The GPU/game setting called Vsync forces the GPU to synchronize its frame output with the monitor's refresh rate. Since a standard monitor is a 'dumb' device in this respect, this is accomplished by the GPU artificially limiting the framerate it outputs to coincide with the monitor's refresh rate. This means that even though your GPU and CPU might be able to output 300 FPS in a given game, Vsync will limit that output to 144 FPS if 144hz is all your monitor supports. This eliminates tearing but leaves frames on the table.
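Here's the sketch referenced in item 1: a toy calculation of where the tear line lands if a new frame arrives partway through a refresh. The refresh rate and arrival point are arbitrary example values.

```python
# Where does the tear appear? If a new frame arrives when a fraction f of the
# current refresh has already been scanned out, lines above f * 1080 show the
# old frame and lines below it show the new one. Numbers are illustrative.
refresh_hz = 144
lines = 1080
refresh_period_ms = 1000.0 / refresh_hz

# Suppose the GPU finishes a new frame 40% of the way into the current refresh.
arrival_fraction = 0.40
tear_line = int(arrival_fraction * lines)

print(f"Refresh period: {refresh_period_ms:.2f} ms")
print(f"Tear line at roughly row {tear_line} of {lines}")
# With Vsync on, the GPU would instead hold that frame until the next refresh
# boundary: no tear, but the displayed framerate is capped at the refresh rate.
```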
There are also what are known as adaptive refresh rate technologies like GSync and Freesync which make the monitors that support them smarter. These monitors are able to communicate with the GPU and can synchronize their refresh cycles precisely at the arbitrary FPS value the GPU is outputting at a given moment (up to the monitor's maximum refresh rate).

If you are in this scenario (and assuming you don't have an adaptive display), which side of the lower FPS / no tearing vs. higher FPS / with tearing line you fall on is a matter of personal preference.

As noted elsewhere, there are verifiable benefits -- especially for competitive gamers -- to running at the highest FPS possible, even if your monitor's refresh rate is lower. A full discussion of this issue is beyond the scope of this post, but I do want to acknowledge it. But I consider it a specialized issue relevant to certain gaming scenarios, not one of general build advice.

Putting It All Together

The tables below summarize the six components of a build that most commonly impact quality of life and rendering performance, along with how the user will perceive a component with sub-optimal performance.


‘Quality of Life’ Components

Mass Storage
  • Description: Provides long-term storage for lots of files like your operating system, game files and savegame data.
  • What you’ll experience if this component is limiting system performance: A slow mass storage device (such as a traditional HDD) will make loading games and saving/loading savegames seem slow.

RAM
  • Description: Provides super-fast working memory for running programs (including games).
  • What you’ll experience if this component is limiting system performance: If the system doesn’t have enough available memory for all running programs, games may hitch, lag or slow down as the operating system pages to use slower mass storage to hold data that would otherwise be in RAM.

Monitor factors (not refresh rate)
  • Description: Affect the perceived quality of the visuals displayed.
  • What you’ll experience if these factors are limiting system performance: inaccurate color reproduction; harsh or jagged edges around objects; motion trails; visual artifacts; overly bright or dim images (even after monitor adjustment).

Rendering Pipeline Components

CPU (or processor)
  • Function: Provides general-purpose computational power to the system. In games, does computational work to update the game’s state and to create the draw call instructions the GPU will use.
  • What you’ll experience if its performance is being limited by the preceding component: N/A
  • What you’ll experience if it gates the performance of the preceding component: N/A

GPU (or graphics card)
  • Function: Executes draw call instructions to generate the frame images that will be presented to the user.
  • What you’ll experience if its performance is being limited by the preceding component: lower FPS; lag and stutter.
  • What you’ll experience if it gates the performance of the preceding component: lower FPS; lag and stutter; missing out on certain visual effects / visual quality as you lower graphics settings to compensate.

Monitor refresh rate
  • Function: Refers to how many times per second the monitor can update every pixel on the screen.
  • What you’ll experience if its performance is being limited by the preceding component: visuals that seem less smooth than they might otherwise be (difference may not be perceptible to all users).
  • What you’ll experience if it gates the performance of the preceding component: with Vsync disabled, frame tearing; with Vsync enabled, lower FPS than you would otherwise achieve.


Wednesday, September 16, 2020

Towards a better understanding of 'bottlenecks' in PC building

In PC Building enthusiast communities, the subject of PC builds being 'bottlenecked' comes up quite frequently. It comes up a lot around the time new hardware -- like the recent RTX 3000 series -- is introduced. Or when (often novice) users are looking for build advice (e.g. 'Will a Ryzen 5 3600 bottleneck my build?').

I recently made a long post about how the concept of 'bottleneck' is frequently misused, poorly understood by and generally unhelpful for PC builders on r/buildapc. The original post generated over 1,000 karma and lots of polarizing discussion and criticism. It's still available here and is worth a read if you're interested in the topic, though for reasons that are not clear to me it was removed by the moderators.

Since the original post, I've considered some of the criticisms and have been chewing on the issues more. This post represents me working through some of this in written form.

How is the term 'bottleneck' generally understood?

When people use this term, what do they mean by it, both in general and specifically in the context of PC building? Among inexperienced PC builders, there is a lot of confusion and fuzziness, but that is to be expected whenever novices engage with a concept in a new domain. I'll come back to this issue later, but to start I want to focus on the more sophisticated understanding of the term that more experienced folks, often with engineering backgrounds, have.

The sense in which most experienced folks understand the term, which is nicely encapsulated by this Quora answer, is what I'm going to call the Informal Engineering Version of the concept. I define it as follows:
bottleneck (Informal Engineering Version): noun. A component of a system, the performance limitations of which limit the overall performance of the system.

This definition struck me as fine in an informal sense, but left me feeling vaguely uneasy. It took me a lot of thinking to get at precisely why, but I think it amounts to two defects of the definition: a major and a minor one.

What's the alternative?

To get at the major one, it helps to ask what the alternative would be to a system where the performance is limited by the performance of a single component. I think there are two.

The first would be a system where performance was unlimited. But this is, of course, impossible. Every system, just like every thing, has some specific nature, including specific limitations. In the realm of PC building, there is obviously no such thing as a PC of unlimited performance.

The second alternative would be that the system performance is limited by the performance of more than one component. There's a sense in which this could be true: in PC building it would be the theoretical 'perfectly balanced system.' And targeting a balanced build is good advice insofar as it goes. For instance, for a given build budget, and all other things being equal, it makes sense to spend it in a 'balanced' way, rather than under-investing in certain components and over-investing in others.

The 'every system has a bottleneck' school

But in practice, it's not possible to achieve the Platonic ideal of a balanced build. In PC builds, and indeed in most systems, there will almost always be some single factor that imposes an upper limit on system performance. The proponents of the Informal Engineering concept of 'bottleneck' in the PC building community often espouse this view, with their mantra being 'Your system will always have a bottleneck.' For instance, they'll say, if your weak GPU is currently gating your build, as soon as you upgrade it, your previously second-weakest component (let's say your monitor with its low refresh rate) will become the new bottleneck.

It's worth examining what this actually amounts to. Because all systems, practically speaking, have some single weakest component, all systems are perpetually bottlenecked. But all this means is that all systems have some limitation on their performance, which is to say that all systems have some definite identity. It reduces the concept of 'bottleneck' to meaning nothing more than a statement of the law of identity; that a system can do what it does and can't do what it can't. Am I 'bottlenecked' in my inability to fly because I don't have wings? Or in my inability to still have my cake after I eat it because the universe doesn't allow for that?

This is why I say this conception of bottleneck is useless. Every system is equally 'bottlenecked.' Your brand new Core i9 10000k series, 256 GB RAM, RTX 3090 and 360hz display system is bottlenecked because it can only output as many FPS as its weakest component (whichever that is) allows it to.

The same system with, e.g. a GTX 700 series card instead would be less performant -- the neck of the bottle at the GPU would be narrower, so to speak. Proponents of the Informal Engineering definition would say that the latter system is more bottlenecked than the former. But I think this view is off. It's like saying that a corpse is 'more dead' than a living person. No it isn't. The living person isn't dead at all.

This wrong conception is common among PC building novices and is reinforced by veteran builders of the 'every system has a bottleneck' variety. Many of the new builder questions on, e.g., PC building Reddit communities ask things like 'Will this graphics card be a bottleneck in my build?' The invariable response from this crowd is, of course, that every system has a bottleneck. Maybe it's the graphics card right now. But if the graphics card were upgraded, the system would be bottlenecked by some other component. What is the novice builder supposed to do with this perspective? Throw up his hands and resign himself to a system that will forever be hopelessly bottlenecked in one way or another, his performance aspirations always frustrated?

No, not every system is bottlenecked

The way out of this dilemma is to identify that something is a bottleneck only if it, in fact, imposes a significant limitation on the overall performance of the system. This leads to what I'll call the Interim Engineering Version of the concept, which is close to the first definition on the 'bottleneck' Wikipedia page:

bottleneck (Interim Engineering Version): noun. a component of a system, the performance limitations of which impose a significant limit on the overall performance of the system.

On this improved conception, whether something is a bottleneck or not hinges on whether the performance limitation is significant. And what counts as significant is highly dependent on the context of use. If a given component's limitations don't impose a significant limitation on overall system performance in the context in question, then that component is not a bottleneck, even if it happens to be the single component that is limiting overall system performance. Moreover, if the system's overall performance is adequate to its purpose, then the system as a whole is not bottlenecked.

In PC gaming terms, within the context of playing CS:GO at 1080p (i.e. 1920 by 1080, 60hz), the following systems are equally not bottlenecked:

  1. Core i9 10900k, RTX 3090, 360hz monitor
  2. Core i9 10900k, GTX 1060, 360hz monitor
  3. Core i9 10900k, RTX 3090, 60hz monitor
  4. Core i5 6500, GTX 1070, 60hz monitor
All of these systems will deliver an acceptable play experience of at least 60fps at the target resolution and high graphics settings. Systems 1 and 4 represent 'balanced' builds. (1) is vastly more performant than (4), but both are not bottlenecked with respect to this task, and neither contains a single component that is markedly weaker than the others. Systems 2 and 3 each have an obvious component that is limiting the overall system performance (the GPU and monitor, respectively), but both will still be adequate to the task. Their limitations are not significant in this context.

In PC building, there is no value, in and of itself, in achieving the Platonic ideal of a system where every component fully saturates the next component downstream at every step of the chain. It doesn't necessarily follow that the 'unbottlenecked' system will outperform a bottlenecked one. System 2 from the list above will offer a better experience than the perfectly balanced, Platonic ideal of a system from say, 10 years ago, in spite of the GPU being a 'bottleneck' because all of the components are better than the best components you could purchase 10 years ago, including the 'bottlenecking' GPU.

Component x does not bottleneck component y

Another subtlety here is that while an individual component may 'be a bottleneck' in a given system, 'being bottlenecked' (or not) is a property of the entire system, not of a component. In other words, it is fine, in principle, to say 'My CPU is a bottleneck' or 'My CPU is bottlenecking my system.' However, the common PC building forum question (and responses to it) of, e.g., 'Will this CPU bottleneck this GPU?' is invalid.

As noted above, it will always be the case, in practice, that some component of a system is not fully saturated by another component. The fact alone that your CPU is not capable of outputting as many FPS as your GPU is capable of processing doesn't tell us anything of practical utility in evaluating your build or whether or not it's 'bottlenecked.' That cannot be assessed without reference to the overall performance of the system against its intended purpose. Even if the CPU is capable of saturating only 25% of the GPU's maximum capacity, if your goal is to play Control at 8K / 60 FPS, then as long as the CPU can consistently deliver 60 FPS to the GPU, the system is not bottlenecked.
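That criterion is simple enough to write down. The sketch below (with hypothetical FPS numbers) just encodes it: find the weakest stage, and call it a bottleneck only if it keeps the system from hitting the framerate you actually care about.

```python
def bottleneck_for_purpose(stage_fps_caps, target_fps):
    """Return the name of the stage that keeps the system from hitting its
    target, or None if the system meets the target (i.e. it is not
    bottlenecked in any practically meaningful sense)."""
    weakest_stage, weakest_fps = min(stage_fps_caps.items(), key=lambda kv: kv[1])
    return weakest_stage if weakest_fps < target_fps else None

# Hypothetical build evaluated against a 60 FPS goal: the CPU saturates only
# a fraction of what the GPU could process, but still clears the target.
stages = {"cpu_fps": 90, "gpu_fps": 350}
print(bottleneck_for_purpose(stages, target_fps=60))   # None: adequate to the purpose
print(bottleneck_for_purpose(stages, target_fps=144))  # 'cpu_fps': now it matters
```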

More deeply, even when one component really is bottlenecking the overall system, that's the perspective to take on it. By analogy, if your shoes are too small, it's correct to say that they, e.g., limit your ability to walk. It would be weird, on the other hand, to say they limit your feet's ability to walk. Walking is an activity of a person, not of feet, even though it involves feet. Likewise, performance (or lack thereof) against a purpose is an attribute of a system, not of any one of the system's components.

This usage is, by the way, perfectly consistent with how the term is used in practice in engineering contexts. No one regards a system as bottlenecked if its overall performance is adequate to the needs (or anticipated needs) it is meant to serve. When a system is inadequate, it is often good methodology to search for bottlenecks and to fix any that are identified. And it would obviously be poor methodology to, e.g., increase the diameter of the base of the bottle while ignoring the diameter of the neck. But once performance is rendered adequate (or adequate to address expected future needs), the hunt usually stops. Engineers generally don't waste time quixotically tilting at 'bottleneck' windmills if the overall performance of the system is acceptable to current and anticipated future needs.

As a side note, this is where the term 'bottleneck' as the source of the analogy is unfortunate, because in actual bottles, the narrowness of the neck is a feature, not a bug. It improves the performance of the overall system relative to its purposes. A bottle without a neck is a jar. Bottles offer numerous advantages over jars for the applications we use bottles for. A bottle is cheaper to seal, for example, because the sealing component (e.g. a cork or metal cap) can be smaller, and, historically, cork and metal were expensive materials. Most crucially: the fact that the narrow neck reduces the flow rate makes it easier to pour out of the bottle in a standardized and controlled way.

Other kinds of performance limitations

The minor flaw of the standard definition of bottleneck is the tendency to make it overly broad. Even the Interim Version of the definition suffers from this problem. It is important to recognize that bottlenecks are not the only type of performance-limiting condition of a system, or even of a PC build.

Because PCs -- more than many other kinds of systems -- are inherently modular, with different modules contributing to performance in different ways, there is a tendency to regard any sub-optimally performing component as a bottleneck. But consider some examples of PC build issues:
  1. A CPU that is incapable of delivering enough FPS to the GPU for a given game, leading to perceptible hitching and slow down;
  2. A GPU that is incapable of driving enough frames to saturate a monitor's refresh rate for a given game;
  3. A power supply that is not capable of supplying enough wattage for a given build;
  4. A GPU that does not support realtime ray tracing, meaning that feature is not available for a given game that supports it;
  5. A power supply that is capable of supplying enough wattage for a given build but is failing, delivering inconsistent power output;
  6. A front panel power button with a faulty contact, meaning the PC will not boot when the button is pressed.
Each one of these examples involves some specific component of a PC build not performing as expected (or at all), where that lack of performance impacts the performance of the entire system. I take (6) to be an unambiguous example of something that is not a bottleneck, and I don't expect many people would regard it as one. It's an issue that impacts performance (indeed, this system won't perform at all) and it's isolated to one component, but it isn't a bottleneck. If you think about the corrective pathway, it doesn't involve increasing the capacity of the limiting component: it just involves fixing or replacing it. The issue also doesn't manifest to the user as any sort of delay or slow down in terms of anything 'moving through' the system. To call the faulty power button a bottleneck would be, I think, to torture the term 'bottleneck.'

I consider (5) an exactly parallel example to (6). In this case, the power supply has the capacity to power the system, it's just faulty. This would likely manifest to the user as system instability (e.g. random reboots). Likewise, the corrective pathway doesn't involve increasing the capacity of the power supply (e.g. moving from a 500 to a 600 watt PSU), it just involves replacing the faulty PSU with a working one. Interestingly, however, in the comment thread on the original post, a redditor asserted that this example was not only a bottleneck but an 'obvious' one. Even more interestingly, another user commented on the same thread that no one could possibly consider this to be an example of a bottleneck and that I was criticizing a straw man. That's doubly amusing because the 'straw man' was put forth as an actual argument in the thread he was responding to. Again, I think this is a torturous use of the term bottleneck. A failing or defective component is an example of a system performance issue distinct from a bottleneck.

Likewise, I don't think it's plausible to argue that (4) is a bottleneck. An inability to do realtime ray tracing may indeed result in a sub-optimal play experience, but it seems misguided to say the GPU's lack of ray tracing support 'bottlenecks' the system's performance. Lack of feature support is a distinct type of system limitation, not a type of bottleneck.

(3) is the first example where it becomes plausible to call something a bottleneck, and indeed the first place where I think most people would start applying the term (e.g. 'The PSU's inadequate wattage is bottlenecking the system.') I certainly don't think this is a ridiculous position to take, but I'm going to argue that it isn't a bottleneck. Again, there's no question that the PSU's inadequate wattage is limiting system performance. There's also no question that the performance limitation is one related to capacity: if the PSU could deliver more watts, the performance limitation would be removed. However, as in (5), the limitation would manifest as system instability.

On the literal bottleneck analogy, I think this is more like the glass of the bottle being slightly porous and causing leaks than it is like the neck of the bottle being too narrow to provide adequate flow. Though the porosity of the glass and the maximum wattage of the PSU are both capacities of their respective components that can limit the performance of the overall system, they are capacity limitations of a different kind than those that lead to bottlenecks. Stated another way: not (even) every capacity limitation of a component that significantly impacts the performance of the overall system is a bottleneck.
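To make that distinction concrete, here's a minimal sketch (with made-up wattage figures, not measurements from any real build) of what a PSU adequacy check actually looks like. The point is that it's a pass/fail capacity question rather than a throughput calculation: nothing 'flows' more slowly when the check fails; the system is simply unstable or won't run.

```python
# Minimal sketch of a PSU adequacy check; all wattage figures are
# hypothetical estimates, not measurements from a real build.

ESTIMATED_DRAW_WATTS = {
    "cpu": 125,                          # assumed sustained package power
    "gpu": 320,                          # assumed board power under load
    "motherboard_ram_storage_fans": 75,  # assumed everything-else budget
}

def psu_is_adequate(psu_rated_watts, headroom_factor=1.2):
    """Return True if the PSU rating covers estimated draw plus some headroom."""
    total_draw = sum(ESTIMATED_DRAW_WATTS.values())
    return psu_rated_watts >= total_draw * headroom_factor

# A pass/fail answer, not a rate: there is no 'how fast' here.
print(psu_is_adequate(500))  # False with these assumed numbers
print(psu_is_adequate(650))  # True
```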

(2) is another example of something that would traditionally be referred to as a bottleneck, and I would go as far as to say most PC builders would regard it as an unambiguous one. I don't think it's quite so unambiguous. The first thing that gives me pause is that this condition (the GPU delivering fewer FPS than the monitor's refresh rate) is incredibly common, even among very high-end gaming systems. In fact, it is a potentially desirable state for a high-end gaming system. A builder with a large budget, for example, might purchase the highest-refresh-rate monitor available (e.g. 360Hz) knowing full well that his (also very high-end) GPU is not capable of fully saturating it all the time in every title he plays. And it would be perfectly rational for him to do so. Given that the 360Hz monitor is (at the time of this writing) the highest-refresh-rate display he can purchase, it makes sense to have the headroom at his disposal. But to say the GPU is 'bottlenecking' if it isn't constantly driving 360 FPS in every single title would be to drop a ton of context about how games work: notably that framerates are variable and that performance differs from game to game and moment to moment.

As a side note, another important element here is the market context. At the time of this writing, the most powerful consumer GPU yet announced is an RTX 3090. Though independent benchmarks have not been released, it is clear that even that card cannot fully saturate a 360Hz display at every reasonable consumer resolution and combination of game settings. So if someone asserts that a 3090 is a 'bottleneck' in a given situation, the obvious response is: compared to what? That is: compared to what possible alternative that would alleviate the 'bottleneck'? As of now, the universe (more specifically the portions of it controlled by Nvidia and AMD) does not provide one. As noted earlier, this is like considering the nature of reality a 'bottleneck' to having your cake and eating it, too.

More deeply, the situation has to be evaluated with reference to whether the performance impact on the overall system is significant relative to its intended purpose. The fact is that most people can't perceive the difference between 120Hz and 240Hz, let alone between 240Hz and 360Hz. This includes even most gamers, whom we would expect to appreciate the difference better than the general population. Perhaps some elite esports athlete would benefit from consistently driving 360 FPS as opposed to achieving a variable framerate between, say, 250 and 310 FPS, but for the average gamer the difference is not significant. (I realize there are other reasons why it is desirable for a GPU to drive a higher framerate than a display can refresh at, but I'm ignoring them for the purposes of this example.)
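To put rough numbers behind that claim (illustrative arithmetic only, not benchmark data), the differences at these refresh rates amount to fractions of a millisecond per frame:

```python
# Illustrative frame-time arithmetic only; these are not measured results.

def frame_time_ms(fps):
    """Time budget for a single frame, in milliseconds, at a given framerate."""
    return 1000.0 / fps

for fps in (120, 240, 250, 310, 360):
    print(f"{fps:>3} FPS -> {frame_time_ms(fps):.2f} ms per frame")

# 360 FPS is ~2.78 ms per frame, 310 is ~3.23 ms, 250 is ~4.00 ms:
# the gap between 'saturating' a 360Hz display and falling somewhat short
# of it is on the order of a millisecond per frame.
```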

Example (1) is, in my opinion, a clear and uncontroversial example of a bottleneck, properly understood. Here, a component (the CPU) is limited in a way that significantly impacts the performance of the entire system. This impact is significant because it is clearly perceptible to the player in the form of an undesirable consequence: noticeable lag and stuttering.

Like example (3), the limitation of the CPU is one of capacity. But it is a specific type of capacity limitation: one that has to do with (by analogy) flow through the system. The rate at which the CPU can deliver frames to the GPU forces the GPU to wait long enough that the delay results in a play experience that is not smooth. In other words, a bottleneck involves a limit in the throughput of one component limiting the performance of the entire system. Finally, this yields the proper definition of bottleneck, which I'll call the Rigorous Engineering Version of the concept. It is the one articulated in the second paragraph of the Wikipedia entry:

bottleneck (Rigorous Engineering Version): noun. A component of a system, the throughput limitations of which impose a significant limit on the overall performance of the system.

'Bottleneck,' properly understood in this way and restricted to this usage, is a valid concept and is applicable to certain types of PC build situations, as in (1).
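To make the Rigorous Engineering Version concrete, here's a toy model of my own (a sketch for illustration, not a benchmarking methodology): treat the build as a pipeline of stages, each with a maximum sustainable throughput. The system's effective throughput is capped by the slowest stage, and that stage is the bottleneck.

```python
# Toy illustration of the Rigorous Engineering Version: the stage with the
# lowest throughput caps the whole system. All numbers are hypothetical.

def find_bottleneck(stage_throughputs):
    """Given {stage_name: max FPS it can sustain}, return (system_fps, bottleneck)."""
    bottleneck = min(stage_throughputs, key=stage_throughputs.get)
    return stage_throughputs[bottleneck], bottleneck

# Example (1): a CPU that can only prepare ~45 FPS of work for a GPU capable of 144 FPS.
build = {"cpu_simulation_and_draw_calls": 45, "gpu_rendering": 144}
print(find_bottleneck(build))  # (45, 'cpu_simulation_and_draw_calls')

# Contrast with example (6): a dead power button has no throughput at all.
# It isn't a slow stage in the pipeline; it's a broken part, so this model
# (rightly) has nothing to say about it.
```

Note that the model says nothing about why a stage is slow or whether the cap matters to the user; that's the work being done by the 'significant limit' clause in the definition.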

So much for the theoretical discussion. In a future post, I'll take on the practical implications for PC building and PC building advice and, in particular, the questions PC builders should be asking (and answering) instead of the various flavors of 'Will component x bottleneck my system?'

Friday, May 1, 2020

COVID-19: The Story So Far


From an overlong Facebook post.

The story so far:

For years, experts had been warning us to prepare for a pandemic respiratory disease. We didn't. Government in particular did not adequately do so, nor did the healthcare industry, which had no economic incentive to prepare because healthcare pricing in this country is controlled by the federal government, particularly in the hospital sector, where the weight of such a pandemic would fall.

Then a pandemic hit and we squandered many chances to respond promptly (to the extent we could have being unprepared) because the present occupants of the White House and the state houses didn’t heed the early indicators. The former, in particular, attempted his typical routine of trying to create an alternate reality in which the virus went away on its own. The virus didn’t get the memo.

The inadequacy of the preparedness and initial response resulted in tens of thousands of avoidable deaths.

When governments finally started responding, the initial step was to lock down the population to slow the spread of the virus. We were told this was to ‘flatten the curve’: to avoid the surge of infections that uncontrolled spread would produce all at once, a surge that would exceed the hospital system's capacity. If that surge capacity gets overwhelmed at any point, we were told, more people than necessary will die. Flattening the curve does not reduce the overall number of people infected or needing hospitalization, though: it just spaces them out over time. The area under the curve remains the same (setting aside the excess deaths that come from overwhelming hospital capacity); it's just flatter.

We were told this step, and the lives jeopardized and reduced in quality as a result of it, was a necessary sacrifice. But this phase was supposed to be only temporary, while we built testing and treatment capacity. Testing capacity lets us more selectively isolate people with the virus and those who may have come into contact with them, allowing healthy people to resume their normal lives. And increasing treatment capacity lets us treat more people at once, reducing the need to flatten the curve.

Now, almost two months later, we've made some progress. As a country, we've gone from doing only a few thousand tests a day to over 200,000. It's harder to get numbers on increased hospital capacity, though the number of people in ICUs on a given day has come down significantly from the peak a few weeks ago (from over 15,000 to around 9,000). That suggests the curve flattening is working, but also that in many places (though by no means all) there is currently less need for it, because excess hospital capacity exists. And though 200,000 tests per day is impressive, it's still an order of magnitude short of what the experts say we need.

And still 95%+ of the population remains under lockdown. Neither the state governments nor the federal government has articulated sufficiently detailed plans for getting the treatment or testing capacity we need to end the lockdowns.

As a result, the de facto plan is to keep the entire population under indefinite house arrest (without any actual crime, trial or indeed any legal basis) as a form of preemptively rationing access to (allegedly) scarce healthcare resources. We are now told this will continue until the rate of infections or hospitalizations declines significantly, which was never something curve flattening was supposed to achieve. And the de facto plan for dealing with the massive economic consequences is socialism: rationing access to other necessary resources to deal with the supply chain disruptions the economic devastation is causing, and engaging in massive, hastily assembled wealth redistribution schemes that ignore the fact that if production isn't occurring, then the wealth to be redistributed isn't being created.

And, perhaps most shockingly, almost all ‘respectable people’ are willing to tolerate it. They are fine with the government pointing a gun at them and their neighbors and saying ‘Don’t leave your house. Don’t run your business. Don’t have your kids learn. Don’t pursue any value other than the ones inside your four walls. Unless you’re a healthcare or ‘essential’ worker, in which case you are expected to put yourself at risk for the Common Good. Do this indefinitely because we’re in charge and, really, this is all we can do. We can point guns, tell people what not to do and shuffle wealth other people created around. We can’t adequately increase testing capacity. We can’t even have an actual plan or strategy for increasing and managing the treatment capacity we have. We certainly can’t invent a vaccine. So stay inside, because we have calculated (correctly) that you, the voting public, will tolerate lives being destroyed, including your own, if it happens slowly enough, in private, and in the name of the common good; but not if it happens rapidly and all at once with people hooked up to ventilators.’

Apparently, they are absolutely right in that calculation. Most people apparently don’t care whether they live. Not in a meaningful sense. If they’re motivated at all, it’s to avoid death. But living is not avoiding death. They may succeed in that. But it isn’t living. When it comes to actual living, we’ve done more in the last two months to dig our own graves and climb into them than any other generation of Americans.

Thursday, November 8, 2018

A Brief History of Recent American Politics and Why It's a Huge Mistake To Go 'All In' on Either Major Party

For most of the 20th century and all of the 21st so far, the Democratic Party has been driven by two consistent ideological themes: socialism and secularism. Socialism is bad. Secularism is good. Being a Democratic politician during that period has basically been an exercise in how overt you can be with your socialism and secularism and still maintain power, because the Democratic Party is always more socialist and secular than the country as a whole.

Over the same time period, and particularly since 1980, the Republican Party has lacked a similar unifying ideology and has essentially been a coalition party for those who believe they stand to lose if there's more secularism or socialism. For most of that period, this coalition included
  • Wealthy people
  • Business interests
  • Evangelical Christians (especially since 1980)
  • Conservatives (usually middle/upper class and white) with vested interests in traditional values and social structures
  • Principled free market, limited government supporters
Again, there's nothing essential that unifies those groups under the same banner. They're a coalition opposing socialism and/or secularism. 

Opposing socialism is good, and to the degree that one's membership in the coalition is motivated by that opposition (e.g. the free market supporters and the overwhelming majority of wealthy people and business interests who acquired their wealth legitimately), it's good.

To the degree that one acquired one's wealth and power illegitimately, most commonly as the result of some bestowed government privilege that is really a form of statist/socialist/fascist cronyism (a minority of wealthy people and businesses), it's bad, and it gives the legitimate people a bad name.

To the degree that one is motivated by opposition to secularism (evangelicals, conservatives), it's bad.

To the extent that one was in that coalition for good reasons, being in it alongside people who were there for bad reasons undermined one's good positions, even if joining the coalition was a necessary evil.

Since at least the Clinton Administration, it started to become clear that demographic changes in the electorate were going to make it more and more difficult for the Republican coalition to gain and maintain power. The electorate is becoming younger and more diverse, which means more socialist and more secular.

Seeing this, the Republican Party undertook a well-documented and successful effort to achieve and exploit structural advantages that would allow it to maintain power and further its political objectives in spite of the changing electorate. This included working to gain control of state legislatures and securing state and federal judicial appointments. Doing so enabled Republicans not only to better enact their policy goals, but also to stack the deck in their favor in the face of changing demographics through things like partisan gerrymandering and voting requirements that made things more difficult for likely Democratic voters. Essentially, the Republicans embarked on, and remain committed to, a project to establish and maintain long-term minority rule.

In the midst of this, another group suddenly became ascendant as a political force: older, lower income white people who previously leaned Democratic but now increasingly felt like their interests were being threatened by the same demographic trends the establishment Republicans were threatened by. Crucially, these people tended to be culturally conservative (and therefore anti-secularist) but economically socialist- (or at least statist-) leaning, favoring protectionist economic measures and social programs that they believed benefited them.

These folks found a voice in Donald Trump and his populism, raising the question of how the existing Republican coalition was going to deal with this emerging faction. In some ways, the Trump faction was an odd fit for the traditional coalition: unlike the coalition's base, it was blue collar and economically statist. Stylistically, it was more populist and, particularly in the person of Trump, more vulgar than the traditional base. Also in the person of Trump, it stood in sharp contrast to the values of the evangelical faction in particular. At the same time, it was well aligned with the traditional base in its cultural attitudes, its ethnic composition and its need to exploit the same structural advantages to maintain political power in the face of changing demographics.

In the end, the established coalition ended up embracing the Trump voters, but in a sort of bargain with the devil. In exchange for more voters and a commitment to take up common cause in advancing the long term minority rule agenda, the established coalition became beholden to Trump and his base. This is most obvious in stylistic and cultural ways: Trump (and to a lesser extent his base) are more vulgar, populist, nationalist (and white nationalist) and amoral than the established coalition would prefer (or at least would prefer to be perceived as).

But there's a less obvious thing the established coalition had to abandon in the bargain: the last remaining connections (and pretenses of connections) to free markets, limited government and genuine capitalism (as opposed to 'crony capitalism'). The Trump approach includes huge elements of protectionism and of bestowing economic favors on preferred constituencies. It includes support for leftist positions on issues like healthcare (including support for the essential features of Obamacare, so long as that support is voiced while attacking the 'Obama' part of it). It also includes a more overtly authoritarian tone and approach to government, which Trump personifies in an absurd sort of way. To be sure, elements of some of these things were always present in the Republican coalition, in which good views on economic issues were always in the minority and good views on cultural issues were even scarcer. But the good ideological bits of the Republican coalition (the better economic stuff) have now been rendered inert and replaced by protectionist, nationalist cronyism. And the sneaky, patrician, slow-burn approach to achieving minority rule is becoming ever more overt, authoritarian and rapid.

The Democratic party, given its underlying principled commitment to socialism, was never a great home for people who cared about economic and political freedom (it was, and still is, a better home in some respects for people concerned with certain political freedoms related to personal values, identity, autonomy and choice). So historically it was understandable that people concerned with freedom (especially on economic issues) gravitated to the Republican coalition. There really was no other choice if you wanted your views represented by people with actual influence in government. Similarly, the Republican party has never valued diversity, so someone strongly motivated by a concern for that value might understandably gravitate towards the Democrats.

But following the Trump takeover, both of the major parties' core economic and political approaches are hostile to freedom. Both parties have some isolated pockets where they are better on certain cultural issues, the Democrats more so than the Republicans at present, but neither is consistently good. In their essential features, neither party is currently a home for people who put a high value on freedom, in particular those who understand that political, economic and personal freedom aren't distinct things but are all manifestations of the same fundamental human need to live according to one's own choices and values, rather than under coercion.

The other thing that happened during this period was that people started treating political identity like sports-team fandom. Rather than seeing political affiliation as a minor element of identity or a tactical choice, people decided that identifying with and finding a home in a political tribe was very important.

Various factors contributed to this. The fact that our political system is a two-party one, including structural factors that confer official power on the two dominant parties in ways that are not appropriate to what ought to be private clubs, serves as a backdrop for this. It is hard to influence politics outside the two parties. But against that backdrop, lacking an actual underlying unifying theme, the Republicans could only really find common identity in one thing: opposing Democrats. 'Being opposed to Democrats' became what it meant to be a partisan Republican, which naturally perpetuated an 'us and them' mentality. Since Democrats believe in the righteousness of their secular/socialist core ideology, it became equally natural to cast anyone who didn't embrace it as an enemy. This 'identity politics' serves to drive people who might find common cause on particular issues or even more granular principles into adopting one party identity or the other, and increasingly to the inflexible, tribal extremes of those identities.

But it's important to step back and remember that party affiliation does not have to be part of one's core identity. Closely identifying with a party may be required for a politician, but it isn't for the rest of us. And that is a benefit to us non-politicians, because neither party is wholly or even largely good. Neither represents a consistent, logical and necessary grouping of principles or positions. Does allowing people to marry someone of the same sex if they wish require a single-payer healthcare system? Does a strict adherence to Christian doctrine entail strict border enforcement? Does a belief that native-born Americans deserve special privileges entail laissez faire capitalism? Do some of these even represent coherent packages of viewpoints or are they hopelessly contradictory?

Working with or within the present political parties may be a useful tactic in achieving one's long term political goals, but doing so does not have to involve finding a 'home' there or buying into the abhorrent positions or contradictions doing so requires.

In particular, it is a mistake to go 'all in' on a partisan political identity in this way if one's primary motivation is to oppose the other guys. Even if one (correctly in my view) identifies socialism as evil and (correctly in my view) identifies socialism as being at the ideological core of the Democratic Party, that does not justify fully embracing the mess of contradictions, bad ideas and (isolated) good ideas that constitutes a partisan Republican political identity simply because the Republicans are (nominally) the non-socialists.

Perhaps it's possible to work for change within one or both political parties to replace the current mixed- to fundamentally-anti-freedom core ideologies, bad positions and contradictions with something essentially good and pro-freedom. Perhaps it makes sense tactically to support one party or the other (or their candidates) at certain times or on certain issues in pursuit of a long term pro-freedom agenda. But to do so does not require one to don an elephant's trunk or a donkey's tail.

I sympathize with people who want to see a more secular, diverse, less cronyistic society but feel forced to accept a package deal that includes socialism if they want to find a political home in one of the two major parties. I similarly sympathize with principled, freedom-loving people who previously found common cause and even a voice within the Republican Party but now find their party led by a vulgar, amoral economic nationalist. It can be jarring and dispiriting to feel like you have no political 'home'. Even more so if one goes from having a political 'home' to suddenly having none.

Especially with regard to the Republican Party, this last point is worth further attention because the turn for the worse was so rapid and so recent as to be disorienting. It always would have been a mistake to go 'all in' on the Democrats or the Republicans, even when one or the other was better on certain issues (or on the balance of them). But it's an even bigger mistake to go 'all in' on the Republicans now that they've transformed into something that is, in its core principles and on the balance of the issues, at least as bad as the Democrats and is, arguably, the greater threat of the moment because they happen to be the party in power.

No amount of concern for positive values (such as freedom) or concern that the 'other guys' will advance negative ones (such as socialism) justifies going 'all in' on the Republicans. In fact, it's unclear that even allying with them tactically at present out of concern for those values is prudent since the recent shift in the party is precisely away from those values. It's unclear what can be accomplished by throwing one's lot in with such 'allies', other than inadvertently rewarding them for turning in the wrong direction. Of course, none of that is to imply that one ought to become a partisan Democrat instead. Joining a tribe -- or even accepting the idea that tribalism is required -- is far from the only alternative.

(A related error is to assume that because, e.g., nothing has changed on the Democratic side, the Republican side must still be the better alternative. But this is like saying 'The unpleasant odor is still present on the other side of the room, therefore I'm going to stay on this side even though it has suddenly become fully engulfed in flames.' Perhaps avoiding the bad smell was the right choice at one time. But maybe now it would be better to endure the stink. Or perhaps leave the room entirely.)

More fundamentally, it's a mistake -- though an understandable and easy one to make -- to fail to identify and accept the present reality of what both parties are. It's a mistake to support or hitch one's wagon to one party or the other just because one previously has, whether uncritically, out of inertia or out of a failure to adjust one's evaluations in response to changing circumstances (even though it can be hard to process the changes and update the evaluation). And most crucially, it is downright dangerous to go 'all in' on fundamentally flawed parties in a way that implies, or genuinely causes, one to become a member of a political tribe whose core values and actions ultimately promote the destruction of one's own values.

Tuesday, July 4, 2017

Independence Day in Trump's America

This Independence Day, I am grateful to live in a country led by a strong, fearless, authoritarian figure who can fix everything and solve all our problems. I am grateful that America has finally realized the promise of its founding and elected a reality show entertainer and expert Tweeter to its highest office. I am thankful that instead of conventional politicians, we finally have a man in the White House who understands the common people with the unique perspective that only living in a gilded penthouse can provide.

I weep patriotic tears of joy at the courage our Great Leader displays in taking on the true enemies of our nation: a free press and our country's court system. May multitudes of fireworks spew forth tonight like the torrents of 140-character-truthbombs he targets at the hearts of these un-American swine.

I thank the Great Leader and his Great Collaborators in Congress for fighting dangerous ideas like the separation of church and state. I thank them also for working hard to correct the errors of our Founding Fathers who, let's face it, were pretty cool but could they really be as amazing as the Great Leader? Had John Adams lived under the tyranny of Barack Obama rather than George III, he surely would have appreciated the benefits of a government of men -- successful, non-loser, high-energy, nonconsensual-female-genitalia-grabbing men -- not laws.


I look skyward, not only at the pyrotechnics (which are awesome, btw!), but also towards a future where our Great Leader will tweet America to realize its true potential as a nation of non-immigrants with massive social programs and expansive government controls that benefit the true Sons and Daughters (but mostly Sons) of Liberty: native-born people who basically look like me and simultaneously think they were born in the greatest country in the world yet have somehow gotten a raw deal.