Since this is apparently becoming an increasingly sporadic and PC building-focused blog, I feel compelled to comment on the recent controversy surrounding LTT and Linus Media Group's hardware reviews and other practices. GamersNexus' video lays it all out nicely.
- Full disclosure: I'm a huge admirer of what LMG has built and, in general, the way they've grown and run their business. Building what Linus and his team have built is no small achievement, and the rising tide they've created in the tech YouTuber space has lifted a lot of boats.
- While I may not agree with every position he takes or decision he makes, I believe Linus to be a highly ethical person who operates from a strong personal moral compass. Again, his compass and mine don't align 100% of the time, but I'm saying I think he is a scrupulous dude.
- That being said, I do think LMG's 'quantity over quality' approach is leading to many of the errors and questionable behavior that Steve is talking about. As the LMG team has said themselves, that strategy probably made sense while LMG was growing, but it's not clear that it's necessary or optimal now that the company is worth over $100 million.
- Being that big creates an obligation for LMG to recognize that its actions and mistakes can have a massive impact on smaller partners, businesses and other creators. This is the focus of GN's criticisms in the second part of the video and the part that resonates most deeply with me.
- Parenthetically, this sort of takedown piece is very on-brand for GN. There's a lot GN does that I find valuable, but the 'self-appointed guardian of ethics in the PC hardware community' shtick wears thin sometimes.
- The reviewer is typically working with only one review sample of the product;
- That review sample is provided by the manufacturer relatively close to the product launch, limiting the time the reviewer has to test and evaluate the product;
- The reviewer is under an NDA and embargo (usually lasting until the product launch date), limiting reviewers' ability to share data and conclusions with one another during the narrow window day-one reviewers have to test.
- Component selection: Though Cooler A outperforms Cooler B on the high-TDP CPUs reviewers typically use for controlled testing, the advantage might disappear with a lower-TDP CPU that both coolers can cool adequately. Alternatively, as we've seen with recent high-TDP CPUs, the limiting factor in the cooling chain tends not to be anything about the cooler (assuming it's rated for that CPU and TDP) but rather the heat transfer capacity of the CPU's IHS. I recently switched from an NH-U12S (with two 120mm fans) to an NH-D15 (with an extra fin stack and two 140mm fans) in my 5800X3D system and, with the fans in both setups at 100%, saw no improvement in thermals under load, I suspect because of this very issue. (A rough thermal-resistance sketch of this point follows this list.)
- Mount quality: CPU coolers vary greatly in ease of installation. So even if Cooler A outperforms Cooler B when mounted properly, if Cooler A's mounting mechanism is significantly more error-prone (especially in the hands of an inexperienced user), that advantage may be lost. In fact, if Cooler B's mounting mechanism is significantly easier to use or less error-prone, it might actually outperform Cooler A for the majority of users because more of them will achieve a good mount. The same applies to...
- Thermal compound application: Not only might a given user apply too much or too little thermal compound (where a reviewer is more likely to get it right), but, more deeply, the quality of the application and spread pattern can vary substantially between installation attempts, even among experienced builders, professional reviewers included. Anyone who has built multiple PCs has had the experience of getting poor CPU thermals, changing nothing about their setup other than remounting the CPU cooler (seemingly doing nothing differently) and seeing a multi-degree improvement. That outlets like GN provide contact heatmaps as part of their rigorous testing is a nod to this issue, but they typically only show the heatmaps for two different mounting attempts (at least in the videos), and that seems like too small a sample size to be meaningful (see the small-sample sketch after this list). This brings up the issue of...
- Manufacturing variance from one unit of the same product to another: At most, these outlets are testing two different physical units of the same product, and frequently just one. I don't know this, but I suspect that because good contact between the CPU heat spreader and cooler coldplate is such a key factor in performance, the quality and smoothness of the coldplate matters a lot, and is exactly the kind of thing that could vary from one unit to another due to manufacturing variance. All other things being equal, a better brand/SKU of cooler will have less unit-to-unit variance, but the only way to determine this would be to test with far more than one or two units, which none of these reviewers does (and, indeed, none can do with just one review sample provided by the manufacturer). Absent that data, it's very similar to the silicon lottery with chips: your real-world mileage may vary if you happen to win (or lose) the luck-of-manufacturing draw.
- Ambient temperature and environmental heat dissipation: Proper laboratory conditions for cooler testing involve controlling the ambient environmental temperature. That means keeping it constant throughout the test, which means the test environment must have enough capacity to eliminate the heat the test bench is putting out (along with any other heat introduced into the test environment from outside during the test period, like sun shining through the windows). If the user's real-world environment also has this capacity, the test results are more likely to be applicable. If, on the other hand, the real-world environment can't eliminate the heat being introduced (say it lacks air conditioning, is poorly ventilated or has lots of heat coming in from other sources), it changes the whole picture. Fundamentally, ambient temperature is a factor a responsible reviewer must control for in a scientific test. However, it is almost never controlled for in real-world conditions. And, arguably, the impact of uncontrolled ambient temperature is one of the two most significant factors affecting quality of life in the real world (the other being noise, on which see below). From a certain point of view, PC cooling is about finding a balance where you get heat away from your components fast enough that they don't thermal throttle (or exhibit other negative effects of heat) but slowly enough that you don't overwhelm the surrounding environment's ability to dissipate that heat. If the PC system outputs heat faster than the outside environment can dissipate it, the outside environment gets hotter, which sucks for your quality of life if you're also in that environment and trying to keep cool. This is why, considering only this issue, a custom water cooling solution with lots of fluid volume would yield a higher quality of life for most users than, e.g., a single-tower air cooler. The greater thermal mass and conductivity of the fluid vs. the air cooler's heat pipes and fin stack allows more heat to get away from the components quickly while remaining internal to the system, to be transferred into the environment over time, which is a better match for the primary ways we cool our environments (like air conditioning), which are better at dissipating relatively even, rather than spiky, heat loads. (A toy simulation of this smoothing effect follows this list.)
- Case and case airflow: I think this is by far the most significant factor in the real world. Any relative performance difference between Coolers A and B under laboratory conditions can easily be wiped out or reversed when either cooler is placed in a particular setup with particular airflow characteristics. Both coolers might perform great in a case with stellar airflow and perform poorly in one that is starved for airflow. But, more deeply, certain cooler designs perform better under certain case airflow conditions than others. An AIO whose radiator fans can't create enough static pressure to overcome the case's airflow restrictions won't realize its full performance potential. Reviewers (rightly) try to create consistent test conditions that are fair to all the products being tested, but your setup probably looks nothing like theirs.
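To make the component-selection point a bit more concrete, here's a minimal back-of-the-envelope sketch of the series thermal-resistance idea. Every resistance and wattage value is invented purely for illustration (not a measurement of the U12S, D15 or any real CPU); the point is just that once the die-to-IHS resistance dominates the chain, a better heatsink moves die temperature surprisingly little.

```python
# Crude series thermal-resistance model: T_die = T_ambient + P * (R_internal + R_paste + R_cooler).
# Every number below is an illustrative guess, not a measured figure for any real product.

def die_temp(power_w, r_internal, r_paste, r_cooler, t_ambient=25.0):
    """Steady-state die temperature (degC) for a simple series resistance chain."""
    return t_ambient + power_w * (r_internal + r_paste + r_cooler)

POWER_W = 120.0      # hypothetical sustained package power
R_INTERNAL = 0.35    # degC/W: die -> solder/TIM -> IHS (the part no cooler can fix)
R_PASTE = 0.05       # degC/W: thermal compound layer

for name, r_cooler in [("smaller single tower", 0.12), ("bigger dual tower", 0.08)]:
    print(f"{name}: ~{die_temp(POWER_W, R_INTERNAL, R_PASTE, r_cooler):.1f} degC at {POWER_W:.0f} W")

# With the internal resistance dominating (~0.35 of ~0.5 degC/W total), a meaningfully
# better cooler only buys a handful of degrees here: the chain, not the heatsink, is the limit.
```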
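And on the sample-size worry from the thermal-compound and manufacturing-variance points above, a quick sketch, again with made-up numbers, of how little two mounts (or two retail units) tell you about spread:

```python
# Why two mounting attempts (or one or two retail units) say almost nothing about variance.
# Assume, hypothetically, that mount-to-mount load temperature varies with a true standard
# deviation of 1.5 degC around 80 degC; then see how well small samples estimate that spread.
import random
import statistics

random.seed(0)
TRUE_MEAN, TRUE_STD = 80.0, 1.5  # degC; invented for illustration

def estimated_std(n_mounts):
    """Simulate n mounts and return the estimated standard deviation."""
    temps = [random.gauss(TRUE_MEAN, TRUE_STD) for _ in range(n_mounts)]
    return statistics.stdev(temps)

for n in (2, 10, 50):
    ranked = sorted(estimated_std(n) for _ in range(10_000))
    lo, hi = ranked[500], ranked[9500]  # middle 90% of the 10,000 simulated estimates
    print(f"n={n:>2}: estimated spread typically lands anywhere from {lo:.2f} to {hi:.2f} degC")

# With n=2 the estimate ranges from roughly 0.1 to 3 degC even though the true value is 1.5,
# so two heatmaps or two retail units can't distinguish a consistent cooler from a lottery.
```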
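Finally, a toy simulation of the heat-smoothing argument from the ambient-temperature point: a bursty load either dumps heat straight into the room (air cooler) or first warms a hypothetical 2 L coolant loop that sheds it gradually. All the constants are guesses; only the shape of the result matters.

```python
# Toy model of "a big water loop smooths heat into the room". A bursty heat load (gaming
# spikes) either goes into the room immediately (air cooler, tiny thermal mass) or first
# warms a large coolant volume that sheds heat to the room in proportion to how warm it is.
# Everything here is a simplification with made-up constants.

COOLANT_MASS_KG = 2.0          # ~2 L of water in a hypothetical custom loop
WATER_HEAT_CAP = 4186.0        # J/(kg*degC)
LOOP_TO_ROOM_W_PER_DEGC = 8.0  # heat shed to the room per degC the coolant sits above ambient (guess)
DT = 1                         # simulation step, seconds

def bursty_load(t):
    """300 W during alternating 60 s 'gaming spikes', 60 W otherwise."""
    return 300.0 if (t // 60) % 2 == 0 else 60.0

coolant_delta = 0.0  # coolant temperature above ambient, degC
for t in range(0, 600, DT):
    p_in = bursty_load(t)
    p_to_room_air = p_in                                      # air cooler: room sees the spike immediately
    p_to_room_loop = LOOP_TO_ROOM_W_PER_DEGC * coolant_delta  # loop: room only sees heat via warmed coolant
    coolant_delta += (p_in - p_to_room_loop) * DT / (COOLANT_MASS_KG * WATER_HEAT_CAP)
    if t % 60 == 0:
        print(f"t={t:3d}s  load={p_in:5.0f} W  into room (air)={p_to_room_air:5.0f} W  "
              f"into room (loop)={p_to_room_loop:5.0f} W")
```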
- The compatibility and rated performance of the cooler for a given CPU and case/mobo. This is spec sheet stuff, though some level of testing validation is valuable.
- How easy and foolproof the mounting mechanism is, which is best surfaced through an on-camera build demonstration, not rigorous testing. Here, I find build videos far more valuable than product reviews, because if you see an experienced YouTuber struggling to mount a dang cooler, it should at least give you pause. I'd also note that build videos are inherently more entertaining than product reviews, because it's compelling to watch people struggle and overcome adversity, and even more fun when they do so in a humorous and good-natured way, which is a big part of the secret sauce of folks like Linus and PC Centric.
- The noise level of any included fans when run at, say, 30%, 50%, 80% and 100% speed. This might be idiosyncratic to me (though I suspect not), but I'm particularly sensitive to fan noise. Given that the cooler can fit in my case and can handle the heat output of my CPU, what matters to me is how noisy my whole system is going to be, both at idle and under load. With any cooler, I assume I'm going to have to tune the curves of both its fan(s) and my case fans to find the best balance of noise and cooling across different workloads (e.g., idle, gaming load, full load). I can't possibly know how this will end up in my build in advance, and rigorous testing under laboratory conditions doesn't help me. So the best I can hope for from a reviewer is to give me a sense of how much noise the cooler's fans will contribute to overall system noise at various RPM levels. (This is the primary reason I favor Noctua fans and coolers and am willing to pay a premium for them: they are super quiet relative to virtually all competitors at a given RPM or thermal dissipation level. And it's the primary advantage of switching to the D15 in my current setup, since the larger fans and dual-tower design mean it can dissipate more heat with less noise than the U12S.)
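Since what I ultimately care about is the system total, here's a rough sketch of how per-fan noise figures combine, using the standard rule for summing independent (incoherent) noise sources. The dB(A) numbers are placeholders, not measurements of any real fan.

```python
# Combine per-fan sound pressure levels into a rough total for the whole system,
# using the standard rule for adding incoherent sources: L = 10*log10(sum(10^(Li/10))).
# The dB(A) figures below are placeholders, not measurements of any real fan.
import math

def combined_spl(levels_db):
    """Total sound pressure level (dB) from several independent noise sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

case_fans = [22.0, 22.0, 24.0]  # three hypothetical case fans at their tuned speeds
for cooler_fans in ([19.0, 19.0], [27.0, 27.0]):  # a quiet pair vs. a louder pair of cooler fans
    total = combined_spl(case_fans + cooler_fans)
    print(f"cooler fans at {cooler_fans[0]:.0f} dB(A) each -> system ~{total:.1f} dB(A)")

# A couple of quiet fans barely move the system total above the case fans alone, while
# louder ones can dominate it, which is why per-fan noise at realistic speeds is the
# number I most want from a review.
```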
- The tester committed human error on the outlier test runs, in which case they should try to track it down and correct it, or else throw out the results of those five trials.
- The testing methodology fails to account for some confounding factor that was present in those five cases and not the others, in which case the reviewer ought to track that down and control for it if possible.
- The individual unit being tested (remember, these reviewers are typically testing only one unit of the product being evaluated) exhibits weird behavior. Technically, this is an instance of (2), because something must be causing the particular unit to behave oddly; it's just that the reviewer hasn't been able to control for that something. And given the time constraints on day-one reviews especially, this is when an individual reviewer is most likely to say 'I don't know... maybe I have a defective unit here, but I can't be sure and don't have the time or resources to investigate further.'
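For what it's worth, this is the kind of quick sanity check I'd imagine running when a handful of runs look off: a simple median/MAD outlier flag on hypothetical numbers. It only tells you which runs are suspect, not which of the explanations above is the cause.

```python
# Flag suspicious test runs with a simple robust outlier check (median absolute deviation).
# The temperatures below are invented; the point is the mechanics, not the data.
import statistics

runs = [74.1, 74.5, 73.9, 74.3, 79.8, 74.2, 80.1, 74.0, 74.4, 79.5]  # degC, hypothetical

median = statistics.median(runs)
mad = statistics.median(abs(x - median) for x in runs)

for i, temp in enumerate(runs, start=1):
    # ~3 MAD is a common rough cutoff for "this run doesn't belong with the others"
    flag = "OUTLIER?" if abs(temp - median) > 3 * mad else ""
    print(f"run {i:2d}: {temp:5.1f} degC {flag}")

# A flag like this only says "investigate me": the cause could still be tester error,
# an uncontrolled confounder, or a wonky unit, per the list above.
```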