As there is ongoing drama with Zen 5 and power issues, there are people with the instruments and the motivation to investigate this. You should consider contacting Gamers Nexus and helping them get your test suite running. They can measure power draw and do a thermal analysis of this CPU, and they'd likely be eager to, given the possibility of making a bunch of dramatic YouTube content about design flaws in widely used hardware. That's pretty much their whole schtick in recent years.
> Modern CPUs measure their temperature and clock down if they get too hot, don't they?
Yes. It's rather complex now and it involves the motherboard vendor's firmware. When (not if) they get that wrong, CPUs burn up. You're going to need some expertise to analyze this.
> [...] a bunch of dramatic YouTube content [...]
That framing doesn't do him and the team justice. There is (or rather, was) a 3.5-hour-long story about NVIDIA GPUs finding their way illegally from the US to China, which got taken down by a malicious DMCA claim from Bloomberg. It is quite interesting to watch (it can be found on archive.org).
GN is one of the last pro-consumer outlets that keep digging and shaking the tree big companies are sitting on.
For the record, I think GN is excellent and highly credible.
> which got taken down
Not everywhere:
https://archive.org/details/the-nvidia-ai-gpu-black-market-i...
That copy is missing the chapters. Here they are:
00:00:00 - The NVIDIA AI GPU Black Market
00:06:06 - WE NEED YOUR HELP
00:07:41 - A BIG ADVENTURE
00:10:10 - Ignored by the US
00:11:46 - BACKGROUND: Why They're Banned
00:16:04 - TIMELINE
00:21:32 - H20 15 Percent Revenue Share with the US
00:26:01 - Calculating BANNED GPUs
00:29:31 - OUR INFORMANTS
00:31:47 - THE SMUGGLING PIPELINE
00:33:39 - PART 1: HONG KONG Demand Drivers
00:43:14 - PART 1: How Do Suppliers Get the GPUs?
00:48:18 - PART 1: GPU Rich and GPU Poor
00:56:19 - PART 1: DATACENTER with Banned GPUs, AMD, Intel
01:06:19 - PART 1: Chinese Military, Huawei GPUs
01:09:48 - PART 1: How China Circumvents the Ban
01:19:30 - PART 1: GPU MARKET in Hong Kong
01:32:39 - WIRING MONEY TO CHINA
01:36:29 - PART 2: CHINA Smuggling Process
01:43:26 - PART 3: SHENZHEN's GPU MIDDLEMEN
01:50:22 - PART 3: AMD and INTEL GPUs Unwanted
01:56:34 - PART 4: THE GPU FENCE
02:06:01 - PART 4: FINDING the GPUs
02:15:12 - PART 4: THE FIXER IC Supplier
02:21:12 - PART 5: GPU WAREHOUSE
02:27:17 - PART 6: CHOP SHOP and REPAIR
02:34:52 - PART 6: BUILD a Custom AI GPU
02:56:33 - PART 7: FACTORY
03:01:01 - PART 8: TAIWAN and SINGAPORE Intermediaries
03:02:06 - PART 9: SMUGGLER
03:05:11 - LEGALITY of Buying and Selling
03:08:05 - CORRUPTION: NVIDIA and Governments
03:26:51 - SIGNOFF
Given that Gamers Nexus needs the ad revenue, wouldn't linking to a re-upload that gives them none of that be sort of bad?
YouTube ad revenue isn't as high as you'd think. A very significant part of their income comes from in-video sponsors and merchandise sales.
While they can't publish it themselves, this at least achieves the goal of the information being spread, along with the knowledge that it was their investigative team that did the work in the first place.
But yes, once they reedit and republish themselves (or manage some sort of appeal and republish as-is) then of course linking to that (and a smaller cut of the parts they've had to change because Bloomberg were litigious arseholes, if only to highlight that their copyright claim here is somewhat ridiculous) would be much better.
They do not "need" it since that movie was crowd funded with over 400k anyways and AdSense are pittance in comparison. They also have indirectly promoted that reupload.
Once something is uploaded to the internet, it isn't easy to take down.
Ask Beyonce.
Can you explain further about Beyoncé? Do you mean the elevator video where her sister attacks Jay Z?
Or Barbra Streisand
Wendell at Level1Techs often goes more in-depth on the software testing and datacenter use-case analysis through partnerships with friends who run lots of machines in datacenters.
GN is unique in paying for silicon-level analysis of failures.
der8auer also contributes a lot to these stories.
I tend to wait for all 3 of their analyses, because each adds a different "hard-won" perspective.
The small coolers they used are not recommended by Noctua for the 9950X. Noctua recommends only bigger coolers for the 9950X, which dissipates 200 W continuously on a workload like theirs (still much less than the 250+ W dissipated in similar conditions by competing Intel CPUs).
Despite this, the over-temperature protection should have protected the CPUs and prevented any kind of damage like this.
Besides the system that continuously varies the clock frequency to keep the CPU within its current and power consumption limits, there is a second protection that temporarily stops the clock when a temperature threshold is exceeded. However, the internal temperature sensors of the CPUs are not accurate, so the over-temperature protection may begin to act only at a temperature that is already too high.
So these failures appear to have been caused by a combination of factors: not using coolers appropriate for a 200 W CPU, AMD advertising a 200 W CPU as a 170 W CPU (fooling naive customers into believing that smaller coolers are acceptable), and either some malfunction of the over-temperature protection in these CPUs or a degradation problem that occurs even within the nominal temperature range, at its upper end.
He's a bit sensationalist, yes, but I am thankful that he saved us from buying affected Intel CPUs.
He's a "student" and friend of late Gordon Mah Ung. He's carrying his torch forward.
This was Gordon's style, and Steve is continuing it. He has the courage to hit Bloomberg offices with a cameraman, so I don't think his words ring hollow.
We need that kind of in-your-face, no-punches-pulled reporting as a counterweight to the "measured professionals".
Absolutely - this is the sort of direct citizen journalism I expect (sort of hope?) we'll see more of as traditional investigative journalism dies its slow death.
Yes. When he's right, he's right. However the main issue I have with GN is how Steve tends to go full Leeroy Jenkins pitchforks and torches for 9 out of every 5 actual scandals in the tech industry.
When it comes to interpersonal drama, the "Shoot first, ask questions later" style of reporting is terrible. However, for consumer advocacy it's basically the opposite, especially because in most cases it's easy for companies to turn the narrative around by simply handling the issue well. It's almost more about how they handle it than the actual issue in many cases.
I felt the same way, but over time I have come to respect those with the Crusader personality archetype; we need these people to do their thing, and they need us to balance them out.
Not sure if he's sensationalist or just doing great reporting. I take him as one of the last good tech journalists on the platform.
GN wasn't the first to break the story that the 13th/14th gen was defective. The thousands and thousands of users experiencing the issues collectively noticed pretty quickly. If anything, there was a period where he was saying "We've talked to Intel but we won't say anything yet until they do."
Dude has a cult of personality going and I've learned not to question it.
In general, PC enthusiasts have always treated these corporations a bit like sports teams.
The only real problem with GN is Steve is a bit of an egotist when it comes to content creators who do less technical analysis, like LTT or Jayz.
He never really got over the stuff with Linus and doubled down on stupid things. I think they both have a great place in the tech scene, and LTT's recent videos have been a lot better in quality and research than yesteryear's.
AMD has failed to be reliable with its Zen 4 and Zen 5 consumer CPUs, at the same time that Intel did the same with their higher-end 13th and 14th gen K-series CPUs.
AMD is somewhat worse than Intel as their DDR5 memory bus is very "twitchy" making it hard to get the highest DDR5 timings, especially with multiple DIMMs per channel.
I don't think it's reasonable to call memory timing tweaking stability issues worse than a cpu dying from heat under normal usage.
All you really need to see is the picture of the CPU with thermal paste only on one half. Thermal throttling is tuned to work when there is (1) a sufficient heatsink (theirs was significantly below requirements) and (2) a correct installation, so that its downclocking triggers fire with the correct timing. This is just another instance of a ridiculous PEBCAK error.
This is by design. On AM5 processors, there's a hotspot on the lower half of the processor where the dies that contain the CPU cores are located. Noctua recommends that AM5 users mount their coolers shifted towards the lower side of the processor for optimal cooling performance, see https://noctua.at/en/offset-am5-mounting-technical-backgroun... . You may have missed the paragraph in the article that explicitly points this out:
> We use a Noctua cooling solution for both systems. For the 1st system, we mounted the heat sink centred. For the 2nd system, we followed Noctua's advice of mounting things offset towards what they claim to be the hotter side of the CPU. Below is a picture of the 2nd system without the heat sink which shows that offset. Note the brackets and their pins, those pins are where the heat sink's pressure gets centred. Also note how the thermal paste has been squeezed away from that part, but is quite thick towards the left.
While it is Noctua's advice, I don't think AMD supports that view, so it would seem correct to at least test the CPU the way the vendor recommends before drawing conclusions.
Clearly paste was squeezed out from the entire perimeter of the CPU. Offset mounting is used intentionally for this CPU.
Probably there's less paste remaining on the south end of the CPU because that's where the mounting force is greatest.
If anything, there's too much paste remaining on the center/north end of the CPU. Paste exists simply to bridge the roughness of the two metal surfaces, too much paste is a bad sign.
My guess is that the MB was oriented vertically and that big heavy heat sink with the large lever arm pulled it away from the center and north side of the CPU.
IMO, the CPU is still responsible for managing its power usage to live a long life. The only effect of an imperfect thermal solution ought to be proportionally reduced performance.
I'm not as sure about AMD CPUs (and they were known for having far worse overheat behaviour back in the early 2000s) but there are plenty of stories of Intel CPUs working for many years, sitting at the thermal limits, with the (stock) heatsink not even in contact, thanks to their cheap push-pin retention mechanism.
They don't say what temperature the CPU was reporting which seems like an odd omission. Whatever the specs of your cooler etc check the temperature it's actually running at. Go by what the CPU is saying! I've got the older 3950x, and the first one died after a few months (still in warranty) with a cooler in spec, but it would go into the 90s at full load just doing big builds. I replaced the heatsink with a basic watercooler when the replacement chip arrived and it's running at least 20c cooler at full load.
Maybe they didn't have anything logging the temperature. They didn't expect it to die after all.
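For anyone who wants that logging on Linux: AMD CPUs expose the die temperature ("Tctl") through the k10temp hwmon driver, so a tiny poller is enough. A minimal sketch; the hwmon index below is an assumption and varies per machine:

```c
/* Minimal Tctl logger via Linux hwmon (k10temp on AMD).
 * The hwmon index is machine-specific; "hwmon2" here is a guess.
 * Find yours with: grep . /sys/class/hwmon/hwmon*/name
 * Build: gcc tctl_log.c -o tctl_log */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* temp1_input reports millidegrees Celsius */
        FILE *f = fopen("/sys/class/hwmon/hwmon2/temp1_input", "r");
        if (!f) { perror("fopen"); return 1; }
        long mdeg;
        if (fscanf(f, "%ld", &mdeg) == 1)
            printf("Tctl: %.1f C\n", mdeg / 1000.0);
        fclose(f);
        sleep(1);
    }
}
```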
> The so-called TDP of the Ryzen 9950X is 170W. The used heat sinks are specified to dissipate 165W, so that seems tight.
TDP numbers are completely made up. They don’t correspond to watts of heat, or of anything at all! They’re just a marketing number. You can't use them to choose the right cooling system at all.
https://gamersnexus.net/guides/3525-amd-ryzen-tdp-explained-...
When I see the term TDP, I remember what I read in the "Thermal Design Document" of the Intel Core2Quad Q6600 and the family it belongs to:
> The thermal solution bundled with the CPUs is not designed to handle the thermal output when all the cores are utilized 100%. For that kind of load, a different thermal solution is strongly recommended (paraphrased).
I never used the stock cooler bundled with the processor, but what kind of dark joke is this?
Most states of “100% utilization” as you’d see in `top` are not 100% thermal output or even close. Cores waiting for memory accesses count as utilized in the former sense but will not produce as much heat as one that is actually using the ALU etc. That’s why special make-work like Prime95 is used for stress testing overclocking/thermals: it will saturate the cores with enough unblocked arithmetic work to generate more heat than having 1000 browser tabs open does.
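To make that concrete, here is an illustrative C sketch (not a calibrated benchmark): both loops show a core as 100% busy in top, but the first saturates the multiplier while the second mostly stalls on DRAM. All constants are arbitrary.

```c
/* Two "100% CPU" loops with very different power draw. */
#include <stdint.h>
#include <stdlib.h>

#define N (1u << 24)   /* 16M entries, ~128 MB: far larger than L3 */

/* Hot loop: back-to-back integer multiplies, no memory traffic. */
uint64_t alu_bound(uint64_t iters)
{
    uint64_t x = 88172645463325252ULL;
    while (iters--)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

/* "Cool" loop: dependent loads that miss cache; the ALUs sit idle. */
uint64_t memory_bound(const uint64_t *chain, uint64_t iters)
{
    uint64_t idx = 0;
    while (iters--)
        idx = chain[idx];
    return idx;
}

int main(void)
{
    uint64_t *chain = malloc(N * sizeof *chain);
    for (uint64_t i = 0; i < N; i++)        /* scattered affine walk so  */
        chain[i] = (i * 2654435761u + 1) % N; /* prefetching can't help  */
    volatile uint64_t sink = alu_bound(1u << 30);
    sink += memory_bound(chain, 1u << 26);
    free(chain);
    return 0;
}
```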
Man that was a beast of a CPU back in the day.
The Conroe Intel era was amazing for the time.
That was such a fun time to be into hardware. For years Intel had the money and relationships to keep the Pentium 4 everywhere even though AMD had the better product. The P4 might edge ahead in video rendering but the Athlon would win overall and use less power.
Then Conroe launched and the balance shifted. Even the cheapest Core2Duo chips were competitive against the best P4s and the high-end C2Ds rivaled or beat AMD. https://web.archive.org/web/20100909205130/http://www.anandt...
AND those chips overclocked to the moon. I got my E6420 to 3.2 GHz (from 2.133 GHz) just by upping the multiplier. A quick search makes me think my chip wasn't even that great.
I always used the stock cooler, because it's quiet and nothing uses the cpu to its fullest :).
You are correct. In fact these guys measured a maximum socket power consumption of 240 watts using a 9950X at stock settings, running Prime95. That's far above the "170 watt" TDP:
https://hwbusters.com/cpu/amd-ryzen-9-9950x-cpu-review-perfo...
I don’t understand this argument. If the CPU dissipated an equal number of watts of heat energy as it consumed from the wall, there wouldn’t be any energy left to do actual useful work. Isn’t the extra 100W accounted for by things like changing the state of flip-flops? In other words, mustn’t one consider the entropy reduction of the system as an energy sink?
I think the numbers are more like <1W used in actual information processing, >239W lost to heat. Information and the transformation of it does have some inherent energy cost. But it is very, very small. And you end up getting that back as heat somewhere else down the line anyways.
Clocking and changing register states requires charging and discharging the gate capacitance of a bunch of MOSFET transistors. The current that results from moving all that charge around encounters resistance, which converts it to heat. Silicon is only a "semi" conductor after all.
You are correct that there is energy bound in the information stored in the chip. But last I checked, our most efficient chips (e.g., using reversible computing to avoid wasting that energy) are still orders of magnitude less efficient than those theoretical limits.
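The first-order model for that switching loss is the usual CMOS dynamic-power formula; the plug-in numbers below are purely illustrative assumptions, not measurements of any real chip:

$$P_{\mathrm{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f \approx 0.1 \times 10^{-7}\ \mathrm{F} \times (1.2\ \mathrm{V})^{2} \times 5\times10^{9}\ \mathrm{Hz} \approx 72\ \mathrm{W}$$

where $\alpha$ is the fraction of the total switched capacitance $C$ toggled each cycle. The point is that essentially all of it ends up as heat.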
Thank you for encouraging me to go on this educational adventure. I have now heard of Landauer’s principle, which says each bit of information releases 2.9e-21 joules when destroyed: https://en.wikipedia.org/wiki/Landauer%27s_principle
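As a sanity check, that figure is just $k_B T \ln 2$ evaluated at room temperature:

$$E = k_B T \ln 2 \approx (1.38\times10^{-23}\ \mathrm{J/K})(300\ \mathrm{K})(0.693) \approx 2.9\times10^{-21}\ \mathrm{J}$$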
Nope. Remember that you cannot destroy energy. The energy you use to flip the flip flop still exists, only now it’s just disordered waste heat instead of electricity.
To be a bit flippant, you can absolutely destroy energy by creating some mass.
Then again, most of us do not have a particle accelerator nearby looking for the Higgs boson.
>> The energy you use to flip the flip flop
> To be a bit flippant
I see what you did here :)
Energy cannot be created or destroyed, but it can enter and leave an open system. When I lift a 10kg box 1 meter in the air, I don't raise its temperature at all, and I only raise mine a tiny bit, yet I have still done work on the box and therefore have imparted energy to it. The energy came from food I ate earlier, and was ultimately stored in the box as gravitational potential energy.
Is this not analogous to storing energy in the EM fields within the CPU?
Yes, but only briefly. When you study the thermodynamics of information you’ll discover that it’s actually erasing information that has a cost. Every time the CPU stores a value in a register it erases the previous value, using up energy. In fact, every individual transistor has to erase the previous state on basically every clock cycle.
Curiously there is a minimum cost to erase a single bit that no system can go below. It's extremely small, billions of times smaller than the amount of energy our CPUs use every time they erase a bit, but it exists. Look up Landauer's Limit. There is a similar limit on the maximum amount of information stored in a system, which is proportional to the surface area of the sphere that the information fits inside. Exceed that limit and you'll form a black hole. We're nowhere near that limit yet either.
>In fact, every individual transistor has to erase the previous state on basically every clock cycle.
This is incorrect in both directions.
Only transistors whose inputs are changing have to discharge their capacitance.
This means that if the inputs don't change nothing happens, but if the inputs change then the changes propagate through the circuit to the next flip flop, possibly creating a cascade of changes.
Consider this pathological scenario: The first input changes, then a delay happens, then the second input changes so that the output remains the same. This is known as a "glitch". Even though the output hasn't changed, the downstream transistors see their input switch twice. Glitches propagate through transistors and not only that, if another unfortunate timing event happens, you can end up with accumulating multiple glitches. A single transistor may switch multiple times in a clock cycle.
Switching transistors costs energy, which means you end up with "parasitic" power consumption that doesn't contribute to the calculated output.
CPUs don't store nontrivial amounts of energy, and even if storing a 1 was a significantly higher energy level than a 0 (or vice versa) there's no plausible workload that would be causing the CPU to switch significantly more 0s to 1s than 1s to 0s (or vice versa).
I have a 65W TDP CPU, and the difference in power draw (measured at the outlet) from idle to full CPU load is over 100W; it seems to just raise the clock until it hits 95C, so if I limit the CPU fan's top speed, the power draw goes down.
Yep. Modern CPUs continually adjust their clock multiplier based on what their temperature is doing, plus a few timers. If you have a better cooler then you’ll get more performance out of the same CPU, but at the cost of drawing more power and producing more heat.
Wow, I can't believe how BS this TDP is! I feel like a total idiot! I've always assumed it's sorta-kinda a tight upper bound on power consumption, perhaps with some allowance for "imperfections" in the dissipation properties of the CPU, and that I shouldn't sweat the details.
Couldn't this count as false/misleading advertising though?
It's thermal design power, ie. it's the power that it's designed for, not absolute max.
No, they don’t design the chip with these numbers in mind. The marketing department picks the number they want based on how they want customers to think about the chip, and which competitors they want you to compare it against. They just plug in whatever numbers are needed into the formula so that the number comes out how they want it.
>The marketing department picks the number they want based on how they want customers to think about the chip, and which competitors they want you to compare it against. They just plug in whatever numbers are needed into the formula so that the number comes out how they want it.
Are you just describing product segmentation? ie. how the ryzen 5700x and 5800x are basically the same chip, down to the number of enabled cores, except for clocks and power limit ("TDP")?
Yep. The 5800X is a higher bin specifically because it can clock higher than the ones in the 5700X bin. That certainly makes them draw more power, so they give them a higher TDP number too. But the TDP doesn't have anything to do with how much power the CPU will draw or how much heat it will generate in practice. Those numbers vary quite a lot; the CPU continuously adjusts its own frequency multiplier based on its own measured temperature, meaning it'll draw more power if you cool it better.
>But the TDP doesn't have anything to do with how much power the CPU will draw or how much heat it will generate in practice. Those numbers vary quite a lot; the CPU continuously adjusts its own frequency multiplier based on its own measured temperature, meaning it'll draw more power if you cool it better.
I don't get it, are you referring to the phenomenon that different workloads have different power consumption (eg. a bunch of AVX512 floating point operations vs a bunch of NOPs), therefore TDP is totally made up? I agree that there's a lot of factors that impact power usage, and CPUs aren't like a space heater where if you let it run at full blast it'll always consume the TDP specified, but that doesn't mean TDP numbers are made up. They still vaguely approximate power usage under some synthetic test conditions, or at the very least is vaguely correlated to some limit of the CPU (eg. PPT limit on AMD platforms).
No, the TDP number doesn’t even vaguely approximate anything. You can’t use the number to predict anything, or to plan, or to estimate your electric bill, or anything like that.
Apparently that’s not actually true?
Which part?
All of it.
That seems a little too cynical. It matters how a customer might use a chip, such as the type of cooling that would be expected in a typical system using that model, and that's informed by the advertised specifications. Base clocks and the amount of SRAM also figure into TDP. No doubt there are completely arbitrary aspects to TDP driven purely by profit-focused market segmentation, but it's not just that.
That said, it's definitely very frustrating as someone who does the occasional server build. Not only does TDP not reflect minimum or maximum power draw for a CPU package itself, but it's also completely divorced from power draw for the chipset(s), NICs, BMCs (ugh), etc, not to mention how the vendor BIOS/firmware throttles everything, and so TDP can be wildly different from power draw at the outlet. The past 5 years have kind of sucked for homelab builders. The Xeon E3 years were probably peak CPU and full-system power efficiency when accounting for long idle times. Can you get there with modern AMD and Intel chips? Maybe. Depends on who you ask and when. Even with identical CPUs, differences in motherboard vendor, BIOS settings, and even kernel can result in drastically different (as in 2-3x) reported idle power draw.
No, clock speed and cache have nothing to do with TDP. AMD uses a simple formula to calculate TDP. It is the temperature of the IHS minus the air temperature measured at the CPU cooler's intake fan, divided by a conversion factor in °C/W.
But they don’t use real temperatures from real systems. They just make up a different set of temperatures for each CPU that they sell, so that the TDP comes out to the number that they want. The formula doesn’t even mean anything, in real physical terms.
I agree that predicting power usage is far more difficult than it should be. The real power usage of the CPU is dependent on the temperature too, since the colder you can make the CPU the more power it will voluntarily use (it just raises the clock multiplier until it measures the temperature of the CPU rising without leveling off). And as you said there are a bunch of other factors as well.
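Written out, the formula described above (and in the GN article linked earlier) is:

$$\mathrm{TDP} = \frac{T_{\mathrm{case}} - T_{\mathrm{ambient}}}{\theta_{ca}}$$

where $\theta_{ca}$ is the heatsink-to-ambient thermal resistance in °C/W. Plugging in the inputs GN reportedly cites for the 2700X, $(61.8 - 42)/0.189 \approx 105\ \mathrm{W}$, exactly the advertised TDP. Pick different "assumed" temperatures and the formula yields whatever target number you like.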
Huh, I always thought it was “total dissipated power”. Like you’d use to spec a power supply.
> Couldn't this count as false/misleading advertizing though?
For what, exactly? TDP stands for "thermal design power" - nothing in that means peak power or most power. It stopped being meaningful when CPUs learned to vary clock speeds and turbo boost - what is the thermal design target at that point, exactly? Sustained power virus load?
It's pretty insane to see someone say something like: "TDP is about thermal watts, not electrical watts. These are not the same." Watts are watts.
But yeah, TDP means nothing. If you stick on plenty of cooling and run the right motherboard revision, your "TDP" can be whatever you want it to be until the thing melts.
"TDP is about average watts, not peak watts" would be an honest way of saying it.
But in the end that's still not actually true in many modern desktop chips. You can take a 65W part, and with a "stock" motherboard firmware, good cooling, and the right workload end up averaging way more than 65W. Or if you have it in a hot room it just might end up using less than 65W.
TDP is more of a rough idea of how much power the manufacturer wanted to classify the part as. It ultimately only loosely relates to the actual heat or electrical usage in practice.
Neither the room temperature nor the precise way the paste was applied should matter here. Modern CPUs have very advanced dynamic voltage and frequency scaling (DVFS), which takes input from several sensors, including temperature.
These big x86 CPUs in stock configuration can throttle down to speeds where they can function with entirely passive cooling, so even if the cooler was improperly mounted, they'd only throttle.
All that to say, if GMP is causing the CPU to fry itself, something went very wrong, and it is not user error or the room being too hot.
This was my first question as well. I thought it had been a long, long time since you could fry a CPU by taking away the heatsink.
As in... what, AMD K6 / early Pentium 4 days was the last time I remember hearing about cpu cooler failing and frying a cpu?
Athlon era when AMD had no IHS but Intel had one. Intel also had thermal controls that AMD lacked.
I once worked on a piece of equipment that was running awful slow. The CPU was just not budging from its base clock of 700MHz. As I was removing the stock Intel cooler, I noticed it wasn't seated fully. Once I removed it and looked, I saw a perfectly clean CPU with no residue. I looked at the HSF; the original thermal paste was in pristine condition.
I remounted the HSF and it worked great. It ran 100% throttled for seven years before I touched it.
This infamous video: https://www.youtube.com/watch?v=06MYYB9bl70
It was some time around then. I remember AMD being late to it vs Intel.
That was SpeedStep? By the time AMD got to it it was just sort of expected and didn't have a fancy name, as far as I know.
Or maybe I'm thinking of something else entirely…
Yes, this is the point - software should never be able to physically damage the hardware it is on.
If it can, then the hardware is to blame.
As a FW engineer, my software has released the magic smoke a lot.
If the throttling is not stable, it could increase stress on the part by creating a bunch of transient but large thermal cycles through the chip. It would need to have some kind of exponential backoff on the throttle so it doesn't immediately try to raise the frequencies when the temperature slightly dips.
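A toy sketch of that backoff idea in C (purely hypothetical logic for illustration; real CPUs implement this in hardware/firmware, and all constants here are made up):

```c
/* Toy throttle controller with exponential backoff: cut clocks on a
 * thermal trip, then wait exponentially longer before re-raising, so
 * a brief dip in temperature doesn't cause immediate oscillation. */
typedef struct {
    int freq_mhz;
    int backoff_ms;   /* how long to hold back after a trip       */
    int cooldown_ms;  /* time remaining before we may raise again */
} throttle_t;

void throttle_step(throttle_t *t, double temp_c, int dt_ms)
{
    const double TRIP_C = 95.0;
    const int MIN_FREQ = 400, MAX_FREQ = 5700;

    if (temp_c >= TRIP_C) {
        if (t->freq_mhz > MIN_FREQ) t->freq_mhz -= 200;   /* cut hard */
        t->backoff_ms  = t->backoff_ms ? t->backoff_ms * 2 : 100;
        t->cooldown_ms = t->backoff_ms;
    } else if ((t->cooldown_ms -= dt_ms) <= 0) {
        if (t->freq_mhz < MAX_FREQ) t->freq_mhz += 100;   /* raise slowly */
        t->cooldown_ms = 0;
        /* a fuller version would only reset backoff_ms after a
         * sustained calm period; kept minimal here */
    }
}
```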
I would be interested to see if they had the same result with PTM7950 thermal material instead of paste. I've seen significantly better temps with these modern phase-change compounds, and they essentially eliminate application errors.
A quick search on the NH-U9S shows it's a compact cooler for small systems, rated for up to 140 W (see e.g. [1]).
The 9950X's TDP (Thermal Design Power) is 170 W, its default socket power is 200 W [2], and with PBO (Precision Boost Overdrive) enabled it's been reported to hit 235 W [3].
[1] https://www.overclockersclub.com/reviews/noctua_nh_u9s_cpu_c...
[2] https://hwbusters.com/cpu/amd-ryzen-9-9950x-cpu-review-perfo...
[3] https://www.tomshardware.com/pc-components/cpus/amd-ryzen-9-...
Noctua does not use TDP for their heatsinks and instead have CPU compatibility charts. They say it's fine, with "medium turbo/overclocking headroom". https://ncc.noctua.at/cpus/model/AMD-Ryzen-9-9950X-1831
That’s a good catch, but don’t modern CPUs thermally throttle, rather than risk damage? Not that you should rely on this with an underpowered cooling solution but I would expect worse performance, not a fried chip.
There's not really a lot it can do rapidly enough if there's only thermal paste on half the CPU.
It sounds like the user likely did the opposite of the "offset seating" of the heatsink that Noctua recommended.
Most likely it's the motherboard. ASRock is getting nailed right now for unstable XMP and CPU voltages (it's recommended to undervolt a little just in case).
The Asus Prime B650M motherboards they are using aren't exactly high end.
Yikes, this is the cheapest motherboard and failed Hardware Unboxed VRM tests. https://youtu.be/DTFUa60ozKY?t=744
Conversely, the ASRocks actually did pretty well in that test...
My friend just had an ASRock board cook his AMD CPU. Apparently a very common problem.
Can you link to a reputable source for what settings I should use on my asrock motherboard? I'd like to avoid this.
No more than 1.2 volts on vsoc... but YMMV.
"According to new details from Tech Yes City, the problem stems from the amperage (current) supplied to the processor under AMD's PBO technology. Precision Boost Overdrive employs an algorithm that dynamically adjusts clock speeds for peak performance, based on factors like temperature, power, current, and workload. The issue is reportedly confined to ASRock's high-end and mid-range boards, as they were tuned far too aggressively for Ryzen 9000 CPUs."
https://www.tomshardware.com/pc-components/cpus/asrock-attri...
And the close-up photos of the socket with pins are missing.
Looking at the AM5 pinout[0], it looks like those pins are VDDCR and VSS. There might be a little bit of PCIe sprinkled in towards the outer edges, but I'm not 100% on the orientation of this pinout vs the orientation of the CPU. I don't know anything about electricity so I've got nothing else to add.
[0] https://upload.wikimedia.org/wikipedia/commons/2/2d/Socket_A...
This is a nice guess, but it's not obvious that the damaged silicon area is closely connected to the pins in that area.
Isn't almost every other pin going to be power/ground on a high-power chip like this? On both the package and the die.
I don't know about GMP, but I recently built a PC with a 9950X3D. As part of initial testing, I ran Prime95 for 48 hours. Everything ran stable, but I noticed that part of the tests, I think it was FFT or something like that, caused an incredibly sharp increase in temp. We are talking a 60C average in the rest of the test vs an immediate (less than 5 seconds) 95+ degrees when that FFT thingie started. It was very weird.
That's when I discovered the actually ancient term "power virus". Anyway, after talking to different people I dismissed this weird behavior and moved on.
Reading this makes me worry I actually burned the mobo in that testing.
Different use patterns will result in different temperatures. Very tight math loops (no memory/IO wait) will lead to higher temperatures than something that relies on L2/3 cache or main memory, even though they'll both report "100% CPU use" and probably use similar amounts of power. And different operations will produce heat in different areas of the die; depending on physical layout, some operations might generate heat in a tiny cluster, whereas others might generate heat in larger, spread-out areas. Even though both of those cases might use the same amount of power and generate the same amount of heat, the temperatures will be drastically different due to the heat concentration.
IIRC the FFT step uses AVX, and on Zen 5 that'll be AVX-512. It should keep 100% of the required data in the L1 caches, so you're keeping the AVX units busy literally 100% of the time if things are working right. The rest of the core will be cold/inactive, so you're dumping an entire core's worth of power into a teeny tiny ALU, which is gonna result in high temps. Most (all?) processors downclock under heavy AVX load, sometimes by as much as 1GHz (compared to max boost), because a) the crazy high temperatures result in more instability at higher frequencies, and b) if the clocks were kept high, temperatures would get even higher.
Try LINPACK, it's even more stressful than Prime95.
Could be the power supply and load profile?
I've heard some really wild noises coming out of my Zen 4 machine when I've had all cores loaded up with what is best described as "choppy" workloads, where we repeatedly do something like a parallel.foreach into a single-threaded hot path of equal or lesser duration, as fast as possible. I've never had the machine survive this kind of workload for more than 48 hours without some kind of BSOD. I've not actually killed a CPU yet though.
Is that, like, an intentional stress-test for the hardware that you’ve come up with?
No. It is just how the algorithms play out:
1. Evaluate population of candidates in parallel
2. Perform ranking, mutation, crossover, and objective selection in serial
3. Go to 1.
I can very accurately control the frequency of the audible PWM noise by adjusting the population size.
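Roughly, in C with OpenMP (the original code is .NET Parallel.ForEach; the evaluate() and rank_mutate_crossover() bodies below are hypothetical stand-ins just to make the sketch runnable):

```c
/* Shape of the "choppy" workload described above: an all-cores burst
 * followed by a single-threaded phase of similar duration, repeated
 * as fast as possible. Package power swings hard at the generation
 * rate, which is what makes the VRMs audibly sing.
 * Build: gcc -fopenmp ga.c -lm */
#include <math.h>

#define POP 4096
static double fitness[POP];

static double evaluate(int i)          /* hypothetical: burns ALU time */
{
    double x = i * 0.001;
    for (int k = 0; k < 200000; k++)
        x = cos(x) + 1e-9;
    return x;
}

static void rank_mutate_crossover(double *f, int n)  /* hypothetical */
{
    for (int i = 0; i < n; i++)        /* serial single-threaded phase */
        f[i] = 0.5 * f[(i * 31) % n] + 0.5 * f[i];
}

int main(void)
{
    for (int g = 0; g < 100; g++) {
        #pragma omp parallel for       /* 1. evaluate population in parallel */
        for (int i = 0; i < POP; i++)
            fitness[i] = evaluate(i);

        rank_mutate_crossover(fitness, POP);  /* 2. serial hot path */
    }                                          /* 3. go to 1 */
    return 0;
}
```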
> I've never had the machine survive this kind of workload for more than 48 hours without some kind of BSOD.
Then you shouldn't trust the results of your work either, as that's indicative of a CPU that's producing incorrect results. I suggest lowering the frequency or even undervolting if necessary until you get a stable system.
...and yes, wildly fluctuating power consumption is even more challenging than steady-state high power, since the VRMs have to react precisely and not overshoot or undershoot, or even worse, hit a resonance point. LINPACK, one of the most demanding stress tests and benchmarks, is known for causing crashes on unstable systems not when it starts each round, but when it stops.
"We suspect that GMP's extremely tight loops around MULX make the Zen 5 cores use much more power than specified, making cooling solutions inadequate."
I feel like if this was heat related, the overall CPU temperature should still somewhat slowly creep up, thereby giving everything enough time for thermal throttling. But their discoloration sure looks like a thermal issue, so I wonder why the safety features of the CPU didn't catch this...
I'm guessing the temperature could increase quite fast (milliseconds or less) in heavy duty areas, especially when going scalar-to-dense-vector operations.
My best understanding of the AVX-512 'power license' debacle on Intel CPUs was that the processor was actually watching the instruction stream and computing heuristics to lower the core frequency before executing AVX-512 or dense-AVX2 instructions. I guessed they knew, or worried, that even a short large-vector stint would fry stuff...
Apparently voltage and thermal sensors have vastly improved, and the crazy swings in NVIDIA GPUs' clocks seem to agree with this :-)
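For reference, the GMP inner loops the article points at (mpn_mul_1 and friends) are hand-written assembly built around MULX/ADX; a rough C rendition of the shape of such a loop (illustrative only, not GMP's actual code):

```c
/* One limb-by-scalar multiply pass, the building block of bignum
 * multiplication: a dependent chain of 64x64->128-bit multiplies with
 * carry propagation. On x86-64, GCC/Clang lower the __int128 multiply
 * to MUL/MULX, keeping the multiplier unit busy nearly every cycle. */
#include <stdint.h>

uint64_t mul_1(uint64_t *dst, const uint64_t *src, long n, uint64_t v)
{
    uint64_t carry = 0;
    for (long i = 0; i < n; i++) {
        unsigned __int128 p = (unsigned __int128)src[i] * v + carry;
        dst[i] = (uint64_t)p;         /* low limb out               */
        carry  = (uint64_t)(p >> 64); /* high limb carried forward  */
    }
    return carry;
}
```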
They said it took months for each CPU to fail. Both systems used the same inadequate heatsink/fan. Then there are also the lower-end motherboards (they are not "top-quality", the brand means nothing) and the minuscule 450W power supply used in the initial configuration, which are confusingly paired with a 16-core CPU and 64/96GB of RAM.
It doesn't strike me as odd that such configurations eventually failed after running an extremely power-heavy load continuously for months.
Are we talking "slowly" in a relative sense? A silicon die of this size has a thermal mass (guessing) around 10⁻³ J/K but a power dissipation rate over 200W, so it can rise from room temperature to junction temperature limits almost instantly.
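Working it out (and taking a more conservative ~0.1 J/K for a bare ~100 mm² die, since 10⁻³ J/K is probably an underestimate), the conclusion is the same:

$$\frac{dT}{dt} = \frac{P}{C} \approx \frac{200\ \mathrm{W}}{0.1\ \mathrm{J/K}} = 2000\ \mathrm{K/s}$$

The IHS and heatsink add thermal mass, but only after the heat has spread to them; a die hotspot can outrun a sensor-driven control loop, so "slowly" can only ever be relative.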
People without a background in electronics don't appreciate what modern CPUs and GPUs are doing: the amount of current flowing through these devices is just mind blowing. With adequate cooling, a Ryzen 9 9950X is handling somewhere in the neighborhood of 150-200 amps under high load.
I initially scoffed at the 150-200 amps. But I know core voltage is usually in the neighbourhood of 1V so to draw 200W, you really would have to basically be moving 200A of current. That's wild.
Yup. P=IV is really surprising when you get to high power parts at low core voltages. Needless to say, you need lots of transistors and phases on voltage conversion, and you need lots and lots of plane area.
(And,... 200A is the average when dissipating 200W. So how high are the switching currents? ;)
AMD's desktop CPUs are still running at a bit more than 1V; 1.3-1.4V is what you'll see at the high end of the clock speed range. But power draw can easily be in the 250–300W range if you turn on the "PBO" automatic overclocking mode, so 200A is not really the upper bound.
And you're pushing that many amps across a piece of silicon roughly the size of your thumbnail all said and done.
A spot welder, basically.
What's really wild is with all the power scaling features the regulators have to step from zero to hundreds of amps in microseconds with very little overshoot. The power design for these modern systems is demanding.
The /r/asrock reddit is full of such stories. In my case I've burned two server motherboards with those watercooled 9950X chips. The CPU is still fine though. It's happening with all H100s at 100%. Too much power draw, we assume.
We don't overclock or overvolt or play other teen games with our hardware.
But doesn't the hardware "overclock" and "overvolt" automatically these days?
This reminds me of the Intel CPUs with similar problems a year ago, and AFAIK it was caused by excessive voltage: https://news.ycombinator.com/item?id=41039708
> But doesn't the hardware "overclock" and "overvolt" automatically these days?
If it's done by the manufacturer it's within spec of course. As designed.
The overclock game was all about running stuff out of spec and getting more performance out of it than it was supposed to create.
Also "play other teen games" should not damage your cpu.
Ehh, I've seen some of these teen games involve complete immersion in oil or even water (can be done, as distilled water doesn't conduct, but if only a pinch of salt gets into it...). Or even more extreme things like liquid nitrogen. This can have all sorts of weird effects on CPUs not designed for that kind of stuff (e.g. thermal contraction under low load to temperatures way below spec, or cracking due to extreme thermal gradients).
I have a Ryzen and it ran fine until one day, after some load, it wouldn't run at all with the virtualisation options turned on anymore.
Having read all I can on the issue, it seems to have been largely ignored by AMD.
If it's some kind of thermal runaway issue, that would not surprise me.
I had a bios reset itself to defaults before, and some AMD boards don't have all the required virtualization options on by default.
I only realized this had happened because every time I upgraded firmware, I always had to go back and set XMP settings, one or two other things, and then the CPU virtualization option.
Same with Secure Boot for me. Kinda makes sense that a BIOS upgrade would wipe the config. It's that, or manage schema migrations.
> Did the CPUs die of heat stroke? Modern CPUs measure their temperature and clock down if they get too hot, don't they?
They do, but the thermal sensors are spread out a bit. It could be that there's a sudden spot heating happening that's not noticed by one of the sensors in time.
How is that possible? Even if the chip did not get enough cooling it should have been just throttled heavily.
Modern silicon is so dense and heats up so fast that throttling is easier said than done. I think they have to model and predict the thermals ahead of time nowadays, because by the time they could react to a temp sensor alone, the chip might already be toast.
Maybe the throttling circuitry/firmware simply doesn't have enough time to react.
Enthusiast-oriented motherboards often enable Precision Boost Overdrive by default, causing higher power and temperature limits for longer periods. To run the CPU at "stock" you need to go in and disable that. Their default Load Line Calibration might be aggressive as well.
This isn't good. Then again, the amount of power going into these CPUs is way too high.
Take the AlphaServer DS25. It has wires going from the power supply harness to the motherboard that are thick enough to jump a car. The traces on the motherboard are so thick that pictures of the light reflecting off of them are nothing like a modern motherboard. The two CPUs take 64 watts each.
Now we have AMD CPUs that can take 170 watts? That's high, but if that's what the motherboards are supposed to be able to deliver, then the pins, socket and pads should have no problem with that.
Where's AMD's testing? Have they learned nothing watching Intel (almost literally) melt down?
> Take the AlphaServer DS25. It has wires going from the power supply harness to the motherboard that are thick enough to jump a car. The traces on the motherboard are so thick that pictures of the light reflecting off of them are nothing like a modern motherboard. The two CPUs take 64 watts each.
I am not involved in power VRM design for modern motherboards. But I can imagine they do some smart stuff, like compensating for transport losses by increasing the voltage somewhat at the VRM so the designed voltage still arrives at the CPU. Of course this will cause some heating in the motherboard, but it's probably easily controlled.
In the day of the Alpha that kind of thing would have been science fiction, so they had no alternative but to minimise losses. You can't use a static overvoltage because then when the load drops the voltage coming out will be too high (transport loss depends on current).
Also, in those days copper cost a fraction of what it costs now so with any problem just doing 'moah copper' was an easy solution. Especially on server hardware like the Alpha with big markup.
And server hardware is always overengineered of course. Precisely to prevent long-term load problems like this.
My Ryzen CPU recently died too! wtf
ASRock motherboard?
Gigabyte
Zen5?
Not that it makes a huge difference, since they are supposed to downclock when hot, but what was the actual cooler being used? It doesn't say in the article. My guess is that it's air-cooled, being rated for only 165 W max, but air cooling is not recommended for most newer high-end CPUs.
Gradual damage is consistent with overheating. I've seen racks of servers do the same thing.
Overall, there is a continued challenge with CPU temperatures that requires much tighter tolerances in the thermal solution. Torque specs need to be followed, and verified as correctly met, in manufacturing.
No actual die temperature measurements? That would seem a lot more relevant than the ambient temperature.
Die temperature readings aren't particularly helpful these days with desktop parts that will (depending on the power management settings) more or less keep increasing the clock speed until they reach ~90°C and just stay there. Upgrading from a bad/undersized heatsink can easily have only a tiny effect on temperature but have the effect of significantly increasing clock speed and power.
> but have the effect of significantly increasing clock speed and power.
Ironically, if these failures are due to excessive automatic overvolting (like what happened with Intel's a year ago), worse cooling would cause the CPU to hit thermal limits and slow down before harmful voltages are reached. Conversely, giving the CPU great cooling will make it think it can go faster (and with more voltage) since it's still not at the limit, and it ends up going too far and killing itself.
Aren't they at least useful for ruling out any anomalies there? Like the die temp being 110°C constantly? Imho the die temperature is very important here, even if not interesting.
That looks like a combination of improperly mounting the heatsink and Noctua being wrong in their recommendation to offset it. I'd imagine cooling one side more makes sense for gaming, but my completely uneducated guess is that GMP works a different part of the CPU than gaming does.
They had failures with standard mounting and offset mounting.
Also, take a look at a delidded 9950; the two cpu chiplets are to one side, the i/o chiplet is in the middle, and the other side is a handful of passives. Offsetting the heatsink moves the center of the heatsink 7mm towards the chiplets (the socket is 40mm x 40mm), but there's still plenty of heatsink over the top of the i/o chiplet.
This article has some decent pictures of delidded processors https://www.tomshardware.com/pc-components/overclocking/deli...
This is what Zen5 looks like under the IHS: https://i.imgur.com/j85YUzX.jpeg
Everything is offset towards one side and the two CPU core clusters are way towards the edge, offset cooling makes sense regardless of usage.
I'd assume both GMP and any CPU intensive game just prefer the performance cores.
AMDs desktop chips don't have distinct P and E cores, they're all P cores. AMD do have an E core design but it's currently only used in mobile and server parts.
Gotcha. Apparently Intel's marketing's gotten to me. I haven't really been keeping up with this stuff, so whenever I read about P & E cores in the past, I think I just assumed that was a thing both Intel & AMD were doing, without considering the source material too closely.
AMD has definitely been moving in that direction, and arguably doing a better job of it than Intel. But for now, AMD's desktop parts are still built with the same CPU core chiplets as their server parts, and none of the server parts are using heterogenous cores yet (from AMD or Intel). At some point AMD could theoretically build a desktop processor from one Zen chiplet and one Zen-c chiplet, but there hasn't been a good reason to do that yet.
I wonder if the risk is mitigated if you turn off PBO and turn on Eco Mode?
I noticed the comments pointing out that TDP is a marketing number, and max power draw for this part can be higher. The cooling seems to have been inadequate.
A rule of thumb I use for cooling is, you can rarely have too much. You should over-engineer that aspect of your systems. That and the power supply.
I have a 7950x, with a water block capable of sinking up to 300W. Under heavy load, I hear the radiator fans spinning up, and I see the cpu temp hover around 90-93 C. That is ok, though cooler would be better. My next build (this one is 2 years old) will also use a water block, but with a higher flow rate, and a better radiator system. I like silent systems, though I don't like the magic smoke being released from components.
All other potential causes aside (including the likely most relevant one, motherboard companies exceeding recommended defaults for power delivery): running a cooling solution rated for less than the TDP (which is NOT the max power; max power tends to be about 30% higher than TDP on these) is frankly extremely dumb. I've seen x950 processors of every generation pull at least double that on extreme workloads. I think it speaks to them being a bit clueless that they did not manually lower the thermal limits. You can cut the power and thermal limits by wild amounts and barely lose 15% multicore performance.
What is gmp?
> What is GMP?
> The GNU Multiple Precision Arithmetic Library
> GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
Many languages use it to implement long integers. Under the hood, they just call GMP.
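For a feel for it, a minimal example using GMP's documented mpz integer interface:

```c
/* Minimal GMP example: compute 2^512 + 1 and print it.
 * Build with: gcc example.c -lgmp */
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t x;
    mpz_init(x);

    mpz_ui_pow_ui(x, 2, 512);  /* x = 2^512 */
    mpz_add_ui(x, x, 1);       /* x = x + 1 */

    gmp_printf("%Zd\n", x);

    mpz_clear(x);
    return 0;
}
```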
IIUC the problem is related to the test suite, which is probably very handy if you ever want to fry an egg on top of your micro.
A valid question, I think, in this context. I knew about the GNU multiprecision library, but thought that couldn't be it, as it's "just" a highly optimized low-level bit-fiddling lib (at least that's my expectation without looking into the source), so it's strange that it could be damaging hardware...
The domain has the answer: https://gmplib.org/
At first I thought it was Green Mountain Power ;)
One day I’ll understand why some websites refuse to have a way of navigating to the home page. I had to edit the URL in the address bar.
I just wanted to find out what GMP is.
arbitrary-precision/bignum library