
It is not a waste. These halo products help drive performance and efficiency improvements to mainstream products. With every generation, CPUs become more efficient when looking at performance per watt. Intel and AMD CPUs are more efficient than ever.



Sort of, but also not really: performance per watt isn't the complete measure of improvement. You need to look at "performance, per watt, per wasted watt": if your high-performance CPU uses 75W just to stay powered on, then it's almost certainly not an improvement over a slower CPU that burns less energy just keeping the cores powered at all.

For example, let's contrast the 13700K with the ancient 7920X. The 13700K benchmarks at 47106 with a TDP of 250W, for a performance per watt of about 188. Compare that to the 7920X, which benchmarks at roughly half that, 23607, with a TDP of 140W, for a performance per watt of less than 170. The 13700K is clearly an improvement if we stopped there!

Except we don't, because the wasted watts matter a lot: the 13700K needs 75W just to power its cores, whereas the 7920X needs 50W. Adjusting our performance per watt to performance per watt per wasted watt, we get 2.5 for the 13700K, but 3.4 for the 7920X. That old CPU is a lot better at turning energy into work.

The 13700K is unquestionably a higher-performing CPU than the 7920X, and I doubt anyone would object to calling it a much, much better CPU, but it's very hard to call the newer CPUs an improvement in terms of energy consumption with a straight face. CPUs have gotten quite a bit worse =)
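
For anyone who wants to poke at the arithmetic, here it is as a throwaway Python snippet. The benchmark scores and TDPs are the ones quoted above; the 75W and 50W baseline figures are my estimates from this comment, not official specs:

  # perf per watt, and perf per watt per "wasted" (baseline) watt
  def efficiency(score, tdp_watts, baseline_watts):
      perf_per_watt = score / tdp_watts
      return perf_per_watt, perf_per_watt / baseline_watts

  print(efficiency(47106, 250, 75))  # 13700K -> (~188, ~2.5)
  print(efficiency(23607, 140, 50))  # 7920X  -> (~169, ~3.4)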


I'm not sure this is the right way to look at it.

If you take it to an extreme, the flaw is apparent. Let's say "bogomips" is the name of a real-world, accurate benchmark.

If a CPU at full performance gives 100 bogomips at 2 watts and idles at 1 watt, then by your metric the score is 50.

On the other hand, if a CPU at full performance gives 200 bogomips at 2 watts and idles at a small fraction under 2 watts, your metric also gives a score of ~50.

It's obvious the 200 bogomips processor is way more efficient than the 100 bogomips processor. Something is missing.
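
To put numbers on that extreme case (these are the made-up figures from the thought experiment, not measurements):

  # metric from the parent comment: (bogomips / full-load watts) / idle watts
  def score(bogomips, full_load_watts, idle_watts):
      return (bogomips / full_load_watts) / idle_watts

  print(score(100, 2, 1))     # slow chip -> 50.0
  print(score(200, 2, 1.99))  # fast chip -> ~50.3, despite doing twice the work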

I think both idle watts and TDP are somewhat irrelevant. Maybe it should be bogomips / actual watt draw (different from nominal TDP) at full speed, assuming you can keep the processor busy. Not being able to keep the processor busy doesn't really reflect on processing efficiency, except that it's better for the wasted watts to be as low as possible.

A true efficiency, like a true benchmark, is elusive, because what is "normal use"? Somewhere between "no work, all waste" and "full use, maximum efficiency".


I'm not sure these metrics are getting at the root problem:

The 13900KS boosts to 6 GHz at 320W.

The 13900K boosts to 5.8 GHz at 253W.

That’s a 3.4% increase in clock speed at the cost of a 26.5% increase in power. The marginal power cost for the frequency increase is way out of line.
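
The arithmetic behind those percentages, using the boost figures quoted above:

  print(6.0 / 5.8 - 1)   # ~0.034 -> ~3.4% more clock
  print(320 / 253 - 1)   # ~0.265 -> ~26.5% more power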


Also, only 2 of the 24 cores boost to 6 GHz.

  "but the extra 'S' in the name denotes that this is premium-binned silicon that hits 6 GHz on two cores — 200 MHz faster than the 12900K"


Easy(ish) solution. Just have two CPUs. Turn off the fast one when idle. This is what some architectures already do at various levels.


I do this in effect. For things that don't rely on high core counts and memory bandwidth, advanced CPU or GPU features, or complex environments, I use a $150 Chromebook.

These new Intels are desktop CPUs. They also have Performance and Efficiency cores. Ideally, they'd prioritize the E-cores, using only as many as needed to complete tasks within an acceptable period. In practice, though, they're not very smart about it, and you've got to get into overclocking and undervolting to reach something that resembles AMD's TDP-limited Eco Mode, which provides 80% of the performance at 50% of the power.
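
For what it's worth, you can approximate that prioritization by hand. A minimal sketch, assuming a Linux box and a 13700K-style core layout where the E-cores enumerate after the hyperthreaded P-cores (verify the IDs with lscpu before trusting them):

  import os

  # Assumed layout: on a 13700K the 8 P-cores with hyperthreading usually
  # enumerate as CPUs 0-15 and the 8 E-cores as CPUs 16-23. Other SKUs differ.
  E_CORES = set(range(16, 24))

  # Pin the current process (pid 0 = self) to the E-cores, then run the
  # background work; the scheduler keeps it off the P-cores.
  os.sched_setaffinity(0, E_CORES)

It's a blunt instrument compared to a scheduler that actually understands the workload, but it does keep low-priority stuff off the power-hungry cores.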



