Intel Launches $699 Core i9-13900KS, the First 6 GHz CPU: Available Now (tomshardware.com)
124 points by rbanffy on Jan 12, 2023 | 149 comments



300W PL1/PL2 limits compared to 253W on the 13900K. I know these chips are made for specific reasons - to win benchmark crowns - but both Intel and AMD are cranking power through the roof to get the smallest of marginal gains. Such a waste.


It isn't a waste. Think about how many Electron apps you can run simultaneously now!


2?


You both cracked me up :D


That's a bit optimistic isn't it?


I think this was a rather abnormal usage of the word simultaneous.


It allows you to scroll through kilobytes of Slack messages, instantly!


It is not a waste. These halo products help drive performance and efficiency improvements to mainstream products. With every generation, CPUs become more efficient when looking at performance per watt. Intel and AMD CPUs are more efficient than ever.


Sort of, but also not really: performance per watt is not the complete measure of improvement, you need to look at "performance, per watt, per wasted watt": if your high-performance CPU uses 75W just to stay powered on, then it's almost certainly not an improvement over a slower CPU that burns through less energy just to power the cores at all.

For example, let's contrast the 13700K to the ancient 7920X. The 13700K benchmarks to 47106, with a TDP of 250W, a performance per watt of 188. Compare that to a 7920X, which benchmarks to half that at 23607, with a TDP of 140W, a performance per watt that's less than 170. The 13700K is clearly an improvement if we stopped there!

Except we don't, because the wasted watts matter a lot: the 13700K needs 75W just to power its cores, whereas the 7920X needs 50W. Adjusting our performance per watt to performance per watt per wasted watt, we get 2.5 for the 13700K, but 3.4 for the 7920X. That old CPU is a lot better at turning energy into work.
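
In Python, with the same numbers (the scores, TDPs, and idle draws above, not spec-sheet values):

    # "Performance per watt per wasted watt" using the comment's own numbers.
    chips = {
        "i7-13700K": dict(score=47106, tdp=250, idle=75),
        "i9-7920X":  dict(score=23607, tdp=140, idle=50),
    }
    for name, c in chips.items():
        perf_per_watt = c["score"] / c["tdp"]        # ~188 vs ~169
        per_wasted    = perf_per_watt / c["idle"]    # ~2.5 vs ~3.4
        print(f"{name}: {perf_per_watt:.0f} perf/W, {per_wasted:.2f} perf/W per wasted W")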

The 13700K is unquestionably a higher performing CPU than the 7920X, and I doubt anyone would object to calling it a much, much better CPU, but it's very hard to--with a straight face--call the newer CPUs an improvement in terms of energy consumption. CPUs have gotten quite a bit worse =)


I'm not sure this is the right way to look at it.

If you take it to an extreme, the flaw is apparent. Let's say "bogomips" is the name of an accurate real-world benchmark.

If a CPU at full performance gives 100 bogomips at 2 watts and idles at 1 watt, by your metric the score is 50.

On the other hand, if a CPU at full performance gives 200 bogomips at 2 watts and idles at a small fraction under 2 watts, your metric also gives a score of ~50.

It's obvious the 200 bogomips processor is way more efficient than the 100 bogomips processor. Something is missing.
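
Plugging those two hypothetical chips into the metric makes the problem concrete (a tiny sketch; 1.99 W stands in for "a small fraction under 2 watts"):

    # score / full-load watts / idle watts
    cpu_a = 100 / 2 / 1.00   # = 50.0
    cpu_b = 200 / 2 / 1.99   # ~50.3 -- same score despite doing twice the work per watt
    print(cpu_a, cpu_b)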

I think both idle watts and TDP are somewhat irrelevant. Maybe it should be bogomips / actual watt draw (different from nominal TDP) at full speed. Assuming you can keep the processor busy. Not being able to keep the processor busy doesn't really reflect on processing efficiency. Except that it is better for the wasted watts to be as low as possible.

A true efficiency, like a true benchmark is elusive, because what is "normal use"? Somewhere between "no work, all waste" and "full use, maximum efficiency".


I’m not sure these metrics are getting to the root problem:

The 13900KS boosts to 6 GHz at 320W.

The 13900K boosts to 5.8 GHz at 253W.

That’s a 3.4% increase in clock speed at the cost of a 26.5% increase in power. The marginal power cost for the frequency increase is way out of line.
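
Back-of-the-envelope, using those boost figures (which also makes the marginal cost of the last 200 MHz explicit):

    base_mhz, base_w = 5800, 253   # 13900K
    ks_mhz, ks_w     = 6000, 320   # 13900KS
    print(100 * (ks_mhz / base_mhz - 1))          # ~3.4% more clock
    print(100 * (ks_w / base_w - 1))              # ~26.5% more power
    print((ks_w - base_w) / (ks_mhz - base_mhz))  # ~0.34 W per extra MHz...
    print(base_w / base_mhz)                      # ...vs ~0.044 W/MHz average on the 13900K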


Also, only 2 of the 24 cores boost to 6 GHz.

  "but the extra 'S' in the name denotes that this is premium-binned silicon that hits 6 GHz on two cores — 200 MHz faster than the 12900K"


Easy(ish) solution. Just have two CPUs. Turn off the fast one when idle. This is what some architectures already do at various levels.


I do this in effect. For things that don't rely on high core counts and memory bandwidth, advanced CPU or GPU features, or complex environments, I use a $150 Chromebook.

These new Intels are desktop CPUs. They also have Performance and Efficiency cores. Ideally, they'd prioritize using E-cores, and only as many as needed to complete tasks within an acceptable period. In practice, though, they're not very smart, and you've got to get into overclocking and undervolting to get them into a state that resembles AMD's TDP-limited ECO Mode, which provides 80% of the performance at 50% of the power.


The NVIDIA A100 is like 400W and being dumped by the truckload into datacenters as fast as the fabs can deliver them. I don't think a little "waste" in premium desktop CPUs that are going to struggle to sell more than a half million units is worth crying over.

For sure this is more fun to watch than another liquid cooling overclocking rig.


Yeah, also this CPU's user won't utilize it at 100% 24/7, so it's the same efficiency as the non-K most of the time.


The average GPU utilization in AI clusters is nowhere near 100%. More like 15-20% outside the hyperscalers, being generous, and maybe mid-30% for the hyperscalers during periods of heavy activity.

Way too much is gated on data movement at this point. Sure, CXL or other interconnect schemes may help, but there are whole-system challenges for scheduling, dependency management, and staging that drive these numbers.


DCs are paying $11k apiece for the A100, do ya think?


No one pays retail


cries in climate change


My CPU is powered by nuclear energy


And a lot of datacenter CPUs are also on renewables, and they're helped by the kind of work done on these halo / flagship consumer products. After all, power efficiency in datacenters is probably more important in aggregate, given they're likely going to go onto legacy power grids over time as hyperscalers switch generations (oftentimes due to greater power efficiency).

Some of the engine work done for drag racing cars did make its way into consumer cars, and it's not like grandma and grandpa are ever going to be running these high-end CPUs either.

Power efficiency gains in recent generations have been pretty incredible - getting somewhere around 95% of the aggregate performance at half the aggregate power consumption of these processors is really crazy.


It’s a bit of propaganda that the DCs are powered by renewables.

https://gerrymcgovern.com/data-centers-greenwashing-par-exce...


On the AMD side, using ECO mode, which drastically lowers the power limit, barely has any effect on performance (except, I think, the 65W mode on the 7900X).

It really is pretty wasteful, but not much can be done I suppose.


I took a grad school processor design course. There was an equation for calculating the approximate power draw of a processor, and there was an f^4 or something in there, f being frequency. The 6 GHz clock speed has a lot to do with the power consumption.


No, it's linear with frequency, and proportional to the square of the voltage. Outside of the frequency "sweet spot" you have to crank up the voltage though... and leakage increases as well.

https://en.wikipedia.org/wiki/Processor_power_dissipation
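
The first-order model behind that (the standard dynamic-power approximation; the near-cubic scaling only follows under the extra assumption that voltage has to rise roughly in step with frequency once you're past the sweet spot):

    P_{\mathrm{dyn}} \approx \alpha\, C\, V^2 f
    % if V must rise roughly linearly with f past the sweet spot:
    P_{\mathrm{dyn}} \propto f^3 \quad (\text{plus static/leakage power, which also grows with } V)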


Guess I'm misremembering


Agreed, but just shouting out the brand new AMD 7900 non-X. 12 cores with a 65W TDP is pretty cool; if I were making a power-efficient workstation or ITX build right now, that's almost certainly what I would use.


The non-X products make it clear that the “X” products are just way out of line on the performance/power scale.


The non-X line just obsoleted the X line due to PBO, as long as they're priced lower.


There's a setting in BIOS (PBO) that you can toggle to make the non-X perform just like the X. They're almost the exact same part, probably just binned a little higher.


Just buy the X and set the TDP to 65W in the BIOS.


Not until a startup proposes to sell you a personal workstation that can double as your household hot water and central heating boiler... :-)


I recently started doing that in winter with Folding@Home.

Probably reduces its lifetime, though.


Turns out it isn’t even 300 watts, it is 320!


I couldn’t care less about power consumption in this couple-hundred-watt range. The difference is probably a couple of dollars a month with home computer usage.


Such a waste.

...said people about going to the moon.

...said people about organ transplants.

...said people about the horseless carriage.

...said people about computers.


> Intel recently demoed the chip hitting 6 GHz on two cores with standard off-the-shelf Corsair AIO water cooling

So only 2 cores. They forgot to say for how many microseconds can the chip sustain this speed.

And 400 A ? Per nanosecond or what ? Thank god it will mainly wait for I/O and has enough time to cool itself. /s


> 400 A ? Per nanosecond or what ?

Amps are already over time — 1 amp is 1 coulomb (6e18 electrons) per second.


I think the parent comment was saying "how long was 400 A sustained for".


That is what they were asking in the sentence before.

They forgot to say for how many microseconds can the chip sustain this speed.


When we talk about current we are usually talking about the rate. So the question is: for how long can the chip sustain this level of current.


That was funny.


My i9-13900K can sustain 6.4 GHz on 4 cores with just a Noctua air cooler for a minute or two before having to throttle down some due to heat. And that is just a decent air cooler.

People have this weird view of modern chips. Yeah, they /can/ use a lot of power and produce a lot of heat... under punishing loads that most people don't actually hit for any sustained period. Even gaming, you aren't going to be pegging all 24 cores.


What do you use it for?


I do a lot of hobby video editing and restoration. Got tired of some of the processing taking literal days on my old system so got this one. So much faster lol.


Nitpicking. And 400 A ? Per nanosecond or what ? Amperes are already coulombs per second.


It's okay for a unit to have multiple "per second"s in it. A/ns is simply a measure of how fast the current ramps up or down.


Sure, but that's beside the point. The article talks about the peak current, so why would you want to talk about the current gradient here? Not that the current gradient can't be important - it will determine how quickly the processor can change the clock frequency - but this seems to just be the same confusion that often happens between energy and power.

EDIT: Just out of curiosity I looked up what voltage regulator slew rates for modern processors are, and it seems to be on the order of 100 A/µs, in case someone is interested.


It isn’t beside the point; it is like saying a car can do 250 mph, but can only do it for 1 second before the tires explode.

A CPU that can hit 6 GHz while pulling 400 amps but can only do that for a few milliseconds before throttling itself is not particularly impressive.


Look at the original comment again, they address this very point.

They forgot to say for how many microseconds can the chip sustain this speed.

This is exactly what you are saying: they are asking how long it can run at 6 GHz before it has to throttle. Then they continue with the sentence I originally responded to.

And 400 A ? Per nanosecond or what ?

I see mainly two possibilities. The first one is that they made a typo and what they actually wanted to write is »And 400 A? For one nanosecond or what?« which would essentially be repeating the exact same question as in the sentence before, just in terms of current instead of clock frequency.

But this is not what they wrote, they wrote »And 400 A? Per nanosecond or what?« which leads to the second possibility and the one I consider more likely. They were confused about the unit, the equivalent of responding to »The average US citizen consumes 10 kW of power.« with the question »Per day? Per month? Per year? In its lifetime?« which obviously makes no sense. That would be an appropriate response to the statement »The average US citizen consumes 89 MWh of energy.«

Without the author telling us, we will never know for sure, but I find it more likely that they were confused about the unit than that they repeated the exact same questions twice and made a typo in the second one that just makes it look like they were confused about the unit.


400A (amps) is ICC_max, which is how much current the CPU is allowed to pull when working at 100%.


These processors are getting ridiculous. It seems like you definitely need at least an AIO watercooler with a 360mm radiator for the i9 these days. I have the i7-13700K, and it immediately goes up to 100C with the stock boost/TVB settings. This is with a dual-tower Dark Rock Pro 4 heatsink, which was more than adequate to cool an i9-9900K.


> It seems like you definitely need at least an AIO watercooler

There is another problem with watercooling, which is that once the water warms up (which it will under sustained loads) your CPU temperature will begin to climb.

Then it becomes a game of "do I have enough thermal mass to run my job before throttling". Often the radiators are unable to effectively cool down the CPU, but the additional water acts as thermal capacity, sometimes giving the appearance of better cooling when in actuality your system will throttle after 10 minutes of work.
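
A back-of-the-envelope sketch of that effect (the loop volume, temperature headroom, and excess wattage below are assumptions for illustration, not measurements):

    # Time until the coolant warms up enough for the CPU to throttle:
    # energy needed to heat the water / heat the radiators fail to remove.
    water_kg     = 1.0    # ~1 litre of coolant in the loop (assumed)
    c_p          = 4186   # J/(kg*K), specific heat of water
    headroom_K   = 25     # allowed coolant temperature rise before throttling (assumed)
    excess_watts = 70     # heat produced minus heat actually shed by the radiators (assumed)

    minutes = water_kg * c_p * headroom_K / excess_watts / 60
    print(minutes)   # ~25 min with these numbers; more excess heat or less water -> sooner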

Your heatsink is one of the better ones; an NH-D15 might be better, but only slightly. Past that point, I don't think a CPU like this can be expected to hold a sustained load without throttling on any AIO system, even the triple-fan ones.

I'm probably not communicating this terribly well. I went down this rabbit hole when I was watercooling my old GPU and CPU in a custom loop by EK, and I ended up discovering that if I wanted to do sustained work (which I did), then a direct airflow system would have worked better. So I ripped out the loop, and it did work better for sustained loads.


> sometimes giving the appearance of better cooling when in actuality your system will throttle after 10 minutes of work.

If it throttles after 10 minutes instead of after 20 seconds, that is in fact better cooling. The higher thermal mass is one of the major benefits of water cooling over air cooling for this reason.


It's a pretty useless quality for a gaming PC.


No game continuously utilizes the CPU at 100%.


Back at home, I had my water-cooling fans right in front of the air conditioner. Yeah, I know, not the most efficient type of cooling, but I definitely saw some difference. Before that, during a 6-8 hour continuous gaming session, I could definitely feel my legs warming up (the computer was below the desk) and game performance taking a bit of a hit.

I think the best setup would be ice cylinders. You put 3-4 packs of ice cylinders that hold a good amount of ice in the freezer. Then you plug them into your watercooling loop to cool the water. Probably someone is working (or did work) on a setup like that.


The top overclocker right now isn’t even using water cooling; he’s using the Ice Giant coolers, which use a thermosiphon with fans to cool the CPU. They’re chunky and can only work if your motherboard is sideways or flat on the ground (otherwise it messes up the tech), but they’re really excellent, with no risk of water damage like a loop would have.


Basically it sounds like your water cooling system was undersized. You didn’t have a good method to cool the water, so you just built up heat in your working fluid.

The air did a better job because it was open. So your average initial temp was higher but could be sustained, because you always had cool air.

If you had a radiator with a fan, it’s likely the water solution would have still worked better.


I understand how you can come to that conclusion, but it was really oversized for what it was cooling.

2x 240mm rads

1x pump

1x 80mm reservoir

The GPU was an AMD FirePro K2000 (not an epic GPU by any standard).

The CPU was an i7-4930K, which is pretty paltry by today's cooling standards.


It sounds like it subjectively _felt_ oversized, but objectively, it was undersized


Meanwhile I have an i5-9600K in a case that is smaller than a PlayStation 5, using a cooler that is only a bit bigger than those old stock coolers. It comfortably idles around 40C and peaks at 70C.

What are people even doing on these newest CPUs? Video rendering? Servers? Because if they are playing games it is a huge waste of power.


Not sure how this generalises, but for video editing - less so rendering, which is already fast for most workloads, but the actual tasks of editing, grading, etc. - CPUs are still nowhere near fast enough. You can easily bring a modern machine to its knees realtime editing 4K or 6K H264 / H265 video with grading at native resolution. Let alone 8K video. Dedicated pipelines for decompressing various video codecs on the fly (including ProRes) are part of the reason MacBook Pros sell so well for video tasks. Another is that you can run them unplugged without slowdown, which is incredibly useful for video editors (good luck editing on a train, plane or back seat of a cab with a Windows laptop).

All that said, DaVinci Resolve, which is increasingly taking over from Premiere for many commercial editors, is more GPU than CPU bound. Lots of sales of the current fastest GPU, Nvidia's RTX 4090, to this market.


If you ran a newer CPU at the effective performance of a 9600k you could probably get away with a passive cooler.

A last-gen mobile i5 outperforms a 9600K with a 45W TDP.

People make a ton of noise about the efficiency of processors that are explicitly designed to be the hotrods of desktop computing; meanwhile we've also seen the bottom end get significantly faster over the years, while the floor for TDPs has barely risen.


Why is rendering video (likely for entertainment) not a waste of power, where playing video games (entertainment) is?


Ah, that is not what I meant. Such a powerful CPU is a "waste" in a computer meant for gaming because it can never be utilized. It will never be the bottleneck of such a system and you can run your games just as well on a less powerful CPU.

The energy used for such activities is another question.


It depends on the game, settings the user is running it at, and desired framerate and resolution. A number of games are partially CPU-bound and not well multithreaded, which means the only way to get them consistently at or above certain framerates is to pump IPC, which usually means buying the higher end CPUs.


Are there any games out right now that an i5-9600k would struggle with?


DCS[0] in multiplayer. Horribly single-threaded at the moment; no CPU on the planet is good enough for a really busy map running a bunch of detailed flight and systems models. They’re working on it, but they’ve been working on it for years. Typical small-market development constraints.

[0] https://www.digitalcombatsimulator.com/


Yes, definitely. Many FPS gamers, as a matter of visual advantage, want to run their game at the highest possible framerate, or at least as fast as their display. This usually means 144-300+ fps. And it needs to be pegged at a consistent rate (so even in the most graphically intense moments). Achieving this even on low settings and with a high-end GPU is not always possible, and a high-end CPU and high-end RAM are required for consistency, especially at 1440p and 4K.


There are some that won't consistently stay above certain framerates in some circumstances. World of Warcraft for instance performs tangibly better on newer CPUs in settings like 40v40 team fights in battlegrounds for example due to how CPU-bound it is — on a 9600k it'll struggle to stay above 120FPS and occasionally dip below 60FPS.


Cities: Skylines, once you get to a fairly large city size. Arma 3 can also get extremely CPU-heavy depending on the map.


I was using about 60% of my CPU running Cyberpunk 2077 at 1440p 120fps. Maybe it would matter if you wanted to run it at 8K using an RTX 4090.


My take is this:

If I see 90%+ CPU load (being "efficient"), I need a bigger CPU.

If I hardly see any CPU load, I am being effective.

Ergo, the biggest CPU I can afford is the best choice.

I don't consider unused CPU power a waste; I see it as relief from worrying about capping out.


90%+ CPU load at... what frequency? That rule doesn't make a lot of sense unless you are meaning "90%+ load while sustaining maximum core clocks"


At whatever maximum frequency.

Let me put it a different way: if I'm playing a game and I'm barely using 20~40% CPU, I'm doing good. That 60~80% of unused CPU processing headroom is my ability to be worry-free about capping out.

Which is to say: Yes, my 12700K is massive bloody overkill for my day to day activities, but it is 300% adequate for my needs and concerns and I love it. I'll probably upgrade it to a 13700K or 13900K(S) depending on the price a year or two down, since my mobo can accept it (thanks Intel!).


Modern systems are going to ramp the clock speed as utilization increases; if you're seeing sustained 90% CPU, chances are you're running pretty close to the sustained clock limit.

Edit: I accept your findings, and will pay closer attention to see if I can see the same.


I would have agreed with you on this if it weren’t for the fact that I have noticed a lot of newer systems (granted, running Windows) having sustained 80-90% loads, and when I go to investigate they are several steps from max frequencies.

These have been 7th-gen and higher i5/i7 machines and newer J4xxx-series Celerons.

Haven’t had any AMD kit to compare though.


And I should clarify that they haven’t been thermally throttled.


Aah. Kk, thanks for the reply. I'm a dev, so I like having those GHz around for other uses as well.


Some games are still single-thread CPU bound, e.g. KSP (even after the multithreading enhancements were implemented, my understanding and anecdotal sense is that most of the bottlenecking math is still sequential in one thread).


I have an i9-13900K with just a Noctua air cooler, and it'll happily churn along with the performance cores loaded at 5.5-5.8 GHz and the efficiency cores at whatever their peak is (I've forgotten off the top of my head), and it sits around 89C.

so... don't get a top end model or undervolt/underclock it if you care that much?

I don't get all the weird complaining about /halo products/ and /highest end models/ potentially using a lot of power and producing a lot of heat.

It's like, OK? and?


I use a Mac Studio with an M1 Ultra and I have never even heard the fan. I don't miss the white noise at all. I routinely use all 20 cores too.


The Ultra Studio doesn't make sense to me because it costs as much as two Max Studios, which together will outperform one Ultra, and the difference isn't negligible.


I love people saying it's a waste and too much power, blah blah.

It isn't going to pull /320W/ all the time. It will periodically, under certain loads, briefly.

My i9-13900K happily slams into 6.4 GHz across multiple cores when doing things, and power usage rarely actually hits ~230W doing that. I only ever see absurd power usage under really heavy workloads. Most of the time, it'll hit its peaks on a couple of cores when I am doing something and that's that. I would rather have slightly higher, peaky power usage in exchange for more performance.


Does power usage really matter for a desktop computer? This power efficiency thing/environmental concern is really a gimmick. There are other industries/things that do way more damage than your computer's CPU. Your old car included.


Some apps tend to max out the CPU. The number of times I'm just doing email and my laptop CPU spins up with all 8 cores maxed out due to a runaway process like McAfee virus scan or something similar is not fun. So I'd assume they could pull a few hundred watts here too if allowed. Ended up switching to a Mac, and now it's Office 365 that regularly maxes out my CPU (and WindowServer along with it).


Right? There's a reason supercars/hypercars exist and not every car is a boring tesla.


This is great for the price. My main complaint is power consumption. Intel is still delivering wonderful performance, but the performance per watt isn’t great at all.


Is there a reason why people are so stressed over high power consumption in new CPUs? The majority of the time it won't use anywhere near that, and I just accept it as a “stock OC”.


I think the concern is scalability. Very poor performance per watt is a signal that the processor design will be difficult to scale practically.

Beyond that, at scale, is it good to have high-consumption computers all over the world before we have widespread renewable energy? Probably not ideal.


They are usually sipping power is my point. The high boosts being possible is a bonus if you need it but won’t hurt if you don’t. It’s the best of both worlds.

As far as energy use goes, it's such a tiny amount of energy compared to any given appliance.


For me it's about how much heat they put out. I have to crack a window in winter when gaming.

I think others are concerned with a trend that extrapolates to insane power consumption, where power use grows exponentially toward megawatts while clock speed grows asymptotically toward, say, 10 GHz.


> For me it's about how much heat they put out. I have to crack a window in winter when gaming.

Back in the Amiga era, I inverted the PSU fan of a very loaded 2000 (lots of memory, SCSI, Video Toaster, etc.) and piped the output of an air conditioner directly into it (with a little bit of epoxy sculpting). It stopped crashing and worked happily for a long time until being replaced by a boring Windows box running NT.


You never had condensation issues?


I had, but only in front of the computer, where the cold air hit the room air. The air from that AC unit was very dry - to the point of similar ones being replaced due to complaints.


When I’m gaming my CPU (11700K) puts out about 50 watts (some cores moderately busy). The GPU (3080) puts out 320 watts!!!

Tbh I like it in winter, warms my living room up nicely. In summer I just run my aircons more, but I also usually game less because the weather is nice.


The alternative is lower performance for those who do want it. You can very easily undervolt a CPU if the heat is genuinely uncomfortable.


I can't agree with you. The CPU will try to boost any time you are doing work, and these models are overclocked to the max at the expense of an insane power consumption overhead. My previous Intel laptop was getting spikes of 50+ watts for things like rendering a website or opening files, and I can only imagine how bad it is with current desktop models. If it were the case that the CPU entered its maximal turbo only once the system had detected that you are doing sustained, demanding work — you would be right, but all of these dynamic overclocking systems are opportunistic. As someone mentioned above, they are designed to win benchmarks.


> is it good to have high-consumption computers all over the world before we have widespread renewable energy?

I doubt too many computers based on these parts will be running that hot for any significant time. They're much more likely to experience bursty loads - compiling something big, crunching a massive amount of data - than sustained ones, as on a build server or something similar.

If the CPU is managing GPU-based number crunching, keeping it busy will require a lot of GPUs, which will make the CPU's thermal output a rounding error next to the rest of the data furnace.


power = heat = noise.

I don't like noise.


Yes, but either you need your computer to work, and therefore you are willing to accept the fact that energy = work, or you don't need the computer to do anything, in which case it will just sit there silently drawing ~1W.


Or you choose a CPU that does more work per unit of energy because you value silence


I really wish perf per watt would catch on as the metric to chase for enthusiast/prosumer-oriented desktop CPUs. Or be advertised at all, really.


Perf per watt isn’t interesting to users who want the job finished. Work done per joule is extremely interesting but Intel is near the top of the game on that metric. Many CPUs that are viewed as low power are not energy efficient.

On an energy basis the “performance” cores in an Intel CPU are more efficient than the “efficiency” cores.
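
A toy illustration of the distinction, with made-up numbers (the only point is that energy for a job is power integrated over time, not the instantaneous perf/W ratio):

    # Two hypothetical chips run the same fixed job, then idle for the rest of an hour.
    def job_energy_wh(active_w, idle_w, job_s, window_s=3600):
        return (active_w * job_s + idle_w * (window_s - job_s)) / 3600

    print(job_energy_wh(active_w=150, idle_w=5, job_s=600))    # "hot" chip, done in 10 min: ~29 Wh
    print(job_energy_wh(active_w=60,  idle_w=5, job_s=1800))   # "low power" chip, 30 min: ~32.5 Wh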


They already are though. The currently shipping Intel and AMD chips perform similarly at 65W.


TDP and actual power draw are two VERY different things. Intel and AMD both will plaster 95W or 125W or 160W TDPs on chips that will draw well over 250W when boosting as long as your cooling allows for it.


Your second case is rather...optimistic given the amount of background work any modern OS does.

My i9 desktop will start to spin the fans up with light web browsing.

To be precise, even at true zero load, an i9-10850K (what I've got) draws 25-30W. Any sort of load at all and it gets to 60+W. A gaming-type load is well over 125W.


If you are measuring that correctly it means that your operating system or something about your platform is utterly broken. `turbostat` and `powertop` on my i7-13700K indicate that it is currently, at "true zero load", drawing 1.1W


Not my measures… measurements by people who run hardware websites.

The i9 is much hotter than the i7, especially the high-end ones. When I bought the CPU, the only thing higher (without getting into server-grade stuff) was the 10900K, which was basically unobtainium.


Maybe you should choose a better operating system?


No. That's just the modern web. If a person needs to use the modern web, he/she needs to use a browser. If he/she needs to use a browser, it is almost always going to be Chrome (or something built on Chrome). Chrome is heavy on Linux, BSD, and macOS as are many of the pages it loads. The web can spin up fans on Intel, AMD, and ARM when I am running NetBSD, Slackware, or Intel Clear Linux.


Put a CPU limit on the process. I do it for gaming to keep Dwarf Fortress from pinning my CPU even in the menus.
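
For example, a minimal sketch of one way to do that from a script, assuming Linux with the third-party psutil package installed; it pins the process to two cores and lowers its priority rather than enforcing a percentage cap, and "DwarfFortress" is just a placeholder for whatever the process is actually called:

    import psutil

    # Confine the game to two logical CPUs so a busy-loop in the menus
    # can't peg the whole package, and deprioritize it a bit.
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and "DwarfFortress" in proc.info["name"]:
            proc.cpu_affinity([0, 1])   # restrict to logical CPUs 0 and 1
            proc.nice(10)               # raise the nice value (lower scheduling priority)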


That’s more related to fan choice (Noctua is the pro move) but it’s still going to be idling most of the time.


In a closed room the heat isn't down to the fan; that heat may not be in the case, but it's not getting pumped out of my room quickly. I'm basically running a 900-watt heater whenever I'm doing Hangouts or gaming.


But the new chips are already shipping with an “ECO mode” for small-form-factor computers. That just makes it easy, though; undervolting was already popular.

The alternative is shipping a power-constrained part for… no gain? Letting the CPU boost this high is only a boon for people who want it, and the downside is imagined.


Neither Windows nor Call of Duty seems to care about those settings. Happy to run at full blast all the time. And then there's the 3090.


I have an NH-D15; it's still extremely audible in a quiet room.


I'm generally most interested in performance at peak, and consumption at idle, since my usage typically bounces between those two extremes and spends little time in-between.


After owning an i9 for a while... I care about power usage because all the power gets turned into heat. i9s like to run HOT, even under fairly mild loads.


The CPUs with the best power and performance characteristics get binned into mobile and IoT SKUs.

But those i9 KS SKUs can achieve a couple hundred extra MHz, and have a much higher base clock speed.


I believe you can set the power limit in the BIOS, so you can get most of the performance without so much heat.


MHz wars all over again.

This marketing "optimization" usually leads to cycle inefficiency and excessively deep pipelines, causing expensive pipeline stalls should a branch not be predicted correctly.

https://en.wikipedia.org/wiki/Megahertz_myth
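
The textbook first-order model of that tradeoff (not specific to any of these chips): performance goes as frequency over cycles-per-instruction, and a deeper pipeline raises the flush penalty paid on every mispredicted branch, so the extra clock can be given back in stalls.

    \text{Perf} \propto \frac{f}{\text{CPI}}, \qquad
    \text{CPI} \approx \text{CPI}_{\text{base}} + (\text{branch mispredict rate}) \times (\text{flush penalty} \sim \text{pipeline depth})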


Is the 6 GHz burst mode faster per MHz, per core, compared to, say, an AMD Athlon T-Bird @ 1.4 GHz?


Interesting. I had a copper water block on a T-Bird.


Passmark says they're #1 in single thread: https://www.cpubenchmark.net/singleThread.html


At 6GHz, that should be a given.


6 GHz boost, not base.


Looking at that chart, it seems to me that the 7950X is still a much better option than this, even though it has a marginally lower boost clock. AMD is still killing Intel on perf/watt.


Pretty impressive that FinFET has finally surpassed planar transistor clocks... just in time for the change to GAAFET! ;)

But seriously, I heard arguments for a long time that Bulldozer would probably be the highest-clocking LN2 CPU ever built, because of the combination of planar transistors and a frequency-optimized architecture. It seems like we stalled around 5 GHz forever with FinFET, and then 10ESF/Intel 7 just casually blows most of the way to 6 GHz in pretty much a single generation.


With that power consumption it looks just like an overclocked CPU.


That's what it is. The KS suffix means golden-sample/top-binned silicon running at higher-than-normal clocks.

To be fair these chips will also be amazingly efficient at more modest clocks, because they're top-binned, but by default they're just pushing for max clocks above all else.

For some fun, compare the 9900K to the 9900KS on SiliconLottery's historical statistics page... 95% of 9900KS chips will do 5 GHz all-core at 1.25V, vs only 30% of 9900K CPUs making 5 GHz at 1.3V, for example. Or the top 5% of 9900K chips will do 5.1 GHz at 1.312V vs the top 28% of 9900KS chips doing 5.1 GHz at 1.287V. Way, way better silicon.

https://siliconlottery.com/pages/statistics


As someone who only needs raw CPU single-thread power when waiting for an executable to link (with the stock linker for Rust, a release build takes minutes, which is not much time but also a lot of time), I wonder whether I should go with the new M2 Max or build a desktop computer (99% of the time I will be sitting at my desk) with these shiny CPUs to get the most single-thread power. Parallelization does not really help in my workloads (especially compiler stuff, since everything is pretty much incremental, and once I take the first build it is only linking).


For absolute maximal single-threaded power, the current crop of Intel desktop CPUs is probably the best. But they pay a tremendous price in power usage for this. A passively cooled M2 will get you around 80-90% of that single-threaded performance at 1/10 the power consumption or so.


You should definitely check out Mold:

https://github.com/rui314/mold


Why not switch to a faster linker if linking is your bottleneck? Mold will use all your cores. And even LLD will likely be much faster than the stock linker.
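
For what it's worth, a sketch of the usual way to wire mold into a Rust project on Linux (assuming clang and mold are installed; the target triple may differ on your machine) is a couple of lines in .cargo/config.toml:

    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]

Running builds under "mold -run cargo build" is the other commonly documented option.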

I don’t think there’s much in it between Apple, Intel, and AMD with regards to single threaded performance at the moment, so I’d probably make your decision based on other factors.


Given how much of computing is memory bound in loads that exceed cache, I appreciate AMD's efforts to expand L3 more than Intel's to increase clock speed.


Tomshardware also says the KS is 10% faster than the K model.

  Processor        Single-Core Score  Multi-Core Score
  Core i9-13900KS  2,319              26,774
  Core i9-13900K   2,227              24,311
  Ryzen 9 7950X    2,192              22,963
  Core i9-12900KS  2,081              19,075
  Core i9-12900K   1,988              17,324


A $110 premium seems like a steal for being able to get the very top binned CPUs


It doesn't seem bad for the price, but it's only a 3% higher clock than the i9 K and 9% faster than the significantly cheaper i7 K.


320W actual thermal, so figure on spending at least $200 on a good-quality, LARGE-RADIATOR liquid cooling system as well.

$699 + an estimated $350-400 motherboard + $200 cooling = $1,249-$1,299, or somewhere around there.


Do any review sites do multiple benchmarks at more than a few different power limits? I'd love to see a graph of power limit vs benchmark result for any given CPU.



Any suggestions for someone who's serious about cooling stupidly-hot components like these and doesn't mind plumbing new household infrastructure to do it?


These CPUs should be sold with EnergyGuide labels on them at this point, given the power usage discrepancies.


Does it have a water cooler attached to it behind the stage?


First 6GHz (off the shelf) CPU.


Prescott 2: Electric Boogaloo



