The article (or Intel) does not disclose how many cores the new architecture is designed to scale to, and I am certain Intel would say something like "With our P-, E-, LE-core designed architecture(tm), the core count doesn't matter anymore".
Also, the SoC with a built-in AI engine. Oh boy, I wonder how long it will take for AI-assisted malware or botnets to emerge. Exciting times!
They are using off-the-shelf cores that have to be good at everything from netbooks and industrial boxes to server workloads. Apple, meanwhile, is laser-targeting high-volume, premium, media-heavy, laptop-ish TDPs and workloads. And they can afford to burn a ton of money on die area and a bleeding-edge low-power process, and to target modest clock speeds, like no one else can.
This is such a weak argument. Just because it's not in a laptop does not mean a CPU should be accepted as a horrible waste of electricity. Making datacenters as efficient as laptops would not be a bad thing. I'm sure people operating at the scale of AWS and other cloud providers would be beyond happy to see their power bills drop with no loss in performance. I'm guessing their stockholders would be pleased as well.
Datacenters are actually exactly as efficient as laptops.
They consume more only because, unlike laptops, they do not sit idle.
The CPU cores in the biggest server CPUs consume only 2.5 W to 3 W per core at maximum load, which is similar to or less than what an Apple core consumes.
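To put rough numbers on that (the TDP, uncore budget and core count below are illustrative assumptions, not the specs of any real part):

    # Back-of-the-envelope per-core power estimate.
    # All figures are illustrative assumptions, not vendor specs.
    SOCKET_TDP_W = 360.0  # assumed socket TDP of a big server CPU
    UNCORE_IO_W = 100.0   # assumed budget for IO, memory controllers, fabric
    CORE_COUNT = 96       # assumed core count

    watts_per_core = (SOCKET_TDP_W - UNCORE_IO_W) / CORE_COUNT
    print(f"~{watts_per_core:.1f} W per core at full load")  # ~2.7 W/core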
The big Apple cores are able to do more work per clock cycle while having a similar clock frequency and power consumption to the server cores, but that is almost entirely due to their newer manufacturing process (otherwise they would do more work while consuming proportionally more power).
The ability of the Apple CPU cores to do more work per clock cycle than anything else is very useful in laptops and smartphones, but it would be undesirable in server CPUs.
Server CPUs can do more work per clock cycle simply by adding more cores. Increasing the work done per clock cycle in a single core, beyond a certain threshold, increases the area faster than the performance, which reduces the number of cores that fit in a server CPU and thus the total performance per socket.
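To make the area-versus-throughput tradeoff concrete, here is a toy model that assumes Pollack's rule (per-core performance scaling roughly with the square root of core area); the die area and core sizes are made up and only illustrate the direction of the effect:

    import math

    DIE_AREA = 400.0  # arbitrary area units available for cores (assumption)

    def socket_throughput(core_area: float) -> float:
        # Fixed die area split into equal cores; per-core performance is
        # assumed to scale as sqrt(core area) (Pollack's rule).
        cores = DIE_AREA / core_area
        return cores * math.sqrt(core_area)

    for area in (1.0, 2.0, 4.0, 8.0):
        print(f"core area {area:3.0f} -> relative socket throughput {socket_throughput(area):5.0f}")
    # Bigger cores are individually faster, but total per-socket throughput drops.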
It is likely that the big Apple cores are too big for a server CPU, even if they may be optimal for their intended purpose, so without the advantage of a superior manufacturing process they might be less appropriate for a server CPU than cores like Neoverse N2 or Neoverse V2.
Obviously, Apple could have designed a core optimized for servers, but they have no reason to do such a thing, which is why the Nuvia team split off from them. They were not able to pursue that dream, though, and went back to designing mobile CPUs at Qualcomm.
> I'm sure people operating at the scale of AWS and other cloud providers would be beyond happy to see their power bills drop with no loss in performance
- Datacenter CPUs are not as bad as you'd think, since they operate at a fairly low clock compared to the obscenely clocked desktop/laptop CPUs. A ton of their power is burnt on IO and things other than the cores.
- Hence operating more Apple-like "lower power" nodes instead of fewer, higher-clocked nodes comes with extra overhead from each node, negating much of the power saving (see the rough sketch after this list).
- But also, beyond that point... they do not care. They are maximizing TCO and node density, not power efficiency, in spite of what they may say publicly. This goes double for datacenter GPUs, which operate in hilariously inefficient 600 W power bands.
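To illustrate the node-overhead point with made-up but plausible numbers (the 150 W of per-node fans/PSU losses/DRAM/NIC and the node counts are pure assumptions):

    # Rough sketch: fixed per-node overhead eats into the savings from
    # running more, lower-power nodes. All numbers are assumptions.
    NODE_OVERHEAD_W = 150.0  # assumed fans, PSU losses, DRAM, NIC per node

    def fleet_power(nodes: int, cpu_w_per_node: float) -> float:
        return nodes * (cpu_w_per_node + NODE_OVERHEAD_W)

    # Same total CPU power budget split two ways, assuming equal throughput:
    fewer_hot_nodes = fleet_power(nodes=100, cpu_w_per_node=300.0)
    more_cool_nodes = fleet_power(nodes=200, cpu_w_per_node=150.0)

    print(f"fewer, hotter nodes: {fewer_hot_nodes / 1000:.0f} kW")  # 45 kW
    print(f"more, cooler nodes : {more_cool_nodes / 1000:.0f} kW")  # 60 kW
    # CPU power is 30 kW in both cases; the extra nodes double the overhead.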
It's all tradeoffs. Desktop users are happy to take 20% more performance at 2x the power draw, and they get the fastest processors in existence (at single thread) as a result.
Data centres want whatever gets them the most compute per dollar spent: if a GPU costs $20k, you can bet they want it running at max power, but if it's a $1k CPU, then suddenly efficiency is more important.
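A quick lifetime-cost sketch of that point (the electricity price, depreciation window and power figures are assumptions for illustration only):

    # Rough TCO sketch: when the hardware is expensive, power is a small
    # slice of the total cost, so you run it flat out. Assumed numbers only.
    KWH_PRICE = 0.10      # assumed $/kWh
    HOURS = 4 * 365 * 24  # assumed 4-year depreciation window

    for name, hw_price, watts in [("datacenter GPU", 20_000, 600),
                                  ("server CPU", 1_000, 300)]:
        energy_cost = watts / 1000 * HOURS * KWH_PRICE
        share = energy_cost / (hw_price + energy_cost)
        print(f"{name:14s}: hw ${hw_price:,}, power ${energy_cost:,.0f} "
              f"({share:.0%} of lifetime cost)")
    # Roughly 10% of lifetime cost for the $20k GPU vs about half for the
    # $1k CPU, so efficiency matters far more for the cheap part.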