Hacker News

You're comparing apples and oranges to an extent - the M1/M2 deliver something like 5x the performance per watt of x86 designs.

Even the E-cores draw more power than the M1/M2 cores.




Not really. You can always cut power consumption by running them at lower clock speeds, among other tricks. As far as I'm aware, AMD EPYC continues to hold the performance/watt crown in practical server applications (servers also run at lower clocks to stay power-efficient; better to have many cores at a lower clock to win power-efficiency races).

The interesting thing to me is die-area. Because that's what determines how many cores you get per chip.
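The "many slow cores beat few fast cores" point falls out of the usual dynamic-power model. A rough sketch with made-up numbers (assuming the textbook P ~ C·V²·f relation, with voltage scaling roughly linearly with frequency in the DVFS range, so power ~ f³ while throughput ~ f):

```python
# Crude DVFS model: dynamic power ~ C * V^2 * f, and V scales
# roughly with f, so per-core power ~ f^3 while throughput ~ f.
# All numbers below are illustrative, not measurements.

def power_watts(freq_ghz, base_freq=2.0, base_power=10.0):
    """Per-core power under the crude P ~ f^3 assumption."""
    return base_power * (freq_ghz / base_freq) ** 3

def throughput(freq_ghz, cores):
    """Perfectly parallel workload: throughput ~ cores * freq."""
    return cores * freq_ghz

# Few fast cores vs many slow cores at the same total throughput.
configs = [(8, 4.0), (16, 2.0)]  # (cores, GHz)

for cores, f in configs:
    perf = throughput(f, cores)
    watts = cores * power_watts(f)
    print(f"{cores:2d} cores @ {f} GHz: perf={perf:5.1f}, "
          f"power={watts:6.1f} W, perf/W={perf / watts:.3f}")
```

Under this (very simplified) model, both configurations hit the same throughput but the 16-core/2 GHz setup does it at a quarter of the power.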


> Not really. You can always cut power consumption by running them at lower clock speeds

But that ruins the original argument... and no amount of downclocking lets Intel or AMD get close on the perf/power curve anyway. Besides, heat is the main bottleneck on data center density and operational cost.

This is the denialist argument I keep seeing, where people want to have their cake and eat it too (or perpetually pin their hopes on a Zen+1 that hasn't actually shipped).

If Intel could match M1/M2 power/perf they'd do it. But they can't. They can win the perf crown by absolutely burning power, or they can get massively trounced on perf to kinda get close on power. Zen is better but still has the same fundamental trade-off.

Instead of saying M1/M2 are nothing special, I'm super excited to see actual competition in the CPU space for the first time in a long time - competition that has proven a lot of conventional wisdom to be bunk.


> This is the denialist argument

??? My argument is, look at server power/performance. AMD EPYC takes the crown. 80-core ARM does not.

Maybe M1 will win, but we've at best got like 8 cores right now. As I stated earlier: M1 cores are huge. I'm not convinced they're better yet, but if Apple wants to make a 32-core M1 or M2 and compare it to an AMD EPYC 64x2 computer, that's when I'll start looking. We will see what they can do moving forward, but I don't expect that this M1 core can scale to a manycore size like EPYC (or Xeon, to a lesser extent).

Benchmarks are always crap, but they're the best we've got. Benchmarks, for now, show that Dual EPYC still is your best computer. At least for the server-scale that Supermicro operates at.

-------

ARM themselves are keen on this. ARM has V, N, and E cores moving forward because they're hedging their bets. This Supermicro system is probably an N2 or N1 system? (I haven't looked into it much).

No one else in the world is making cores as large as Apple's M1. It's an aberration, abnormally huge. If Apple fans find it useful then cool, but there are other workloads out there. I'm not fully convinced that such a large core as the M1's is the best design.


Actually, Intel beats M1/M2 in perf/watt benchmarks: https://www.cpubenchmark.net/power_performance.html


Very weird measure of efficiency. Perf/(Max TDP) doesn't really measure efficiency. Running a workload you care about and measuring power usage at the wall would be a MUCH better metric.

The Intel chip might well spend most of its time at Max TDP and throttling, while the M1 might only hit its maximum under unusual circumstances (like maxing out the fast cores, slow cores, matrix multiply (which is outside the core), memory controller, and iGPU simultaneously). There's not really any way to tell from the posted benchmark.

Wonder if the i7-1250 runs in any laptops without a fan.
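The difference between the two metrics is easy to see with a toy calculation (all numbers here are hypothetical, not real benchmark results):

```python
# Sketch of "perf per watt measured at the wall" vs perf/TDP.
# All numbers are made up for illustration, not real benchmarks.

def perf_per_watt(score, watt_samples):
    """Benchmark score divided by average measured wall power."""
    avg_watts = sum(watt_samples) / len(watt_samples)
    return score / avg_watts

# Hypothetical chip: rated TDP 28 W, but the workload only
# averaged 18 W at the wall, so perf/TDP understates efficiency.
score = 9000
tdp = 28.0
samples = [17.0, 19.0, 18.5, 17.5]  # fake power-meter readings (W)

print(f"perf/TDP:          {score / tdp:.0f} points/W")
print(f"perf/measured W:   {perf_per_watt(score, samples):.0f} points/W")
```

A chip that rarely touches its rated TDP looks far worse under perf/(Max TDP) than under measured wall power, and vice versa for one that sits pinned at the limit.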


Also, TDP is a spec point with a lot of BS behind it. Intel's TDP is widely mocked for being far under the chip's actual maximum power draw. I don't know how Apple works out its TDP, but it's almost certainly different from Intel's. Even with accurate numbers it's still not a great measure, because cores are at their least efficient when maxed out (as desktop CPUs usually are and laptop CPUs usually aren't), so it's a comparison that's unfair to Intel. You should really compare the real power consumption of an Intel CPU clocked to get performance equivalent to an M1. As it stands, the data is basically useless as a comparison.
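The iso-performance comparison described above can be sketched like this (hypothetical chips and scores, reusing the crude power ~ f³ DVFS assumption; real curves are messier):

```python
# Iso-performance comparison sketch: instead of comparing chips
# at their max TDP, downclock chip A until its score matches
# chip B, then compare power. Assumes power ~ f^3 and score ~ f,
# which is a crude approximation. All numbers are hypothetical.

def power_at_freq(f, f_max, p_max):
    """Power at frequency f under the crude P ~ f^3 scaling."""
    return p_max * (f / f_max) ** 3

def iso_perf_power(score_a_max, p_a_max, f_a_max, score_b):
    """Power chip A needs when clocked down to match score_b,
    assuming score scales linearly with frequency."""
    f_needed = f_a_max * (score_b / score_a_max)
    return power_at_freq(f_needed, f_a_max, p_a_max)

# Hypothetical: chip A scores 12000 at 5.0 GHz drawing 65 W;
# chip B scores 10000 at 15 W.
p = iso_perf_power(12000, 65.0, 5.0, 10000)
print(f"chip A needs ~{p:.1f} W to match chip B's score")
```

Even downclocked, the hypothetical chip A draws well over chip B's 15 W at equal performance, which is the kind of comparison the TDP-ratio benchmark can't show.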


But is it real competition at the μarch level or just a more advanced fabrication process being used? We have yet to see x86 chips on the same process as the Apple M1 or M2.


> ...die area. Because that's what determines how many cores you get per chip

The process node (3nm, ...) matters too, to get a sense of what fits in a given area.


> Even the E-cores draw more power than the M1/M2 cores.

To be clear, this is also a product of the fabrication process and node size. Intel 12th/13th gen are on Intel 7 [1], which is fairly old at this point, while M1s are fabricated on a 5nm process [2]. Smaller lithography size isn't the whole story, but it certainly does make a difference.

[1] https://www.anandtech.com/show/16823/intel-accelerated-offen...
[2] https://wccftech.com/apple-5nm-m1-chip-for-arm-macs-2x-perfo...


To be fair: Intel 10 years ago would always lean on its manufacturing advantage for why its chips were better than everyone else's.

Today, TSMC has taken the lead, and Apple pays the big bucks to be a prioritized customer above all others.



