
Faster, probably, but 2x+ faster like in the PowerPC → Intel transition is very unlikely. Which is why there's much less likely to be enough performance budget for effective emulation.



Presumably their "desktop-grade" A-whatever chip will have a higher TDP and clock speed than their mobile counterparts.


If they could just turn up the TDP and clock speed on their chips and get 2x the performance of Intel's best chips that easily, they would already own the entire desktop market.


And some of us, with actual CPU design backgrounds, have been saying that for a while now.

We've been asking why Apple doesn't already do it.


As someone without a CPU design background, would something like this scale pretty linearly with added power and thermal headroom? I assume there are limits that would have to be overcome, but what would an ARM chip under the conditions of an i7 look like?


TDP typically scales as somewhere between the cube and fourth power of the clock speed if you're pushing the envelope (in the sense of running at frequencies where further frequency increase also needs a voltage increase). So having 10x the thermal envelope means you can probably clock about twice as fast, all else being equal.
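A quick numerical sketch of the rule of thumb above (the exponents are the assumption here, not measured data): if power grows as roughly the cube to fourth power of clock speed once voltage has to rise with frequency, then a 10x power budget buys only about 2x clock.

```python
# Sketch of the dynamic-power scaling rule described above: near the
# edge of the envelope, raising frequency also requires raising
# voltage, so power grows roughly as f^3 to f^4 (an assumed model).

def max_frequency_ratio(power_ratio: float, exponent: float) -> float:
    """Clock headroom gained from a given power-budget increase,
    assuming P is proportional to f**exponent."""
    return power_ratio ** (1.0 / exponent)

for exp in (3.0, 3.5, 4.0):
    ratio = max_frequency_ratio(10.0, exp)
    print(f"P ~ f^{exp}: 10x power budget -> {ratio:.2f}x clock speed")
```

For exponents between 3 and 4, a 10x thermal envelope yields roughly 1.8x to 2.2x clock, matching the "about twice as fast" estimate.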


It's more of a logarithmic scale.

The same architecture can generally scale to 10x over a few process generations.


This does not hold at 10nm and below.


Seems that it’s more of a logistical and network-effect issue in getting everyone to support it, rather than a technical issue.

Possibly a patent issue also


My only question is if they actually use ARM for their next-generation architecture, or something completely new...


Computer architectures routinely see 3x performance jumps across different power budgets. This rule has held over decades.

Clock speeds alone can probably increase by 30%. Caches and internal datapaths can double or more. Then you can start to add in more execution units, more expensive branch prediction, or even new, more power-hungry instructions.

A 4 Watt Intel Pentium 4410Y Kaby Lake for mobile devices gets about 1800 on Geekbench, while a 115 Watt Intel Core i7-7700K Kaby Lake for desktops gets 5600.
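A back-of-envelope check on those quoted figures (the TDP and Geekbench numbers are taken from the comment above, not independently verified): roughly 29x the power buys only about 3.1x the score, consistent with the ~3x-across-power-budgets rule of thumb.

```python
# Figures as quoted in the comment above (not independently verified).
pentium_4410y = {"tdp_w": 4.0, "geekbench": 1800.0}   # mobile Kaby Lake
core_i7_7700k = {"tdp_w": 115.0, "geekbench": 5600.0} # desktop Kaby Lake

power_ratio = core_i7_7700k["tdp_w"] / pentium_4410y["tdp_w"]
perf_ratio = core_i7_7700k["geekbench"] / pentium_4410y["geekbench"]

print(f"{power_ratio:.1f}x the power for {perf_ratio:.2f}x the score")
```

That gap is exactly why the "same architecture across power budgets" comparison is interesting: the last factor of performance is bought at wildly disproportionate power cost.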

I'm just going to say it: the Apple laptop CPU is going to get Geekbench score... above 9000!

And, yes, I do have a CPU design background.


Artificial benchmarks already do a very poor job of capturing performance. The Apple laptop CPU does not exist. If it did exist, it would likely suffer a substantial performance hit when forced to emulate x86 software. So why speculate on meaningless benchmark numbers for an imaginary CPU that will take a wholly unknown hit unless everyone rewrites everything?


Artificial benchmarks do a great job of capturing performance, since they're more controlled and eliminate unnecessary variables.

Once you understand this, then you can understand how CPU designers work to predict future performance. CPU designers use artificial testbenches.


You making up numbers doesn't appear to be a useful endeavor.


I suspect if Apple designs a desktop CPU, performant x86 emulation will be a key design criteria. I know very little about CPU design, but I imagine it would be possible to have hardware optimisations for x86 emulation just like we have today for video codecs.

Or even further they could bake a "rosetta" into the chip's microcode and have their CPU natively support the x86 instruction set along with ARM or whatever they come up with.


Which is the previous generation, and was sandbagged, as the 50% increase in core count for Coffee Lake shows.


If someone disagrees they should state why


Do you think they will use the same chip? In 2020?



