Arm Unveils Next-Gen Flagship Core: Cortex-X3 (wikichip.org)
143 points by rbanffy on June 29, 2022 | 129 comments



Every time CPU stuff is discussed, it is quickly hijacked into praising Apple.

Can we have a discussion about this CPU instead of talking about others? M1/M2 are not even the current top performing CPUs...


When I first loaded this article your comment was on top and had no comments. It's still on top and the comments are talking about M1/M2. You applied the Streisand Effect to yourself.


Probably because the M1/M2 rule the roost as far as consumer hardware goes. No consumer ARM chip comes close, and Intel needs 3-4x more power draw just to match Apple on performance.

The M1 is the stick by which all new ARM chips are going to be measured for a while.


> Intel needs 3-4x more power draw just to match Apple on performance.

The M1 Max is arguably price-competitive with the Intel 12900K, at half the performance and 1/6 the wattage (at full load).

The M1 Ultra is twice the price, closer in performance, for 1/3 the wattage.

https://youtu.be/LFQ3LkVF5sM?t=99
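Rough arithmetic those numbers imply (a sketch, using the approximate ratios quoted above rather than any measured data):

    # Perf-per-watt implied by the parent's rough numbers (illustrative only;
    # the 12900K is the baseline, "perf" and "watts" are the quoted full-load ratios).
    chips = {
        "12900K":   (1.0, 1.0),
        "M1 Max":   (0.5, 1.0 / 6),   # "half the performance, 1/6 wattage"
        "M1 Ultra": (0.9, 1.0 / 3),   # "closer in performance" -- 0.9 is an assumption
    }
    for name, (perf, watts) in chips.items():
        print(f"{name}: ~{perf / watts:.1f}x the 12900K's perf/watt")
    # Works out to roughly 3x for the M1 Max and ~2.7x for the M1 Ultra.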


I hate Apple for lots of things. But they have done exceptional work with M1/M2. Imagine that power efficiency on servers. Saving so much energy.


> Imagine that power efficiency on servers. Saving so much energy.

There's no magic in those chips. Power efficiency is a design parameter you may or may not optimize for, and ARM server chips dissipate more power because they have a lot more cores, more memory controllers, more cache, more IO, and, in general, a lot more stuff than it would make sense to put in a personal computer CPU. Computation for computation, they are not far from each other.

Intel's x86 ISA mandates that more work happen per unit of useful computation served - lots of memory write reordering, lots of instruction-alignment work (instructions can be anywhere from 1 to 15 bytes, IIRC) and other operations that do not translate into anything useful for the user beyond a higher IPC if your instruction stream is just right, but that also dissipate a lot of power.


Dropping the M2 into the same entry level Macbook Pro they sold last year gains you a couple of hours of battery life as well as a performance increase.

https://www.tomsguide.com/opinion/macbook-pro-2022-battery-l...

That's pretty impressive for staying on the same process node.


Truly amazing; most people (including me) thought that after the M1, perf gains would be marginal.


That battery life test is understating the advantage Apple has over x86, probably because it's idling too much. The gap in a more CPU-intensive web browsing test is more like 2x compared to competing x86 ultralights [1].

[1] https://www.notebookcheck.net/Apple-MacBook-Pro-13-2022-M2-L...


But is Apple ever going to allow their CPUs to be used for anything other than their own products? If it isn't even an option then it is mostly irrelevant. I can't get a Windows laptop with an Apple M2 so it isn't worth checking how efficient of a CPU it is compared to Intel and AMD.


Actually it’s not irrelevant. Lots of people choose between a Mac and Windows and the M chips up the competitive pressure on Intel and AMD to improve their offerings.


My workplace gave me a choice between an M1 Macbook or an Intel-based Lenovo. Even though I'd never used a Mac full-time before, it was a no-brainer and I picked the Macbook. It absolutely does matter.


This is the argument Apple fans used to make during the PowerPC G4 days – “it doesn’t matter that Intel makes faster CPUs, because I can’t buy a Mac with them, so the comparison is irrelevant and not worth looking at”. Except, over time, it did matter as sales bled off to Windows machines.

Yeah, the OS matters, but it isn’t the only thing that matters to most people (and with the web, the loss-of-applications cost of switching your OS is a lot lower than it used to be).


Apple has set a high benchmark. AMD, Intel, Qualcomm and Microsoft are under pressure and have plans for M1 competitors.

Lots of Windows users are migrating to Mac for performance reasons.


It is exceptional work, but the system-on-a-chip design is a form of compromise, especially with regard to the availability and pricing of RAM configurations. The current MacBooks ship with the same amount of RAM as my Android phone.


I feel mobile CPUs have been fast enough for the last 5-6 years. It's the battery tech that is lagging.

On the laptop/server side, Qualcomm needs to catch up to AMD/Apple.


Yes, they are so fast that I would be happy to use my phone as a computer if the industry didn't make it close to impossible


My gut tells me that if we could, there would be phones catching on fire all over the place.

Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.

I have to believe this is the reason, as otherwise someone would have done it already (successfully, that is) for such an obvious use case.


> My gut tells me that if we could, there would be phones catching on fire all over the place.

If that were an issue, you would already see it all around.

> Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.

I would rather compare it to laptops, and the processing power of a Snapdragon 8 is in the mid-to-upper segment of the average laptop available on the market.

ARM is also a different platform, with a much better performance-per-watt ratio than a regular PC. There is throttling, but when it kicks in it doesn't reduce your performance as significantly.

> I have to believe this is the reason, as otherwise someone would have done it already (successfully, that is) for such an obvious use case.

Believe it or not, the reasons it hasn't been done so far go well beyond technical ones, and we could have had computer-capable phones a long time ago.


Try using ffmpeg on Termux and on a regular laptop. Beyond some slight timing differences, my tests on a random file gave me similar results, which is amazing.
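A minimal sketch of that kind of comparison, assuming ffmpeg is on the PATH and "input.mp4" is whatever test file you have lying around (the flags are just a generic x264 transcode, not the exact test above):

    import subprocess, time

    # Encode to the null muxer so we measure pure encode time, not disk I/O.
    cmd = ["ffmpeg", "-y", "-i", "input.mp4",
           "-c:v", "libx264", "-preset", "medium",
           "-f", "null", "-"]

    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"encode took {time.perf_counter() - start:.1f}s")

Run the same script under Termux and on the laptop and compare wall-clock times.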


Running a simple gradle build was 10 to 20 times slower.


Can you be specific? What was the phone, and what was the hardware on the desktop that you're comparing here?



Speaking of battery tech and mobile: I wish I could get "today's phones" but with a "thick" option, i.e. add 2-3mm of thickness for greater battery capacity.


Apple has made the last few generations of iPhones thicker to do exactly that.


How much do you want? Motorola has a couple of 6000 mAh battery options, and I think even Samsung has a few 5500 mAh models.


After my experiences with Android I will never ever buy another device with a Qualcomm SoC again.


After my experiences with Android, it'll be really difficult to make me buy an Android phone.

Compared to iOS, everything is slightly clunkier.


I'll take "slightly clunkier" before "extremely locked down and limited in unthinkable ways" any day, and not just for Android, but in general.


I had a totally different experience. The one I bought with an Exynos was a battery hog, and MediaTek ones generally have bad performance.


Could you explain why?


I'm also curious: what were your experiences with Android that were affected by Qualcomm?


Maybe they had an 810 crap out on them


IMHO that's because Apple was the first to bother optimizing an ARM CPU for performance instead of battery life.

Performance per watt on ARM has exceeded x86 for a long time, but most manufacturers used that advantage to build mobile CPUs instead of cranking up the power to see if it could catch up to high-performance x86.


I think the reason is that Qualcomm needs to sell to phone makers, so they can't make their SoCs very expensive.

On the other hand, Apple can make an expensive SoC without worrying about sales.


Apple has a big advantage over Microsoft in terms of building PCs.

Apple can decide that it is switching from Intel to ARM and not looking back, so it not only incurs the costs of investing in ARM but will ultimately realize savings from not having to support Intel.

There is no real unity behind Microsoft, Dell, Lenovo and all the other vendors, plus the motto of Windows users is "who moved my cheese?" You don't use Windows because Windows is the best operating system; you use Windows because it has the most software, and that software is written for and optimized for the x86 platform. Microsoft is always floating ARM-based systems as a hobby, but it cannot commit to going "all in", so it has to support Intel as the primary processor for the foreseeable future. The ARM transition costs Microsoft but never saves them anything.

On top of that Microsoft is dependent on vendors like NVIDIA and AMD to supply drivers for graphics cards and all the other parts for their machines and getting them all to supply good drivers for ARM is yet another big ask. Apple can say "here is the supported hardware" and not have to fight with vendors who have the power to torpedo their ambitions.


At minimum, Microsoft could try for real. In 2019 Samsung announced the Galaxy Book S with an ARM processor. It looked like a nice compact laptop. I would have been interested in it, but it was basically never available and limited to Windows for ARM, which isn't even officially sold by Microsoft; you can run it fine on the new Macs in a VM, but only as part of the beta test access. Even if Microsoft cannot force the whole market to ARM - and might not even desire to do so - at least they could treat Windows 11 for ARM as a full product.


If Microsoft wanted to really try an ARM push they could do worse than releasing a Surface laptop with amazing out-of-the-box Linux support; it might not sell a ton to Windows users, but they could get something moving.


Thank you for a rather enlightening perspective. That explains a lot of big tech behaviors around the x86/ARM frontier that I had failed to grasp.

I find it somewhat ironic that, in a few CEOs' desire to "Embrace… Extinguish" Linux in favor of Windows, Microsoft eventually created a world where Windows is a sink on their resources that they can't axe: though no longer a money-maker in its own right, Windows remains the basis for pretty much all their money-makers. In becoming a "software-first" company, they pretty much followed, if not provoked, the very paradigm that makes Windows a non-monetizable entity next to as-good-as-embedded macOS and ever-free Linux.

Linux, incidentally, is also THE basis for money-makers worldwide, and the OSS model means nobody using it today had to fork out even 1% of the real cumulative cost of that kernel + ecosystem since the 1990s. How's that for a win: Linux 1, Microsoft 0. A win the winner did not even seek, busy doing its own thing, while the other pretty much dug its own commercial grave out of 'relevance debt' (not sure how to word what Windows has become)…

TL;DR: it's like this big item taking all your inventory yet that you can't ditch ever because it's required to do all your quests. In games, you wish devs forgot about that crap and just made the inventory that much smaller. In real life, Microsoft can't kill Windows without closing business overnight. sad_pikachu_face.jpg

[Note that IMHO, MS is incredibly strong nowadays, so they'll find a way, and this is a fantastic learning experience for the industry.]


They're the top performing Arm CPUs.


Maybe? There exist some extreme-performance ARM CPUs like A64FX from Fujitsu.


They are the top performing ARM computers you can go to a store and buy. I'd love to have an HPE Ampere or a POWER tower under my desk, but it's not easy to get anything like those, and certainly not for a price competitive with generic x86.


Are we confusing things?

They're both RISC, but P9/P10/etc is based on IBM's own Power microarchitecture (possibly open sourced / open standard), I wasn't aware it had anything to do with Arm? I could be hopelessly wrong...

Or do you mean something different with "POWER"?


Owning an IBM Power system is a daydream of many nerds, apparently.

And POWER is open in that the ISA is open. Power10 is a bang-up-to-date CPU, so the internals are all closed.


There are other interesting ISAs that are not arm64. Sadly, the only ones that could still be viable for a desktop (more like deskside) are either server-grade ARM or IBM's POWER (and neither of them makes Mac Minis).


I would be shocked if an A64FX core could go toe to toe with a Firestorm on general code. It just doesn't make sense for Fujitsu's use case to spend nearly as many gates on general improvements versus more vector ALUs.


The A64FX is a very specific microarchitecture that's aimed at big core counts rather than being a single threaded monster.


Top performing CPUs in whose laptops?

My Alder Lake is faster than an M1; my Alder Lake needs 4 fans.


Discussing chip design in contrast to other chips is useful for discussion.


"Perhaps more critically, the new core offers exclusive support for only AArch64 – dropping 32-bit support altogether."

I don't understand why ARM wants to throw away AArch32/Thumb/Jazelle et al.

This really shouldn't be in a phone - a lot of Android apps bundle 32-bit binaries for various purposes, and they won't run on this CPU.

I can see why Fujitsu mandated AArch64, but my phone isn't going to be on the Top 500, and there were benefits to these old instruction sets.


Binary compatibility is no longer a huge concern. By now only very old programs will require arm32 support, and most current offerings on any app store will have been recompiled for the latest and greatest (because software writers also want to look better on newer CPUs). Plus, if the new hardware is that much faster, you may not even need to pack native blobs.


Of the Raspberry Pis, only the 3rd generation and later are capable of AArch64.

Because of this, very few Linux distros supporting the Pi are 64-bit (Oracle Linux is the only one that comes to mind).

There does indeed remain a large AArch32 community that cannot move, and they will never, ever cross this gulf.


I highly doubt that ARM is making decisions about the Cortex-X3 core design on the basis of the Raspberry Pi market.


A new chip not supporting arm32 does not prevent software from being built for the platform. While there are enough users, the distros will be maintained.


Emulation ought to suffice for support of programs compiled for 32 bits.


Barely any updated Android app has 32-bit binaries. Since August 2019, you can't even submit an app to the Play Store if there's no 64-bit version.
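For anyone curious: an APK is just a zip archive, so checking whether a given app still bundles 32-bit native code is easy. A quick sketch (the ABI directory names are the standard Android ones; "some_app.apk" is a placeholder):

    import zipfile

    # Native libraries live under lib/<abi>/ inside the APK;
    # armeabi-v7a is 32-bit Arm, arm64-v8a is 64-bit.
    with zipfile.ZipFile("some_app.apk") as apk:
        abis = {name.split("/")[1] for name in apk.namelist()
                if name.startswith("lib/") and name.count("/") >= 2}
    print("bundled ABIs:", abis or "none (no native code)")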


There are also benefits to cleaning up the silicon. E.g. fewer transistors per core can mean more cores, less power, or both.


If this were the case, then it would be better still to design a totally new instruction set.

Why cling to any remnant?


AArch64 is, effectively, a totally new instruction set. It was designed to allow for fully regular decoding, unlike AArch32 & Thumb, which is a big part of why getting those off of the chip is a big win – it's not just the decoders you clean up, you can now make your pipeline and speculation deeper because you can peek N instructions into the future without having to decode the N-1 instructions in front of it first to figure out where the Nth instruction lies. That's part of why you see AArch64-only designs like this and the A12+ cores with much larger reorder buffers.
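A toy illustration of that decode dependency (not how a real front-end works, just the dependency structure): with fixed 4-byte instructions, the address of the Nth instruction is pure arithmetic, while with a variable-length encoding you have to walk every earlier instruction to find it.

    # Fixed-width ISA (AArch64): the Nth instruction's address is just arithmetic,
    # so a wide front-end can fetch and decode many instructions in parallel.
    def nth_insn_addr_fixed(pc: int, n: int) -> int:
        return pc + 4 * n

    # Variable-length ISA (x86, Thumb): each instruction's length must be decoded
    # before the next one's start is known -- an inherently serial chain.
    def nth_insn_addr_variable(pc: int, n: int, length_of) -> int:
        addr = pc
        for _ in range(n):
            addr += length_of(addr)   # length_of stands in for the length decoder
        return addr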


You’re right but they are the only ones shipping in something that isn’t a turd.


> M1/M2 are not even the current top performing CPUs...

What are?


Cooler. Faster. This is the decade of low wattage CPU chips :)


Finally. CPUs have been fast enough for a while now, about time the industry starts pushing efficiency and battery life instead.


Not if you ask electron devs


Gotta admit that made me snort :-D


For them, memory volume and bandwidth are probably more important than single-thread performance.


Emacs - Electron Making Apps Constantly Swap


You will chuckle, but in 1989 it was "Eight Megabytes And Constantly Swapping" (just kidding Emacs, I love you, and what's 8 MiB today?) I've also seen Escape-Meta-Alt-Control-Shift (blew my mind).

Today Emacs is puny compared to The Browser.


Shiiiiet. :(


Missed opportunity to call it coretex


Core-Tex™


Just one letter off from introducing people searching for waterproof clothing to top-of-the-line CPU architecture.


ya ppl look for coffee and find programming languages or frameworks when they're looking for something else xD


8 High performance CoreTex cores and 2 low performance CoreMex cores


CoreChex - for that extra crunch!


Don't mess with Core-Tex.


Intel would file a “crash” lawsuit.


Pretty underwhelming considering the current delta between Apple’s A15 and the X2. I wonder if ARM is hampered by having to release a design that has to work on more than just the most cutting edge TSMC process.


Another way of saying this that I think is somewhat more accurate is that they're constrained by designing for an acceptable transistor budget, which sort of correlates to a particular node. IIRC, Apple's chips are _significantly_ larger in terms of transistor count than their competitors' chips. A design that uses an equivalent # of transistors - a very messy and inaccurate proxy for a requirement for perf parity - would likely price it out of the range of most phone manufacturers unless built on the newest process. Frankly, it may not be an acceptable cost even at the newest node - Apple seems willing to spend a bit more on their chips than most, and likely gets better prices in any case due to the size of their orders.


Another example might be this Cortex-X3 being the first in ARM's X line to drop the 32 bit ISA, where Apple had dropped that in the A11. That gave Apple several iterations to take advantage of whatever simplifications or reclaimed transistor budget came from dropping it.

Edit: Small miss above...Cortex-X2 dropped aarch32, though that's 2021 versus Apple doing it in 2017 with the A11, so the overall point is the same.


Cortex-X2 dropped A32 & T32.


Ah...yep, I was wrong there. Though still much later than the Apple/A11...2017 vs 2021.

What the article says on this topic:

"The process of eliminating 32-bit and optimizing for the 64-bit ISA exclusively has been a 2-step process. With the Cortex-X2, the underlying circuitry used for handling 32-bit architectural-related elements was removed, saving on transistors and simplifying some structures. With the new Cortex-X3, the design team took the time to start optimizing specifically for AArch64."


Hasn't aarch32 always been optional in armv8? I mean, technically you can emulate it in software.

There was simply no market for AArch64-only designs before, since a lot of customers wanted the backup of having AArch32 compatibility.


> A design that uses an equivalent # of transistors - a very messy and inaccurate proxy for a requirement for perf parity

At some point it got twisted into "double the speed", but Moore's Law as originally formulated was precisely about this - doubling transistor count.
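For reference, the usual formulation of that observation (the 1965 paper saw roughly yearly doubling; the 1975 revision settled on about every two years):

    N(t) \approx N_0 \cdot 2^{\,t/T}, \qquad T \approx 2 \text{ years}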


It wasn't really a matter of twisting. The phrase "Moore's Law" got coined in an interview at the same 1975 conference where Dennard presented on his scaling laws and has never referred just to the first periodic doubling Moore noted in his 1965 paper.


Back then, transistor switching speed was 1:1 correlated with size, and at the sizes we're talking about, size dominated (via the capacitance between gate & source). That decoupled in the late 90s / early 00s (if my recollection of the news is correct) and kept getting more decoupled. These days you can have a switching speed of 604 GHz, but that doesn't translate to overall clock speed (which is dominated by the critical-path latency of your pipeline), plus there are thermal issues.

Disclaimer: Not an EE / HW designer. Going off of memory from tech news + some university courses (info may be incorrect / stale).


It was more that leakage current (i.e. thermal issues) and velocity saturation suddenly became constraints on how much you could increase switching speed, along with the difficulty of high-frequency clock distribution. Your individual transistor switching speed is always going to be a lot faster than your clock speed, because a pipeline stage has to be made up of many layers of transistors to get anything useful done (including latching the signal at the clock edge!) and each transistor will typically drive multiple others.
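As a rough sketch of that last point (ignoring wire delay and clock skew), the clock rate is set by the slowest pipeline stage:

    f_{\max} \approx \frac{1}{N_{\text{levels}} \cdot t_{\text{gate}} + t_{\text{clk-to-q}} + t_{\text{setup}}}

so even with very fast individual transistors, the tens of gate delays per stage plus latch overhead are what bound the clock.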

I was seriously considering being a chip engineer for a while and did my master's thesis on adder design.


What made you change your mind?


Realizing that I'd be a small cog in a large team for most of my career down that path.


> I wonder if ARM is hampered by having to release a design that has to work on more than just the most cutting edge TSMC process.

They're constrained by what their customers want. Yes, it's nice to have the highest-performing chip, but there's plenty of space in the mobile market for less-than-best CPUs. Anyone buying Arm CPUs doesn't have the A15 as an option, so in a sense it's not a competitor. Someone selling premium phones is trying to out-compete Apple, but how much do raw performance stats matter to the consumer market?

I think there's also a tendency for everyone to want to do their own CPU now. Qualcomm and Samsung might want their own Arm microarchitecture to be the A15 killer and only buy the lower-performance cores from Arm, for instance.


> I think there's also a tendency for everyone to want to do their own CPU now.

I mean... I think it's the opposite. There was a time when Qualcomm, Samsung and Nvidia had their own microarchs which they used in most things. Increasingly they're back to ARM-supplied designs now.

I suspect this is partly because the ARM designs are adequate these days, but also because phone CPUs just aren't that competitive anymore; the iPhone is enormously faster than any other phone, and consumers do not care about this at all, because the other phones are, generally, Good Enough.


> I mean... I think it's the opposite. There was a time when Qualcomm, Samsung and Nvidia had their own microarchs which they used in most things. Increasingly they're back to ARM-supplied designs now.

Indeed, but I'm wondering if we'll see a swing back: if they're all selling SoCs with the same set of Arm IP in varying configurations, what's the differentiating factor? I suspect there's an aspect of 'Arm never builds what we want' (since Arm needs to satisfy multiple customers) that would drive them to build their own designs at the high end. That said, I've only got a hunch and can't recall concrete evidence for this.


I think there is a lot of room for competition in the accelerator space e.g. GPUs, ML inference engines, video encode/decode.

It seems to me like modern core design is expensive and high risk/low reward for most of these companies. Remember, Apple has a ton of R&D and the margins to put massive cores into just about everything they sell, up to and including putting a CPU faster than most desktops into their "budget" monitor. Fabbing costs keep going up, and the extra revenue from larger+faster CPUs would get gobbled up by TSMC and Samsung's fabs. Setting out to build something faster at the same size and R&D is what eats your profit.

For Qualcomm, Samsung, Google (for Pixel at least), and NVidia, ARMs balance of performance/die size/efficiency is aligned well with their market positions and the opportunity cost of rolling their own is just too high even in the face of thinning margins from competition.


This has been a popular topic for some time in the computer architecture space.

“ The End of the Road for General Purpose Processors & the Future of Computing - Prof. John L. Hennessy”

https://www.csail.mit.edu/news/end-road-general-purpose-proc...


Qualcomm acquired Nuvia whose only product is a custom ARM CPU microarchitecture. I wonder what they'll do with it.


> I think there's also a tendency for everyone to want to do their own CPU now.

I wonder how many of these are actually vanity projects? Apple spent billions for more than a decade on ARM development before using their own design as a central processor.

OTOH MediaTek et al. don't worry about any of that.


> Apple spent billions for more than a decade on ARM development before using their own design as a central processor.

For their computer platforms. Apple has been using ARM since the Newton.

Anyone who wants to differentiate from competition building x86 boxes will have to use a non-x86 design. Viable ones these days are mostly POWER for workstations and servers, and ARM for everything between mobile and midrange servers.

If you build a commodity computer based on the same components everyone else has access to, your margin will be squeezed. If your product has different characteristics than your direct competition and that difference is advantageous on a given segment, your competitors will have trouble moving in. That's why Apple M-series and IBM mainframe CPUs are not available to anyone without a full computer attached.


> For their computer platforms. Apple has been using ARM since the Newton.

The Newton used a regular ARM design (that was back when Apple owned a huge chunk of ARM). They used third-party ARM parts for the main processor of other devices for years (e.g. the Time Capsule or some video dongles).

That one sentence of mine you referred to made two points: 1 - it took a long time for Apple to get to the point where they could replace external designs as the core processor for a device (starting with the phone), and 2 - by that time they had quite a bit of experience using their own designs for subsidiary processors inside their devices. All of that is a pretty complex undertaking.

> If you build a commodity computer based on the same components everyone else has access to, your margin will be squeezed. If your product has different characteristics than your direct competition and that difference is advantageous on a given segment, your competitors will have trouble moving in.

Well, it’s all about where your locus of differentiation is; all else you want to commoditize. It’s not like Apple makes their own DRAM, and those suppliers are the ones that get their margins squeezed (especially by Apple — look at who their CEO is).

And the lack of a CPU design team is a competitive advantage for the MediaTeks of this world, even if it's not exclusive to them. Instead, the very lack of exclusivity in CPU design has a network-effect value for them. This is what killed ARM competitors like MIPS.


It took a long time for ARM to dominate the low-power space (MIPS and POWER were contenders for a long time). ARM started as a desktop processor and died on the desktop before there was a space for low-power 32-bit RISC devices.

In 2008 Apple acquired PA Semi and two years later they released an iPad powered by the A4, their first in-house design. Many acquisitions and designs later, their performance started to approach the performance of Intel chips used in their desktops and laptops. It was a very complex undertaking and they nailed it on every step.

All Apple computer ISA transitions were driven by external factors. Motorola gave up on the 68K family (a dead end for them) in favor of the 88K RISC processors (where they thought they had a chance), Motorola and IBM gave up on PPC and never delivered a low power G5 (they offered Apple Cell instead). Finally, Intel didn't give Apple the special treatment it needed to differentiate from other laptops and, since by then Apple had phone and tablet SoCs that were competitive with Intel parts and there was lots of commonality between macOS and iOS, that last choice was easy. Depending on a third party for the component more easily usable to set you apart from your competition is never great. If Apple moves their Macs and mobile devices away from ARM, it'll be the first time Apple decides that without a "catastrophic" event from outside.


For Intel, they also missed a lot of targets in terms of both delivery and thermals throughout the x86 Apple era.

As a company, Apple doesn't like being at the mercy of someone else if they can help it.


> For Intel, they also missed a lot of targets in terms of both delivery and thermals throughout the x86 Apple era.

True. Up until now, it was always betting the company on someone else's silicon roadmap. Now they control it completely.


A ~20% perf increase (clock speed + IPC) gives you a single-core GB5 score close to the A14. Considering the transistor and power budget, and the IP cost of the X3, I actually think this is pretty damn impressive.


Apple has an architectural license that allows them to build their own compatible processor [1] purely targeted to what they need.

[1] https://www.extremetech.com/computing/319968-google-microsof...


So do Applied Micro, Broadcom, Cavium, Huawei, Nvidia, AMD, Samsung, Marvell, Microsoft, Qualcomm, Intel, and Faraday.


Apple has something deeper than the architectural license the others listed have. They've gone further with changes than others are allowed to.


I have seen comments saying that but no sources.


It's widely been stated including by Arm that architectural license holders must still fully comply with the Arm Architecture Reference Manual. Apple's Arm cores do not.


In what way are they non-conforming?


Dozens of different ways. New instructions, new privilege modes, VHE mode in EL2 can't be turned off, etc.


Obviously some of those are intentional architectural deviations, but is there a good reason to believe the stuck HCR_EL2.E2H bit isn't just an errata?


None of those are exposed to applications or in open source, are they?


This is not true.


Which part? I could come up with a half dozen ways off the top of my head that the M1 cores don't comply with the Arm ARM. Not being able to turn off VHE in EL2 is one example. And Arm has stated widely that architectural license holders still have to comply with the Arm ARM.


You are assuming that no other company has this type of license or has made this type of architectural change. That is not true.

See monocasa's comment.


I am monocasa; that was my comment.

And the reason I'm assuming that no one else has the same 'special relationship' is that it's not a well-kept secret that Apple designed large parts of the base AArch64 with ARM and simply owns the IP on large parts of the ISA, similar to how at this point both AMD and Intel practically own x86 due to their different contributions over the years. Qualcomm et al. cannot say the same.


Huh, I meant MikusR :(

Anyway, I think my comment is still valid.

The "special relationship" is just fanboy fantasy. Qualcomm and Marvell have both, in the past, done ground-up designs that significantly differ from any ARM silicon.

If you don't believe me, feel free to provide actual proof of this special license.


It's been posted elsewhere: diverging from the ARM Architecture Reference Manual in ways that other chips have not. New instructions in spaces like ML and decompression, new privilege modes, features that are fixed on but specified to be optional at runtime, etc.

The new instructions in particular are damning, as ARM has said they are specifically disallowing custom instructions this go-around in order to combat architecture fragmentation.



If you read that link, it's specifically for ARMv8-M microcontroller class cores. ARMv8-A has no similar program.


I've yet to see QC add custom instructions to their cores.


AppliedMicro's architectural license is now held by Ampere.


Do Apple's chips keep 32-bit compat? That seems like a big win in the X3.


No; like the X3, they dropped 32-bit compatibility (Apple dropped it in the A11, ARM dropped it for the Cortex-X line in the X2).


Given that Apple's CPUs aren't available in cloud deployments, I couldn't care less other than when doing iDevices projects.


This is still a mobile chip. The real "flagship" cores should be in the Neoverse line.


Neoverse is a mobile-derived design built around putting massive amounts of aggregate throughput into a given die area. It's not a flagship architecture, because it's very conservative as far as execution goes. It just lumps stacks of cores on a single die and lets the OS/workload dictate the performance.

In comparison an X3 is a much faster chip clock-for-clock per watt on a per core basis because it's built for ultimate ST performance on a power budget. X3 has a full 6-way decoder vs 4 on Ares. So straight off the bat you can decode 1.5x as many instructions. X3 has a 320 entry ROB vs 128 on an N1 so you can keep over twice as many instructions in flight. An N1 has 3 ALU pipes, an ALU with MAC and DIV, and two FPU pipes. X3 on the other hand has 4 ALU, an ALU/MUL pipe, and an ALU/MAC/DIV pipe. On the FPU side it has two FMAC pipes and two FDIV pipes.

The X3 is a much larger and more powerful core than an N2, let alone an N1, and deserves to be called a flagship core in every sense of the word.


Waiting for a fast Linux SBC with actual software support (mainline).


They could call it Cortex-Vapour. I mean, pretty much every popular MCU is out of stock everywhere. Throwing new cores into a broken market will only exacerbate the problems - the production lines will be clogged by new cores while businesses still wait for the older cores.


Here lies a unique property of semiconductor fabrication: unlike other goods like grain, where you can just feed different raw materials into a factory and get your desired output, it is very difficult to reconfigure a fab workflow to use even a slightly different process, and downright impossible to switch between generations (5 nm to 22 nm) without near-complete replacement of the equipment.

You won't be able to directly increase production of what's actually really hard to get at the moment (microcontrollers and small functional devices using "ancient" process nodes) by decreasing production on cutting-edge processes.



