I feel like this is the exact opposite of what they should be doing.
They should start producing ARM chips. They could finally get a piece of the smartphone market, which would not cannibalize their x86 sales (which I assume is the reason why they have refused to use ARM since the sale of XScale).
A lot of people don't remember that Intel was a huge early ARM licensee. If you were building a smart mobile device 25 years ago, you were probably seriously considering the Intel StrongARM SoC. They then followed up on this with the more advanced ARM XScale family of SoCs, which you'd likely use if you wanted to build an ARM battery-powered smart device in the early 2000s. Background per Wikipedia:
> The StrongARM is a family of computer microprocessors developed by Digital Equipment Corporation and manufactured in the late 1990s which implemented the ARM v4 instruction set architecture. It was later acquired by Intel in 1997 from DEC's own Digital Semiconductor division as part of a settlement of a lawsuit between the two companies over patent infringement. Intel then continued to manufacture it before replacing it with the StrongARM-derived ARM-based follow-up architecture called XScale in the early 2000s.
However, after developing and manufacturing these for nine years, Intel exited this business by selling their ARM unit to Marvell. Intel was developing its own “low power” x86 chip, the Atom, and decided to put all its mobile eggs in that basket, which unfortunately was never as low power as comparable ARM designs. I suspect Intel also saw that the number of licensees in the ARM market was growing and competition along with it, their value-add wasn’t that great, and their margins were necessarily smaller due to the ARM licensing fees.
I remember that at some point they sold their perpetual license (was it with the Marvell deal?). Before that, IIRC, they didn't need to pay volume-based licensing to make their ARM variants.
The problem is that the barrier to entry to producing ARM processors isn't that high. ARM will allow anyone who is paying the licensing fee to become a licensee, and then you can build at TSMC or Globalfoundries or any other fab that will take you.
Intel can't compete in that market. The margins are too low: they're organized and staffed for a higher-margin business. They're used to the high operating margins of x86. Now that their competition (mainly AMD) has become competent in the past ten years and x86 is decreasing in importance, they're losing their main competitive advantage.
> The margins are too low: they're organized and staffed for a higher-margin business.
Intel has a massive fab capacity problem right now, in that their fabs aren't being utilized. Manufacturing a high-volume product is exactly what they need, and if that high-volume product is their own, all the better.
If they can't build decent x86 chips when they only have a single competitor, switching to an inferior architecture (in the sense that it would require massive investment to catch up with x86) with no entry barriers for new competitors would be one of the most absurd things they could ever do.
Are there any US fabricators of ARM chips? Any fabricators in any Western nation? If not, Intel could have tapped into the markets for defense and national security.
First, how does selling stock prevent them from doing that? Second, switching to Arm doesn't automatically give them a competitive microarchitecture for smartphones.
Making ARM would make very little sense. In x86 they have to differentiate against AMD. With ARM they need to differentiate and compete with the whole rest of the industry.
It's also symbolic - Intel's goal is to outcompete ARM (and AMD, Nvidia) in personal computers and the datacenter market. Divesting from Arm Holdings sends a signal that there is no upside for Intel coming from ARM's progression up the processor chain.
Too late for that. Smartphone SoCs are a low-margin business controlled by Qualcomm on the high end (or handled in-house by Apple, Google, and to some extent Samsung), and it would be absurd for Intel to try to compete for the low-end/budget market.
Yeah, they had the chance to dominate the ARM market with XScale but that ship has long sailed.
Also how is having a small, insignificant stake in ARM itself related to this anyway?
> But did you know there are many new core designs every year, and that very few of them are from ARM?
Really? It took years for Qualcomm to become competitive with ARM’s cores again. Besides them and Apple what else is there? (Ampere I guess, but they are still insignificant and in an entirely different segment)
This isn't too surprising. They need money to continue their transition to their new Royal CPU baseline, and frankly everyone and their dog is now in the ARM market; if they want to see growth, it's time to move on to something else.
It is when your products are trailing competitors on both fronts (AMD and Nvidia) and are facing potential class action lawsuits over existing products (13th and 14th gen CPUs).
The thing is that this is a tiny amount in the grand scheme of things. It's been a while since I looked at the financials, but IIRC it was something like $33b per year of revenue, and $26b per year for R&D. Selling off ARM is something like a day's revenue, so it won't have been done just to raise a bit of cash, more just a recognition that it has nothing to do with their core business right now or their future plans.
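As a rough sanity check of the "day's revenue" comparison, here's a quick sketch; the revenue figure is just my rough one from above, and the stake value is an assumed placeholder rather than something I looked up:

```c
#include <stdio.h>

int main(void) {
    double annual_revenue = 33e9;   /* rough figure from above */
    double stake_value    = 150e6;  /* assumed placeholder for the Arm stake */

    double daily_revenue = annual_revenue / 365.0;          /* roughly $90M/day */
    printf("daily revenue ~ $%.0fM, stake ~ %.1f days of revenue\n",
           daily_revenue / 1e6, stake_value / daily_revenue);
    return 0;
}
```

Either way, it's pocket change next to the R&D budget.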
From those approximate figures above (that I probably misremembered), personally I wouldn't worry too much about Intel short-term. They can get back to positive cash flow quite quickly by cutting R&D expenditure, as long as they get over the PR headache of the current-gen chips, and the new microcode seemingly reduces the peak voltage spikes on single- and two-thread loads. There's a lot of bad press right now, but the majority of business comes from data centres, which probably weren't cranking all the overclocking levers, and I'd guess any recalls will almost all come from the much smaller ultra-keen gamer segment.
Longer term, there's a risk if they cut the R&D spending too much and then find it hard to re-hire those engineers, but there's a good chance they'll get competitive again when they finally get their new process working, and that might be what they need to get their TDP levels competitive with AMD again. It's worth watching the Asianometry video about backside power delivery - if they get this right, there's a very good chance of future Intel chips being more power efficient than AMD's.
Every architecture that survives gets filled with extensions. That's not the issue.
Even the time period is not the issue per se, though it plays a part in it. The real dividing line is more like 1985 than 2000 though.
CISC is the issue (at least, at the ISA level). Processors of that era were made to be "friendly" to assembly programmers. There are redundant instructions with slightly different semantics, there are many addressing modes, there are not many registers, and the instruction encoding is an illogical variable-length mess hidden away by the assembler.
RISC on the other hand starts from the premise that humans don't write assembly much anymore, they write high-level languages (yes even C counts) which get translated into machine code by a compiler. So an ISA is a compiler target first and foremost. So it's better to spend silicon on more registers and more concurrent/parallel work than on addressing modes and convenience instructions.
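To make the "compiler target" point concrete, here's a minimal C sketch (my own illustration, not tied to any particular chip): a loop with several values live at once. On an ISA with few architectural registers (like the 8 GPRs of 32-bit x86) the compiler may have to spill some of them to the stack, while with 16 (x86-64) or 31 (AArch64) it can usually keep everything in registers - the exact output obviously depends on the compiler and flags:

```c
#include <stddef.h>
#include <stdint.h>

/* Four independent accumulators plus the counter and two pointers are all
 * live across the loop body, so the number of architectural registers
 * directly affects whether the compiler has to spill to the stack.
 * (Tail elements when n isn't a multiple of 4 are ignored for brevity.) */
uint64_t dot4(const uint64_t *a, const uint64_t *b, size_t n) {
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i + 4 <= n; i += 4) {
        s0 += a[i + 0] * b[i + 0];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return s0 + s1 + s2 + s3;
}
```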
Obviously this battle has already been fought and won on a technical level by RISC; that's what the R in Arm originally stood for, and no modern CPU design even considers CISC. Heck, even x86 uses RISC-like micro-ops under the hood and the 64-bit version of it added a lot more registers. But the CISC instruction decoder is still there taking up some silicon.
The real question is: does that extra silicon actually matter that much in practice anymore? It's dwarfed by the other things a modern processor has, especially the cache. The answer seems to be contextual even today. An x86 server, desktop, or gaming console running at or near full load on CPU-intensive tasks still outperforms Arm (though by less and less each year), even though Arm handily takes the crown at lower power levels. Is that due to ISA alone or other factors too?
I thought we were in the context of the competition between x86 and Arm but sure, what is a non-last century ISA then? RISC-V, a very conservative rehash of an architecture from the 1980s? Realistically there are no other commercially viable contenders, sadly.
If you want to criticize x86 then you may want to support the assertion. Maybe something along the lines of bloated legacy features and horrible power requirements. However, in my opinion it's perfectly fine for a desktop computer. Just not so great for phones, tablets and portable computers.
I'm pretty sure that x86 could be more power efficient than it is now. But I have rarely run into an implementation of x86 that wasn't pretty dang power hungry. Probably for the explicit reason that it isn't a huge priority for the stuff they run it on.
I think the lowest power ones are Atom and Celeron. All the other low wattage CPUs are RISC based. I think AMD is also coming up with something similar.
Nearly all x86 CPU architectures originate from desktop/server CPUs which had much higher thermal and power limits, but nearly all ARM CPU architectures originate from embedded CPUs where there are strict thermal and power limits.
It's not as if there's something fundamental about x86 that makes it use more power. It used to be the case that x86 CPUs had many more transistors (and thus more power usage and heat) due to the need to implement more instructions, but I don't believe that's true anymore. I think the situation right now is that it's easier to scale up an architecture than it is to scale one down.
If you look at something like Project Denver, it started as an x86-compatible CPU that would have had ARM-like power usage.
> I think the lowest power ones are Atom and Celeron. All the other low wattage CPUs are RISC based. I think AMD is also coming up with something similar.
AMD had Bobcat and later Puma; IIRC there were some <5W parts in both of those families. I still wonder -why- they never did a dual-channel version of those parts for desktop; my guess is it would have made Bulldozer look extra bad somehow.
>...horrible power requirements. ... perfectly fine for a desktop computer. Just not so great for phones, tablets and portable computers.
Except no one buys desktop computers any more: everyone's using phones, tablets, and laptop computers. There are also servers, but even there power efficiency is important; reduced power requirements for a datacenter would save a lot of money, not just in direct electricity consumption by the servers, but also from reduced cooling needs.
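To put rough numbers on that (all of these are assumptions I'm making up for illustration, not measured figures): shaving even a modest amount of average server power adds up once you include the cooling overhead captured by PUE:

```c
#include <stdio.h>

int main(void) {
    double watts_saved_per_server = 50.0;     /* assumed average reduction */
    double hours_per_year         = 24.0 * 365.0;
    double price_per_kwh          = 0.10;     /* assumed electricity price, USD */
    double pue                    = 1.4;      /* facility overhead incl. cooling */
    double servers                = 100000.0; /* assumed fleet size */

    double kwh_saved_it  = watts_saved_per_server / 1000.0 * hours_per_year * servers;
    double kwh_saved_all = kwh_saved_it * pue;   /* servers plus cooling/overhead */
    double dollars_saved = kwh_saved_all * price_per_kwh;

    printf("~%.0f GWh/year, ~$%.1fM/year\n", kwh_saved_all / 1e6, dollars_saved / 1e6);
    return 0;
}
```

Under those assumptions it's on the order of $6M a year for a 100k-server fleet, which is why the hyperscalers care so much about perf/watt.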
Ah, that is why Sony and Microsoft are now eyeing PC versions of console games, because no one ever buys gaming rigs any longer, aka desktop computers.
Also, what do you think most contributors to LLVM, GCC, CUDA, VFX, etc. use as a daily driver?
I don't think those numbers are correct at all: they appear to be grouping laptops in with "personal computers". We're talking about desktop computers here.
Yeah, and not only that, many of those "desktops" are probably "workstations". We had heavy compute requirements at my last job so we were assigned workstations with dual Xeon CPUs, not standard Intel desktop chips.
Outside of a couple unusual places like that, laptops have been the standard for workplaces over the last couple decades for me.
Before that, laptops were not quite limited to managers, but they did signify higher status (if not necessarily pay). Today it's iPads that signify status.
I was shocked when I learned that the majority of the Meteor Lake CPU is made by TSMC, and that for the next-gen Lunar Lake due in September EVERYTHING except the base tile (which is just a passive 22nm die) will be made by TSMC.
Wow. Then, they should sell off the fabs, just throwing in a commitment to use them for the older products as a sweetener. If they are using TSMC for the latest gen, the advantage of being an integrated company is gone; and both parts would be better off being run purely for their own market.
The US wants the Intel fabs to be a going concern for strategic reasons, but even for them it would be better to put in the investment to make it so than to have them wither away inside Intel.
I definitely want my tax dollars (CHIPS monies) clawed back from Intel, since those recently laid off include R&D staff, right when Intel needs all the R&D brains and hands they can get.
Intel just keeps going farther down the drain, much like another American mainstay, Boeing. At least Intel isn't killing hundreds of people in fiery crashes, or having pieces of their CPUs falling off during use.
Judging by some of the CPU issues coming out of Intel these days, the fiery crashes are not far off.
I remember learning about CPU design and manufacture as a kid and being absolutely astounded that any of this shit could be made to work in a practical way. In my eyes Intel was immaculate perfection.
Bought at the time of Arm’s float to show confidence in Arm and that Intel is a key partner - which it wants to be in the shape of Intel Foundry.
Changes nothing material in the relationship. More of a tidying-up exercise with a small profit than anything else.
Edit: This is a much more interesting story on the development of the Softbank / Arm / Intel relationship.
SoftBank scraps AI chips tie-up plan with Intel.
https://news.ycombinator.com/item?id=41265804