The Intel Split (stratechery.com)
322 points by feross on Jan 18, 2022 | 179 comments



In short, Gelsinger and his team are configuring the company so Intel's internal manufacturing operations must earn Intel's business, in competition against every other fab-as-a-service provider in the world, including TSMC.

The risk is obvious: The company's internal manufacturing operations can survive only if they figure out how to outperform competitors. The upside is that all other parts of Intel stand a good chance of surviving and thriving -- but that remains to be seen.

Only one thing is certain: Gelsinger and his team have a lot of cojones.


This means that Intel manufacturing is used where it makes sense and isn't used where it doesn't.

I can see these general outcomes:

* Intel manufacturing soon becomes better than other manufacturers in all aspects again, as it was for a long time. All Intel business goes to Intel manufacturing and things return to what they were.

* Intel manufacturing is better in some aspects and continues to outcompete other manufacturers in those areas. Most Intel business goes to Intel manufacturing but some special business goes to other manufacturers. Some other companies use Intel manufacturing where it has an advantage. Everybody wins.

* Intel manufacturing cannot keep up with the market leader. Most Intel business goes to the leading fab. Intel manufacturing cannot catch up as they don't earn enough money to do so. They start cutting costs and compete downmarket with lower-tier fabs. In the long run Intel becomes fabless.


Another possibility is that the market is changing. The age old Xeon/Core/“binned low end” model may not be viable anymore as hyperscale cloud providers start to rule and people don’t care about CPUs as a differentiator. Add in COVID chaos and the answers are unclear.

Hell I just replaced thousands of older desktops with slower laptop chips — nobody noticed. Frankly, I only bought new devices because of Microsoft’s bullshit Windows 11 requirements.

The guaranteed cash flow train is slowing down, which makes setting billions of dollars on fire with high risk fab processes a big deal.

Intel had to adapt like this in the mid-'80s when memory became a commodity. We’ll see if they make it.


> Hell I just replaced thousands of older desktops with slower laptop chips — nobody noticed.

This is certainly plausible if these were mostly Office users but how confident are you that you'd know if they did notice? Most of the large IT departments I've interacted with would post glowing self-evaluations while the users had plenty of complaints and/or were doing their real work on personal devices. Depending on how the business is structured and the relationships, this can be hard to measure without a fair amount of effort — I've heard people say they weren't going to report something because they didn't think it was worth the effort or they were sure that the IT people would notice and fix it.


The shift to the cloud has turned computers into glorified terminals again. For terminals CPU power is a lot less important than for a personal computer.


Have to disagree here. Cloud apps (in the form of web apps) demand more CPU performance, not less! Apart from specialized apps that really do a lot of computing (rendering, modelling, model training) and can easily be done in the cloud, most apps are slower (or functionally constrained) once they're moved to the cloud and used as web apps. Think of the Office 365 suite - every app in the suite is either slower as a web app, or functionally constrained.

For me personally, web apps only got fast when I got a computer with an M1 processor.


I understand the argument but I don't think that's enough – even on a recent Intel laptop CPU, you'll be hearing the fans if you have more than one of Outlook, Teams, etc. active or any enterprise security software even if most of the data is being stored on the cloud. Something like iOS or ChromeOS is a lot closer to delivering on that idea in my experience.


> I understand the argument but I don't think that's enough – even on a recent Intel laptop CPU, you'll be hearing the fans if you have more than one of Outlook, Teams, etc. active or any enterprise security software even if most of the data is being stored on the cloud.

Google Docs runs well on a Chromebook. As does MS Office online. The modern desktop has really turned into a terminal - it's just using HTML & JS and works... pretty well.


Yeah, that's why I said Office users — the core experience isn't bad at all any more. It's usually other apps which are noticeably pushing CPU limits. A big one recently has been video conferencing, since a lot of schools didn't use to need that at all, much less for large classes held entirely online.


Video conferencing works really well on a $200 Chromebook or $100 tablet.


Indeed, WordPerfect, Word, Excel, etc. all worked just fine on a Pentium chip of the day. Yes, there was always something faster, but really… it was just fine.


We invested in memory. Shitty Electron apps like Teams chug RAM!


> Hell I just replaced thousands of older desktops with slower laptop chips — nobody noticed. Frankly, I only bought new devices because of Microsoft’s bullshit Windows 11 requirements.

Big enterprises think like that, yeah, because they can't get their crappy Dell Inspirons to new employees anymore.

Personally I just had to get a Ryzen laptop replacement temporarily because Lenovo took too long to repair my ThinkPad. And the result was delightful. Better sound, fantastic compile times, a beautiful screen for half the price of my ThinkPad. Next time I'll get a cheap Ryzen, and if it breaks just buy a replacement device instead of relying on a pro Intel enterprise device with crappy overpriced enterprise service.


Agreed - but that’s a problem for Intel, as they need enterprises to lap up those shitty Dell laptops to keep the throughput going!

NYC Public Schools buy like 200,000 computers a year. A decade ago, it was probably 50% higher due to faster refresh cycles. There are a lot more big dumb enterprise endpoints than you might think. When I sold computers in college to commercial customers, the average refresh was 30-40 months. Now it’s closer to 60.


Yes, the market is changing -- on all fronts for Intel. Many people talk about this being a tech problem, but really it's more like the logic at companies that hold a near-monopoly position: why invest tons of R&D dollars when you can just increment more slowly and make the same money? So they end up with a loss of tech leadership when they easily could have kept it.

The biggest mistake Intel appears to have made was their failure to bring up new EUV processes at the same rate as TSMC -- probably they were hoping EUV would fail, but it did not. Now TSMC has done a couple of nodes with EUV, they have customers spending tons of money right now like Apple, AMD, Nvidia, Qualcomm, etc., and TSMC is flush with cash with a strong runway in front of them -- it's like TSMC is already in execution mode while Intel is still trying to get things working.

Intel basically thought, hey, let's see how long we can milk our big cloud customers with $2000 processors and only minor improvements. Those customers realized they could hire a chip team themselves and build their own processor for the amount of money they were paying Intel each year, so they did that. So Intel got attacked on the enterprise side by their own customers making their own chips, like Amazon and Google; on the consumer side by AMD just throwing lots of cores at the problem -- and by Apple dumping x86 in their laptops, with others soon to follow -- and they essentially have nothing in the mobile space.

So they went from being dominant 10 years ago to being in 2nd place in multiple of their core businesses. It's a clear failure of leadership to realize they need to maintain technical supremacy, otherwise they will not be able to charge the prices they want to charge. Now their competitors are shipping chips with 64 cores, and Intel's plan to release 16 cores, then 20 cores, then 24 cores, etc. uber-slowly over an 8-year period, or whatever their plan was, is blown up.


I'm still bullish on $AMD to break much further into the enterprise space. So many AWS instances still run on Xeons. That's slowly changing, especially now with stuff like Graviton, but I think AMD can go much higher just on enterprise.


The problem that the industry faces is that the economics and reliability of these chips have been undermined on the recent nodes.

Sophie Wilson has said that cost-per-transistor is rising since we shrank below 22nm, and that a 7nm die cannot run more than half of its transistors at the same time without melting.

Sophie addresses the cost rise at 22:00.

https://m.youtube.com/watch?v=zX4ZNfvw1cw
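
The "can't run more than half the transistors" point is the dark silicon argument. A toy model of it, with assumed scaling factors rather than measurements, shows how the runnable fraction shrinks once density doubles per node but energy per switch doesn't halve:

    # Toy dark-silicon model; the per-node factors below are assumptions.
    density_gain = 2.0      # assumed: ~2x more transistors per node
    energy_ratio = 0.65     # assumed: energy per switch only drops ~35%
    active_fraction = 1.0   # fraction runnable at a fixed power budget
    for node in ["22nm", "14nm", "10nm", "7nm"]:
        print(node, round(active_fraction, 2))
        active_fraction *= (1 / energy_ratio) / density_gain
    # prints roughly 1.0, 0.77, 0.59, 0.46, i.e. about half by 7nm

The real per-node factors are of course debatable; this just shows the shape of the argument.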

Heat is more of a problem because finfet can't dissipate as well as planar, and reliability is declining at smaller nodes.

"With a planar device, you do not have to bother about self-heating. There are a lot of ways to dissipate heat with a planar device, but with finFETs that is not the case. The heat gets trapped and there are few chances for that heat to get dissipated."

https://semiengineering.com/chip-aging-becomes-design-proble...

https://news.ycombinator.com/item?id=29889951

If I needed a 20-year service life, I would not choose 7nm.

Edit: I did not realize that ARM1 had fewer transistors (~25k) than the 8086 (29k), and less than a tenth as many as the 80386 (275k). Intel should have bought ARM in the 80s; instead Olivetti got them.


> Sophie Wilson has said that cost-per-transistor is rising since we shrank below 22nm

The data has been saying otherwise. 5nm is the only node that increased $/transistor beyond its previous node (7nm), and that's at a time when Apple paid out the ass for a near-monopoly on the node (aside from competitors' test shuttle runs), so it isn't a sign of a fundamental increase in cost.

https://cdn.mos.cms.futurecdn.net/Unwdy4CoCC6A6Gn4JE38Hc-970...
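
Just to make the units concrete, $/transistor is basically wafer price divided by good transistors per wafer. A rough sketch, with made-up numbers since real wafer prices, densities, and yields aren't public:

    import math

    # Illustrative $/transistor arithmetic; every input here is a guess.
    def cost_per_transistor(wafer_cost_usd, die_area_mm2, mtr_per_mm2, yield_frac):
        wafer_area = math.pi * (300 / 2) ** 2                  # 300 mm wafer, mm^2
        good_dies = (wafer_area / die_area_mm2) * yield_frac   # ignores edge loss
        transistors_per_die = mtr_per_mm2 * 1e6 * die_area_mm2
        return wafer_cost_usd / (good_dies * transistors_per_die)

    # e.g. a hypothetical 7nm-class run: $9,000 wafer, 100 mm^2 die,
    # 90 MTr/mm^2, 80% yield -> roughly $1.8 per billion transistors
    print(cost_per_transistor(9000, 100, 90, 0.8))

The argument is really about how the wafer price and yield inputs move over a node's life.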

> and that a 7nm die cannot run more than half of its transistors at the same time without melting.

That's true for trying to run multi-GHz designs in classic plastic-package BGAs like most cell phone chips, but that's been true for a while, which is why flip-chip packages are a thing. Actually having a heat spreader connected to the die goes a long way.

Wilson's comments aren't incorrect from a certain point of view, but tend to get extrapolated out of her niche to the greater industry in a way that pushes the statements into inaccuracy.


Thanks, the cost per wafer does look convincing. I wonder where Wilson's figures came from.


I think what's happening is that initially a node is more expensive, because of consolidation of the market and supply/demand amongst all of the fabless customers until capacity fully comes up. Once we're back into steady state we see the traditional economics of $/transistor falling.

That sort of coincides with TSMC having competitive, close-to-leading-edge nodes (so the 28nm timeframe), which would line up with the rumor. The information simply hasn't been updated over the lifetime of the node. Prior to that, the cost of a node was pretty much fixed over any timeframe someone like ARM cared about; now there are a lot more economic effects from the increased buyer competition that heavily change the final cost over time.


I believe her talk was from late 2017, so 7nm would have been expensive.

At the same time, AFAIK Intel was doing quite well at 14nm finfet even then (likely better than any other foundry?), but that production capacity was not available to ARM, so I guess it didn't count.


Yeah exactly. I want to be clear, I've got a tremendous amount of respect for Sophie Wilson; she's a much better engineer than I am and more connected to where the industry is going. Her statements simply require a lot more caveats than they are normally given. It's much more about the changing position of ARM and TSMC in the marketplace than anything else.

> At the same time, AFAIK Intel was doing quite well at 14nm finfet even then (likely better than any other foundry?), but that production capacity was not available to ARM, so I guess it didn't count.

Yeah, and Intel was right in the middle of their biggest misstep. Intel 10nm was in risk production for Cannon Lake with awful yields and therefore a huge $/transistor. It got shipped anyway as one SKU in May of 2018 that didn't make financial sense (there are rumors that management bonuses were tied to that release), before being relegated to process improvements for years until just recently.

It would have been fair for her to extrapolate a trend there that actually ended up being more complex in hindsight.


Anecdotally, I recently had a CPU fail for the first time and it was a 7nm one. Sent it to AMD, they verified the failure and sent a new one back. Meanwhile I have had assorted 22nm/14nm processors around the house chugging along for years without any issues.


This problem is not specific to Intel. Many chips are not using the smallest node sizes and old fabs get a second life producing those chips instead of CPUs. That could soon be the future for Intel fabs.


That's where a fab goes to die. It needs to be fully capitalized at that point or else you don't have the money for R&D on the leading edge. Intel going this direction is the GloFo direction: no longer even attempting leading-edge node production in the US.


We're going to need stuff like processors with built-in microfluidic cooling layers. Why not have club sandwich construction with every other layer working on heat extraction, power, or both? I see a future with cube-shaped processors with hundreds of layers.


I think there is a pretty clear outcome: Intel will go fabless for its highest-end chips and continue as normal with everything else. They will have many customers (USG) who will require domestically manufactured chips. But they also need to compete with AMD, Apple, and friends, and this will allow them to fab out to TSMC while, for lack of a better term, saving face.


This won't work. Moving high-margin parts to TSMC will wither the fab resources Intel needs to keep up, even in the mid-range. Soon enough you start to struggle like GlobalFoundries.


The fabless scenario will not happen, ever.

One simple reason: like aeronautics, chip manufacturing is a critical industry for the US.

The US government will not let either one die.


> US government will not let either one die

This isn’t Intel’s shareholders’ problem. Split the businesses. If the U.S. wants to bail out Intel’s fabs, let them. (Though a better strategy would be a general subsidy for anyone building fabs in America, which should be a thing.)


We don't need small nodes for military/aero chips. It would suffice to keep some old fabs running to secure supply. The US needs fabs but they don't need Intel as a company.

Splitting out the fab business and letting it run into the red however could incentivize the government to save the fabs and the jobs with billions of bailout money while the other Intel continues to rake in money with TSMC chips.


The military doesn't just need chips for cruise missiles and radars. There has actually been a big push the last 20 years to move a lot of systems to COTS architectures. An Arleigh Burke destroyer uses Xeon powered blade servers to run its Aegis combat system not some weird proprietary chip from the 80s.

The level of computerization and networking is going up in the military so the needs will only increase. Intel's CONUS fabs are a national security concern.


Literally every sentence is out of touch and outright wrong. I don't even know where to start.

> We don't need small nodes for military/aero chips. It would suffice to keep some old fabs running to secure supply.

What? How do you think the military works, via smoke signals? They need computers, a LOT of computers, to operate day to day. And that's PCs for just boring clerk work. Not to mention the idea that old chips would somehow be good enough to keep up technologically, both militarily and economically.

> The US needs fabs but they don't need Intel as a company.

Oh yeah? They will just walk an engineer corps into a fab and have them press the big red "manufacture" button a few million times to make enough chips, right? Can't be more difficult than that...


What is the problem with an Intel design manufactured in a TSMC fab built on US soil? There will be fabs on US soil in the future but they might not be owned by Intel.


It probably depends on whether the chip is a commodity chip or something special

https://en.wikipedia.org/wiki/International_Traffic_in_Arms_...


Hmmm, I dunno... maybe China taking Taiwan and banning all export of electronics to the US, leaving the US with its pants down.


If China invades Taiwan, the rumor is that Taiwan's strategy is to scuttle everything at the last second so China isn't additionally incentivized to invade just to get the electronics manufacturing infrastructure.

So there won't be any electronics manufacturing to ban at all, but it leaves the US in the same place.


Increasingly we do. The military is very much embracing edge compute currently.


This assumes that US military has no need for servers, desktops, or mobile devices.


Which will be covered by fabs built on US soil which presumably are contractually obliged to manufacture chips for the military on demand. (And obliged to take the precautions to be able to do so.) Some of them can be (ex) Intel fabs, some TSMC or Samsung. I don't see any hard necessity for Intel to keep their fabs in house when there are other solutions.


Well, what if TSMC created an independent US based manufacturing presence that was DOD qualified? Or one of the other currently non-US manufacturers?


I am not certain, but I think they can’t. I think those critical functions cannot have foreign owners or officers if they want to sell to the government.


Does an Arizona fab of TSMC count as an American fab?


Agreed, except there are only 2 outcomes: #1 (win) or #3 (lose). The chips business is a winner-take-all market where if your product isn't the best (in efficiency, performance, whatever metric), then it can't command a price premium or the attention of OEMs. Also, the core business isn't consumer but the enterprise/datacenter market, where efficiency/performance is paramount.

If #1, then everything is rosy and Intel will regain its dominance from the days of Sandy Bridge. Its chips will be fabricated internally and they get to enjoy the profits from vertical integration.

If #3, then Intel will certainly spin off or sell off its manufacturing/fabrication business very similar to AMD more than a decade ago.

The caveat is that I don't think Intel's problem right now is simply a manufacturing issue. Sure, Alder Lake is competitive, but it's not superior to AMD's offerings, let alone measured against Apple's SoCs. Remember that unlike the last iteration (Rocket Lake), Alder Lake doesn't suffer from a mismatch between design and manufacturing cadence - it arrived as expected.


> The company's internal manufacturing operations can survive only if they figure out how to outperform competitors.

Is outperforming competitors necessary for Intel's survival? There are plenty of fabs in the world doing quite all right well behind TSMC, the vast majority of which can't even come close to Intel's current capabilities [1]. Even if Intel never succeeds with their existing process roadmap - which is not on pace to beat TSMC - they still possess some of the most advanced chip manufacturing processes in a world that's ever more dependent on chip manufacturing.

GlobalFoundries got out of the race on 14nm - a 14nm not as good as Intel's 14nm and far behind Intel's 10nm - and is still posting YoY revenue growth, despite losing much of the volume they were producing for AMD over the last few years.

In addition to that, I suppose that even if Intel merely succeeds in roughly keeping up with TSMC and Samsung (their current 10nm being on par with Samsung's and TSMC's 7nm would classify as "roughly keeping up", I think), there are American national security interests at play. Especially so if Intel's manufacturing capabilities are accessible to (American) third parties. No way the powers that be would let Intel's manufacturing plants go under.

It's a pretty bold strategy at face value, but I think it's actually a pretty straightforward choice and the risks aren't all that existential.

[1]: https://en.wikipedia.org/wiki/List_of_semiconductor_fabricat...


>No way the powers that be would let Intel's manufacturing plants go under.

Perhaps that's part of the calculus too; if they know that Uncle Sam will backstop any failures, are they not also making the same play as their competitors were in the 80s?

    The problem is that by the mid-1980s Japanese competitors were producing 
    more reliable memory at lower costs (allegedly) backed by unlimited funding 
    from the Japanese government, and Intel was struggling to compete…


In the short run (say, a time horizon under 10 years), I think you're right. In the long run, I think persistently failing to outperform the competition would lead to gradual irrelevance, unsustainable margins, and eventually a slow death of the manufacturing side. I mean, that's the path Intel's manufacturing operations were on before this recent change in strategy: the path to a slow death.


A slow death over 10+ years can happen whatever Intel does. Doesn't make this a particularly big cojones move either.

One needs quite a crystal ball to predict such slow deaths, especially when it's going to be a decade where chip production improvements are expected to slow down further and further. Finally, a slow death looks even more likely anyway if Intel doesn't change their ways.


As an ex-Intel employee it is interesting to see.

This seems sound long-term, though it may cause a significant shrink in the business over the next couple of years as lines of business that are not above water get cut.

From the point of view of the infinite game, it makes a lot of sense to diversify and then cut any branch of the business that is not profitable, before it drags down the entire company.

While I think it is difficult to predict results, one thing is sure -- this will force a lot of change in Intel and we would likely not recognise the company ten years from now.


>Only one thing is certain: Gelsinger and his team have a lot of cojones.

Trained and mentored by Andy Grove. Disciple of the old school "Only the paranoid survive". I expect nothing less.

I hope someday he does the next unthinkable: once their foundry can at least compete against Samsung, open up the x86 ISA. Or clean up the ISA and call it something like AE86. In the very long term, x86 is pretty much dead. By 2030 you would expect all hyperscalers to be on ARM, with x86 serving some minor legacy clients. You have PC manufacturers eager to compete with Apple and Chromebooks. So x86 could lose up to 50% of its current volume by 2030. Then a long tail of 10-15 years of decline.


AE86 would be a good architecture for running in the '90s.


>running in the '90s.

Is a new way I like to be

Is a new way to set me free

The naming is intended to show old things can be made good, and fast :)

( Nice to see somebody gets it )


> Intel's internal manufacturing operations must earn Intel's business, in competition against every other fab-as-a-service provider in the world, including TSMC.

I think this might be an interesting retrospective application of The Innovator's Solution's 'be willing to cannibalize revenue streams in order to avoid disruption.'


I don't see it as that much of a risk. One of the reasons Intel has so thoroughly dominated for so long was that they were at least at parity to slightly ahead on process nodes. If they don't get back there fast, they are in real trouble. Intel likes to command a price premium and you can't do that in second place.


I think some of the advantage comes from synergies of having microarchitecture development and process development under one roof. You can fine tune them together to get the best performance/power ratio. Even if both alone are just on par with the competition, together they are still ahead by a few iterations. Also they get out new products a bit faster.

The problem is that by switching to another fab they lose these advantages.


It is quite a bit more than parity to slightly ahead. From 130nm to 22nm the Intel process advantage was enormous. That the horrible P4 architecture was able to compete and be cost-effective is testament to that. Intel architects and designers never had to work very hard to beat the competition.

That is one of the reasons Intel was able to get away with their garbage in-house EDA tooling and processes. Intel was also able to get away with delays to fix their bugs (due to awful validation methodology).

Even if Intel's process catches up to TSMC, they would have to fix their design tooling, hire talented engineers, and get rid of the entrenched hacks who built their careers on mediocrity.


> Intel's internal manufacturing operations must earn Intel's business

This is putting lots of high-level employees' future earnings on the line in a far more direct manner. It will be interesting to see if they accept this challenge, or fight it in order to accept the slow decay that will still ensure at least a longer-term personal financial gain (i.e., instead of failing in 2 years, failing slowly over 10).


> In short, Gelsinger and his team are configuring the company so Intel's internal manufacturing operations must earn Intel's business, in competition against every other fab-as-a-service provider in the world, including TSMC.

This is a monumentally stupid idea and, even worse, we have seen it before. Every company that has done this in the past is gone.

I would say that every VLSI designer in Intel should be printing their resumes, but I'm pretty sure that the decent ones left long ago.

In addition, this completely throws away the advantage that having a fab brings at a time when it's finally important.

Fabless works great when you can push a synthesis button every 18 months and your chip doubles in speed. Smart design can't keep up with that.

However, once Moore's Law stops and you can't run every transistor without melting your chip, suddenly design matters again. You want to be able to do creative things with a bit of tweaking from your fab.


This was always the case. If anyone tells you otherwise, they do not know. In fact, every team at Intel is run ruthlessly as a P&L, not just manufacturing.


It seems like echoes of, and a less drastic version of, the AMD/GlobalFoundries split.

And then Global Foundries couldn't keep up, cancelled their 7nm work, and AMD has been sending more and more business to TSMC.


Why stop there, make various parts of manufacturing compete against each other. Get this thing going Sears-style, it'll be lots of fun for those of us on the outside.


decoupling


I don't think Intel's manufacturing problem is purely one of incentives that will be fixed by a split. That's a massive oversimplification.

The approach Intel is taking - outsourcing cutting-edge products to TSMC while continuing to invest in their fabs, making their lower-end stuff and other people's stuff like automotive chips in-house - is the best strategy to buy some time to advance their fabs while letting them earn money to support R&D investments.

It's a huge problem and nobody except TSMC has succeeded at it. Besides, there are years of lack of focus, incentives, and interest in specialized education and manufacturing processes that will take time for Intel to fix. Meanwhile they will be competitive in consumer markets by going to TSMC 3nm and continue to improve on the side by taking on outside fab orders. Seems reasonable to me.


Intel is already competitive on 10nm (Intel 7) with Alder Lake on desktop against AMD; I'm frankly impressed to see what Intel's designs on TSMC 3nm will do.


Only competitive if you don't look at energy usage: Intel relies on cranking up the power in order to get its chips to almost compete with AMD.


Not really: https://www.reddit.com/r/hardware/comments/qwn1j9/core_i9129...

The holistic efficiency comparison definitely favours Intel on desktop, as Ryzen has heavy idle power usage, due to its 12nm I/O die from GloFo.


Idle power draw on desktop isn't a major concern. When people talk about power draw in a desktop context it's really an indirect way to measure heat. Managing thermals at heavy load is the most important consideration. And if your chip manages to win benchmarks by creating massively more heat in the process, then that is worth noting.

In laptops, idle power draw and heat are far more important because they affect system endurance. In fact most gains in mobile device power management come from lowering idle consumption, not load consumption.


> idle power draw on desktop isn't a major concern

It certainly is if you're a major corporate entity deploying tens of thousands of desktops idling 24/7. The 30-40W IO die idle cost adds up.

If you're doing distributed rendering you're better off buying Ampere Arm CPUs anyways as they're more power efficient than any x86 offering.
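
To put a rough number on "adds up" (the fleet size, the 35 W delta, and the electricity price below are all assumptions, not measurements):

    # Back-of-the-envelope idle cost across a fleet; inputs are assumptions.
    extra_idle_watts = 35          # roughly the 30-40 W IO-die figure above
    machines = 10_000
    hours_per_year = 24 * 365
    usd_per_kwh = 0.12
    kwh = extra_idle_watts * machines * hours_per_year / 1000
    print(kwh, kwh * usd_per_kwh)  # ~3.07M kWh, ~$368k/year, before cooling

Small per-box numbers turn into real money at that scale, which is the point.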


That's just moving the goalposts. Businesses aren't buying loads of gaming desktops with K-SKU CPUs though. They are getting i5s or Ryzen 5s that do have low idle power usage. We were talking about performance chips.


Alder Lake is more power efficient under low to moderate loads, which includes all gaming. It's only less efficient under loads pushing it at 100% on all cores when it's on a high power limit.


Ironically, Alder Lake is only efficient when it doesn't have to use the E-cores.


Intel's (literally) doubling down on the Atom cores in Raptor Lake so they'll have to get it right for them to have a Zen 2-esque progression. CPU advances beyond the old +5% per generation are pretty exciting.


I'm suspicious that reliability issues stemming from the extreme power draw and heat generated when you run AVX-512 instructions were the reason those instructions were disabled recently.

>One of the big takeaways from our initial Core i7-11700K review was the power consumption under AVX-512 modes, as well as the high temperatures. Even with the latest microcode updates, both of our Core i9 parts draw lots of power.

>The Core i9-11900K in our test peaks up to 296 W, showing temperatures of 104ºC, before coming back down to ~230 W and dropping to 4.5 GHz. The Core i7-11700K is still showing 278 W in our ASUS board, temperatures of 103ºC, and after the initial spike we see 4.4 GHz at the same ~230 W.

>There are a number of ways to report CPU temperature. We can either take the instantaneous value of a singular spot of the silicon while it’s currently going through a high-current density event, like compute, or we can consider the CPU as a whole with all of its thermal sensors. While the overall CPU might accept operating temperatures of 105ºC, individual elements of the core might actually reach 125ºC instantaneously. So what is the correct value, and what is safe?

https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-...


In that particular benchmark (3d particle movement), the 11700K performed about 4x better than the AMD 5900X. Performance/watt clearly wasn't suffering.

Perhaps it could downclock more to address the wattage while still coming out well ahead in terms of performance, although some older CPUs doing this gave AVX-512 a bad reputation.

Few workloads are as well optimized to take advantage of AVX512 as the 3d particle movement benchmark, so both the increases in performance and wattage seen in that benchmark are atypical. If they were typical, then AVX512 would be much more popular.

FWIW, I'm a big fan of wide CPU vector instructions. The real reason it was disabled is probably to push people like me to buy Sapphire Rapids, which I would've favored anyway for the extra FMA unit. Although I'll also be waiting to see if Zen 4 brings AVX-512, which some rumors have claimed (and others have contradicted).
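
For a rough perf/watt sanity check using the numbers in this subthread (the 5900X power figure below is an assumption based on its 142 W stock PPT limit, not a measured value for this benchmark):

    # Illustrative perf/watt ratio; the 4x score and ~296 W are from the
    # thread above, the 142 W figure for the 5900X is an assumed PPT limit.
    intel_score, intel_watts = 4.0, 296.0
    amd_score, amd_watts = 1.0, 142.0
    print((intel_score / intel_watts) / (amd_score / amd_watts))  # ~1.9

So under those assumptions, even at the alarming package power, perf/watt in that one heavily vectorized benchmark still comes out well ahead.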


One of the failure modes for chips as they move to smaller process nodes is electromigration.

>Electromigration is the movement of atoms based on the flow of current through a material. If the current density is high enough, the heat dissipated within the material will repeatedly break atoms from the structure and move them. This will create both ‘vacancies’ and ‘deposits’. The vacancies can grow and eventually break circuit connections resulting in open-circuits, while the deposits can grow and eventually close circuit connections resulting in short-circuit.

Chips that get very hot are expected to be the first to show this sort of failure.

>In Black’s equation, which is used to compute the mean time to failure of metal lines, the temperature of the conductor appears in the exponent

https://www.synopsys.com/glossary/what-is-electromigration.h...
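
For reference, the equation being described is usually written MTTF = A * J^-n * exp(Ea / (k*T)); a minimal sketch of the temperature sensitivity, with placeholder constants rather than values for any real process:

    import math

    # Black's equation: MTTF = A * J^-n * exp(Ea / (k*T)).
    # A, n and Ea below are placeholders, not real process parameters.
    def black_mttf(j, temp_k, a=1.0, n=2.0, ea_ev=0.7):
        k_ev = 8.617e-5                       # Boltzmann constant, eV/K
        return a * j ** (-n) * math.exp(ea_ev / (k_ev * temp_k))

    # Same current density, 85 C vs 105 C: a ~20 C bump cuts the modeled
    # lifetime by roughly 3x with these placeholder numbers.
    print(black_mttf(1e6, 273.15 + 85) / black_mttf(1e6, 273.15 + 105))

The exponential term is why the "chips that get very hot" point matters so much.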


The high-end Alder Lake parts - particularly the 12900K - are terrible energy hogs. But as you move down the stack, efficiency improves a lot while performance remains very competitive with AMD parts in the same price bracket.


My 12700K has core power usage under 1W and package power under 4W when it's just sitting there, which it does a heck of a lot of. When it's running flat out compiling C++ projects on all performance cores, package power is ~90W. Single-threaded performance is much better than any Ryzen and even beats my M1 Pro. I'm not really seeing the energy argument for desktop/workstation usage. For datacenter TCO, AMD is probably still the champ.


I guess I weight datacenter workloads much higher for perf/watt because the higher margins there are what fund the next generation's R&D. Cranking up power to get performance under load is a move that cuts off funding streams in one of the most capital-intensive industries.


Intel still sells 85% of the server market including 75% of the megascale cloud market, so at this point it does not appear to me that Intel has been strategically wounded. I'm sure they have more than enough cash to fund R&D at this time.


It was 98% of the datacenter market in 2017, and apparently the rate of decrease is accelerating. And that's even before taking into account that the datacenter equivalent of your 12700K doesn't come out until later this year, which is where you'd expect to see the real fall-off in DC market share.

That money can dry up very quickly.


By no means do I imagine that Intel has everything right where they want it. Clearly, they'd prefer to be the "machine of the day" at Google, instead of AMD having that honor. But, it's also not the first time they've been in this position. I would argue they were in much more serious trouble in 2008 or so, when it was Itanium and NetBurst vs. Opteron, and everyone bought Opteron.


Intel still had leading-edge node supremacy at that point, and everyone got caught with their pants down by the end of Dennard scaling. AMD simply lucked out that they weren't designing a core to hit 10GHz in the first place (because what would become GloFo didn't have the node chops to even pretend they could hit that, even ignoring Dennard scaling issues). AMD therefore didn't have to pull slower mobile core designs off of the back burner like Intel had to, which took time to readjust.

Intel's in a much worse position now. Their process woes aren't issues that the rest of the industry will hit in a year or so like the end of Dennard scaling was.


That may depend substantially on process node. 3nm would use less power.


Sure, but it's not Intel 3nm. AMD is moving to TSMC 5nm and will go to 3nm in the future as well. It really bothers me when either side compares its products of tomorrow to competitors' products of today.


Sure, but so would AMD chips, which aren't even on 5nm yet.


Clearly Intel has huge issues or they wouldn't be making such a drastic turn. For some people, whatever happens to Intel, the picture is always extremely rosy.


They had a profit of $20B last year. The bean counters are the reason they won't invest in cutting-edge fabs, not a lack of capital.


Sure, but does it sound more challenging to invest in a spin-off company than in an integrated arm that will deliver revenue from inside and outside business? I am sure the shareholders would be more agreeable to the latter, especially when there is enough demand for chipmaking to be almost certain that it will not be a huge money loser.

Investing Intel's money in a spun-off fab company while losing strategic control over its direction and having little to gain from its success doesn't feel all that attractive to me.


I think they should pay what it takes to attract physics and EE talent from finance and Facebook. Don't know what you are talking about with a spinoff fab.


Oh for sure - and that's kind of what I meant in my original post - years of lack of focus, incentives and interest in specialized education and manufacturing processes.


Having worked at Intel before, I can safely say that it's difficult to boil down Intel's problems to a single issue (e.g. chip manufacturing). Stagnant company culture, limited growth opportunities within the company, and a fanatical focus on selling CPUs instead of innovating are all problems that Intel currently faces.


The conversation is also muddied by varying definitions of success and failure. Intel isn't going to go broke in any foreseeable future, but they may lose the premium margins they used to earn for having the best-performing product. To the extent that the fab advantage drove that edge (likely a big part) and that advantage is not coming back (because they no longer have a scale advantage), maybe the premium margins will never come back. That's what investors worry about.


Limited growth opportunities seems like a big problem that can kill any organization.

Specifically if leadership gets ossified and there's no realistic path for ambitious young people to rise into positions of power and responsibility.

When this happens, the ambitious young people are more likely to vote with their feet and build a competing organization that they will actually have the power to run.

This is especially problematic when it happens in government. What are young people in America supposed to do when the political conversation can't break out of asking which near-geriatric octogenarian who had a childhood with rotary phones should be our next president?

Our culture seems to have lost the very important feature that there comes a time when you've had your turn at the helm, and now you need to step down and support new leadership.


I think Thompson says this (while talking about a series of lackluster CEOs) and says the topic of the "split" is a consequence of that, which has to be fixed (and is merely one of the problems)


The more I have seen of big tech companies, the more I think they need to split up from time to time. Complacency and ossification emerge as a product of scale. If you want to avoid them, reduce your scale. If your manufacturing engineers (really, the managers) need to be at the cutting edge of manufacturing or be out of a job, they will figure out how to get there.

Intel has had a number of embarrassing technology misses that have put them behind: the first I remember was through-silicon vias for 3D stacking and it was only a matter of time before they missed something necessary for an advanced fab, and they missed on EUV.

Their foundry engineers at the same time were recruiting more engineers out of school on the basis that they were the "best in the world." They thought their position was unassailable because they had high yield at 22 nm, so they rested on their laurels and missed technology after technology before it bit them.


There is little information available on what was actually going on inside Intel. All big companies have big problems, and each single division or working group is not indicative of the company as a whole. You can find several people with good and bad experiences painting all kinds of pictures. A single voice won't tell you the whole picture. Probably even the leaders of Intel don't really know exactly where they are standing.

Of course all companies go through cycles of explosive growth, ossification and renewal. Small companies often die because too much of the company goes sour at the same time. In big companies this can be more spread out over time and space and there's always enough left to rebuild the leadership from the inside and push out new profitable products in time.

That being said, I don't have any indication that Intel right now is in a particularly good position. Nobody in the engineering area seems to feel too honored for having worked at Intel. Yet, they got 10nm finally working and continue to rake in money. Of great concern to me is that much of that money is paid out to investors instead of being reinvested into the company. There doesn't seem to be a convincing plan for growing the business in the future. Also, they did not admit for a long time that they had a serious problem on their hands. You just can't trust anything that Intel releases; it's just a rosy projection of the future where everything goes as planned.

Unless a group of senior Ex-Intel get together and tell their war-stories we won't know what actually was going on inside.


> The more I have seen of big tech companies, the more I think they need to split up from time to time. Complacency and ossification emerge as a product of scale.

This is highly anecdotal. Apple got to where they are with the M1 by vertically integrating, which wouldn't be possible if they broke up like you said.

Rather, in a pipeline the bottleneck determines throughput - Intel's bottleneck is manufacturing and that's what they need to fix or cut out. We have no reason to believe their designs on TSMC chips won't best AMD or even Apple.


> The more I have seen of big tech companies, the more I think they need to split up from time to time.

This is somewhat true of every industry. Industrial companies in particular are always playing the game of conglomerization vs. specialization. But I agree it will definitely be interesting to see this play out in Tech in the near future


I'm always confused when I hear reviewers say that Intel is back on top.

They beat AMD's benchmarks at ~double the power. I get it's winning, but it's not a fair fight. I certainly don't want my computer to be 10% faster at double the wattage.


The whole "double the power of AMD" meme is an oversimplification at best. It's all based on a 12900K running a heavy multi-core load with an increased power limit. This is the ultimate "I don't care about power efficiency" setup so it's kind of silly to knock it for drawing lots of power.

If you care about power efficiency you can drop the power limit and still be very competitive with AMD. The only reason you don't see Ryzen CPUs drawing the same power as a maxed out 12900K is because if you pumped that much power into a Ryzen CPU it would explode (at least without LN2 being involved).


Alder Lake is efficient at the low end; the 12100 and 12400 are really good against AMD.


For home use, that's fine. I don't worry about the power peak usage of my computer since it rarely gets to that level. For servers, that's a no-go.


I think there's another way to word this or spin this...

For home use, it's going to depend on the consumer. Some people want a chip that's easy to power and cool, and still gets 80/90/95% of the performance. Some people want the absolute best performer (for their specific uses) with less regard to cooling and power.


I think peak performance, even if it gets thermo-throttled quickly, is important. Web site JS parsing during loading is like that, and that's an essential function of my computer these days. And perhaps for bursty sections of AAA games as well.

But yes, it depends on the user.


Reviewers like a good comeback story. And Intel is doing much better than it was doing before (I own machines with both types of CPUs). And Intel did some different things with the new chips (2 different core types), so it's interesting.

AMD hadn't released their new chips when some of that stuff was written. But it's actually a little bit exciting that competition has returned, and I suspect it will be good for consumers as long as both chip makers have competitive products.


IMHO this is just Intel hiding rather than solving its problems. It gives the appearance of management doing something but it's the wrong thing.

If we've learned nothing else, the big winners are at the cutting edge and vertically integrated. Splitting design and fab seems like a good idea: force fab to compete for design business. But middle management will take over and create perverse incentives where neither fab nor design radically improves.

I've heard tales of Intel being rife with politics, fiefdoms and empire-building. This is classic middle management run amok in the absence of real leadership. I honestly think the best thing Intel could do is fire everyone above the director level and start over.

Intel dominated when their fab was cutting edge and no one else had access to it. Splitting this means if their fab improves then everyone has access to it.

There's clearly an organizational problem here but this split isn't going to solve it.


IMHO it's the same pattern as the time when Intel gave up on their memory business and focused on the CPU business. Quoting from memory: "If you were the new CEO, what would you do to save the company?" - and he immediately knew the answer - "Then let's go back in the boardroom and do that."

Judging from that it seems like their fab business doesn't have any long term future, the outside world just doesn't know it yet. Now they put on a show until they are ready to fully convert to other fabs. After all, the current fabs are still cranking out chips that make them billions each month.


> their fab business doesn't have any long term future

It has a fine future, just not in a 60%+ margin business. Intel should bite the bullet and spin out its fabs. Let it compete and grow as a separate company from Intel's design units. Stratechery is spot on, as he's been about Intel from day one.


Isn't this kind of thing part of what killed the PalmPilot?


- The PC is in decline (bad for intel)

- Intel lost the mobile platform

- Apple is moving to ARM, PC laptops to follow soon.

- AMD is eating Intel's lunch in high-performance x86

- Servers are starting to move to ARM

Last piece of the puzzle - Intel is not doing great at the fab level.

It doesn't look good for Intel; they need to do a radical transformation, fast.

Innovate now or die.


I agree with many of your points, however...

> - Apple is moving to ARM, PC laptops to follow soon.

> - AMD is eating intel lunch in the high performance x86

That first one really needs... a source? It's a guess, but far from a certainty!

The second one is partially true? AMD has greatly increased their market share and is much more competitive with Intel than they were 5 years earlier. They are not beating Intel in market share, but they are gaining on them. They recently had leads in performance, but it's not a one-sided race.


I agree, it's not done on the server side; it's just that AMD was nonexistent a few years ago and now has a share of the market. Intel was the only choice for top performance and was able to dictate the price at the high end.


The first is true to a degree if "PC laptops" includes the laptops kids use in schools: a lot of schools have switched to ChromeBooks due to price and ease of management.


Stratechery is usually very clever, but here misses a key point: Intel falling behind in node size is the problem, yet TSMC is not a direct competitor to Intel. Sure, TSMC competes against the new Intel foundry initiative, but foundry is not the main profit center. For the most part, it's the companies that buy TSMC's fab time that are competing with Intel.

So sure, Intel buying TSMC time helps TSMC. However, TSMC would have sold that fab time anyway, and gotten the same scale fabbing (say) AMD chips. It's much more useful for Intel to buy production and hurt the actual Intel competitors* that would have contracted for that TSMC time while still making profits for Intel.

Intel will still have enough scale to still grow its own processes, and it can probably rely on some help from Uncle Sam to further tilt the scales.

I'm frankly surprised that regulators haven't interfered - in most other markets people would have noticed the anti-competitive move (a manufacturer 'starving' their competitors' supply).

* Apple excepted, since they have more than enough cash to outbid Intel. Then again, Apple doesn't bother with servers, so Intel can live with this. Also, the Apple ecosystem is different enough that switching has its own complex set of upsides and downsides.


For TSMC it is also good to have another big customer, otherwise they might become too dependent on Apple. It's easier to get good pricing when they can say they could sell the capacity to another big entity.


Other than AMD, who actually competes with Intel (honest question)? I mean, for servers there are the 686 descendants (Intel/AMD) and then... err, ARM on servers?


If this analysis is even half correct those selling x86 server CPUs need to be concerned. Today there is a huge moat in existing x86 software, sure, but savings like that can justify many ports.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...

"Cost Analysis - An x86 Massacre"


AMD and also ARM on servers, like Amazon's Graviton.

Intel beating TSMC in node size isn't going to happen in the near term even in Intel's best-case scenario, and even if that were to happen, fabs like TSMC will always have enough scale to compete. So Intel might as well buy from TSMC and take over its rival's production. Every chip Intel buys from TSMC is a chip taken from AMD and a spare chip Intel's fabs could make.


Does TSMC allocation work that way? Surely a client like AMD is paying up front to ensure capacity.


Well, everyone is paying upfront. It's competition for fab time, just in advance. Intel paying means AMD won't be able to get the time Intel bought in the future.

Of course, AMD won't have zero capacity - even Intel doesn't want that - but I doubt AMD paid for all the time they'll ever need. For one thing, this is a seller's market: if AMD had more chips, people would buy them. Also, AMD has a well-publicized chip shortage, to the point that they contracted with GlobalFoundries again (for the less important chips) about a month ago.


The company's willingness to tackle tooling to multiply the effectiveness of their employees was a key factor in their success:

This incredible growth rate could not be achieved by hiring an exponentially-growing number of design engineers. It was fulfilled by adopting new design methodologies and by introducing innovative design automation software at every processor generation. These methodologies and tools always applied principles of raising design abstraction, becoming increasingly precise in terms of circuit and parasitic modeling while simultaneously using ever-increasing levels of hierarchy, regularity, and automatic synthesis.

As a rule, whenever a task became too painful to perform using the old methods, a new method and associated tool were conceived for solving the problem. This way, tools and design practices were evolving, always addressing the most labor-intensive task at hand. Naturally, the evolution of tools occurred bottom-up, from layout tools to circuit, logic, and architecture. Typically, at each abstraction level the verification problem was most painful, hence it was addressed first. The synthesis problem at that level was addressed much later.


Bottom line: Intel's edge has become technical debt.

Intel's own tools/processes used to give them an edge over everyone else during the early years of the semis industry. Over time, tool suppliers have caught up with Intel, and Intel is falling behind because its own tools are delayed/insufficient.

Intel's own tools are losing out to commodity, standardized tools.


If Intel split the manufacturing unit, it would not be competitive unless it invested lots more in advanced new processes.


> if tomorrow morning the world’s great chip companies were to agree to stop advancing the technology, Moore’s Law would be repealed by tomorrow evening, leaving the next few decades with the task of mopping up all of its implications.

On the one hand, if this happened, it could be good for putting an end to the routine obsolescence of electronics. Then maybe we could get to the future that bunnie predicted roughly 10 years ago [1], complete with heirloom laptops.

On the other hand, ongoing advances in semiconductor technology let us solve real problems -- not just in automation that makes the rich richer, enables mass surveillance, and possibly takes away jobs, but in areas that actually make people's lives better, such as accessibility. If Moore's Law had stopped before the SoC in the iPhone 3GS had been introduced, would we have smartphones with built-in screen readers for blind people? If it had stopped before the Apple A12 in 2018, would the iOS VoiceOver screen reader be able to use on-device machine learning to provide access to apps that weren't designed to be accessible? (Edit: A9-based devices could run iOS 14, but not with this feature.) What new problems will be solved by further advances in semiconductors? I don't know, but I know I shouldn't wish for an end to those advances. I just wish we could have them without ever-increasing software bloat that leads to obsolete hardware.

[1]: https://www.bunniestudios.com/blog/?page_id=1927


This is a "heads I win, tails I win" strategy. Give the fab business a chance to be independent and justify its massive capex and lower margin profile. Increase utilization of nodes to squeeze as much cashflow out of them as possible. If it continues to be a worthwhile cashflow endeavor, fantastic. If not, split the company and fetch a high price for a sophisticated and capable (if not #1) semi fab business.

Let the higher margin chip design business utilize whatever manufacturing process is most effective for its latest and greatest chip designs. Chip design and development won't be hamstrung by setbacks in manufacturing. Intel's scale will ensure most optimal pricing and deal terms with TSMC (if INTC doesn't use its own fab).

Along the way, there is additional upside optionality from government subsidies and support for a critical piece of infrastructure in the 21st century. Domestic fabs will be (if they aren't already) as high priority and critical to US national defense as the military hardware contractors (Boeing, Lockheed Martin, etc). That also ensures a floor to the price of the fab business should Gelsinger decide it should be sold.


This was the same rationale AMD had when parting ways with their Global Foundries business, and I'd argue just as much a move out of desperation. What's different are the politics.

United States domestic national security concerns make it much more financially prudent for Intel to pursue the same split. AMD took a gamble; Intel is likely to have more of a "sure thing". They have the capital backing to make a competitive foundry business, and if not, the federal government will make that capital happen.


Also, with their packaging tech (EMIB and Foveros) they can mix and match chips from different fabs - i.e., combine P-cores from TSMC fabs with E-cores from their own (possibly older) fabs. Alder Lake is the first real stab at the "chiplet" architecture, and it seems to be a very good outing. Future iterations will only improve, and at Intel's purchasing scale they'll be able to manufacture chiplet designs that match market needs, either for the highest margin or for pricing pressure to squeeze competitors, if needed.


The problem is that the less product Intel pushes through its own fabs, the less investment those fabs will get and the further they will fall behind.

This feels like a defensive / short term value maximisation strategy - and might be the right one in terms of total value. It doesn’t feel like a long term vote of confidence.


>less product Intel pushes through its own fabs

That's the problem the article mentioned. Intel is an IDM. They developed their own tools/processes, which gave Intel an edge during the early years of the semis industry.

However, it looks like the industry has caught up to Intel. Intel's own tools/processes are no longer enough and in fact have become technical debt.

Splitting Intel into two allows Intel's foundry to use standard equipment/software, so there's no need to waste money on building its own tools.


Why can’t they migrate to industry standard tools whilst keeping their business in house?


To avoid any conflict of interest with Intel's foundry customers.


Not sure I understand your point here. They will need to migrate to industry standard tools for foundry customers anyway.

Seems to me that planning to place more business with TSMC is essentially an admission that they don’t expect to be competitive with TSMC in the near future. And Gelsinger has more visibility on this than anyone.


>They will need to migrate to industry standard tools for foundry customers anyway.

The foundry business is about trust. Let's say AMD wants to use Intel's foundry: how can they be sure Intel's design side won't take a peek at AMD's x86 designs? Fabless customers spend millions on their designs, and those designs are their lifeblood. How are they going to trust you if you also sell chips?

Example: Samsung used to make chips for the iPhone. Apple ditched them once Samsung started competing with Apple with its own smartphones.

TSMC has over 50% of the market share because they do one thing and one thing only.


Intel can already peek at AMD's designs: just take the cover off and dissolve the layers one by one. It's probably about as helpful as getting your competitor's source code; figuring out what's going on probably takes longer than figuring out how to do it yourself. Maybe worse, since everything is so low-level in hardware that it would make assembly look like a high-level language. I'm no hardware designer, but I expect the results of tape-out are roughly the same as the results of etching off the layers: masks for each layer, the composition of each layer, etc. And then, after understanding it all, you still have to implement it and get started fabbing it. So I'm not sure that fabbing your competitor's product is a huge risk.

I think Apple stopping using Samsung is more related to Apple's higher-level issues: why do business with someone you accuse of violating your design patents. Not because you think they'll copy your IP, but out of principle. There's no IP-related reason Apple needed to stop buying screens from Samsung; Apple has never manufactured screens.


> Example: Samsung used to make chips for the iPhone. Apple ditched them once Samsung started competing with Apple with its own smartphones.

Apple was dual-sourcing as recently as the A9 on 14nm. They stopped because Samsung fell behind the leading edge in performance and efficiency.


Imagine what could have been if Intel hadn't sold XScale back in 2006 and had kept focusing on ARM.


ARM is not the be-all and end-all. x86 still has a big market share.


That's one way to put it! I'm not finding any comprehensive sources, but here's an example:

https://www.theregister.com/2021/11/12/apple_arm_m1_intel_x8...

> Arm's market share in PC chips was about eight per cent during Q3 this year, climbing steadily from seven per cent in Q2, and up from only two per cent in Q3 2020, before Arm-compatible M1 Macs went on sale.

ARM is very much on the rise, but also still in the single digits, leaving plenty for x86.

https://futurecio.tech/intel-losing-share-to-amd-and-arm/

> 5% of the servers shipped in the third quarter of 2021 had an Arm CPU


What exactly would have been? There is no inherent superiority in the ARM architecture. Hell, 2006-era ARM was a trash architecture. They just happened to be the only vendor that cared about that niche, and then the niche stopped being a niche.


How to fix Intel:

1. Lobby the US Government and public to increase public spending on manufacturing semiconductors in the US

2. Corner most of those subsidies/funding

3. Done

Only partially joking; why has Intel not made the over-reliance on overseas fabs a natsec issue?


It has; now it's waiting for money from the CHIPS Act, which is currently in the House.


I am pretty bearish on the semi space, especially after TSMC's comments last week. I really think they're expanding at the worst possible time. Understandably, this is largely due to politics.


I can't quite grasp why anyone would be bearish on the semi space. It's been non-stop growth for the past four decades and only appears to be accelerating.

What in the world is there to be "bearish" about?


I'm not the original poster, but I'm guessing the reasoning is: there might be more capacity than needed, which would drive profits down. Cheap chips seem like a great thing to me, but I'm not the one selling them.

But at this point, when they're putting chips in everything, it's hard not to see the market expanding to meet the increased capacity. (I think some greeting cards have chips in them to control music/lights....)


No, he’s talking about the forthcoming dual invasion of Ukraine and Taiwan by Russia and China


Intel blew EUV, which, fair enough, is hard as hell. Are there other problems? Sure. But when you go from leading-edge process technology to a two-node lag, you're fucked either way.

This article is rambling and over-complicated.

For a much more insightful and compelling view into Intel at its greatest I recommend “The Pentium Chronicles”.


Reading this, I had the uneasy feeling that we might be seeing a company chase a past that cannot be its future.


I don't get it - who would use Intel as a fab? How would they be competitive? And trustworthy, in the sense of not doing competing designs in-house?


I think that's the wrong question. Who would use Intel as a cutting edge fab? Unclear. But plenty of people would love access to a fab for their non-cutting edge needs.

The risk with this is that Intel will become a fab for older, less expensive chips, while TSMC and others gobble up the top tier stuff.

But the world needs more, not less, fab capacity, and there are plenty of people who would gladly use Intel for their needs if the opportunity were there.


Additionally, only 23% of TSMC's revenue comes from their most cutting-edge 5nm process as of the most recent quarter [0]. Everything else is 7nm or larger, which Intel is capable of doing today. Even in the scenario where Intel remains a major node behind TSMC, there is still a lot of fab business they could pick up.

[0] https://twitter.com/IanCutress/status/1481581119740989440/ph...


> who would use Intel as a fab?

We've got a very serious shortage of fab capacity right now that will probably last for at least 2 or 3 more years. The answer would be anyone who needs fab capacity of the sort Intel can provide (not likely AMD of course, though I could see Apple giving it a try).


Apple buys screens and other components from Samsung (potentially giving them an early look into the next iPhone's form factor, visual design, and specs); the supply chain is built around trust.


Well, first off, Intel themselves; they sell millions of chips a year. Given the current chip crisis, once their own needs are covered they would sell the remaining capacity to the highest bidder. I don't think they would be short of customers, except for direct competitors. But keep in mind that CPUs represent only part of the semiconductor market.


Microsoft? Maybe another phone maker who can't secure manufacturing with TSMC? They will clearly be tier II for a while, but I think there are enough chip design companies out there vying for manufacturing capacity.


People who need to make chips?

It's not like there is an excess of capacity right now, and there are only a small number of players that even offer what Intel has available.


Apple will ditch TSMC if someone else offers a better node at a cheaper price.

There is no loyalty. Apple will ditch you if they can get a cheaper price, or if you compete with them the way Samsung did.


Those who want to manufacture their chips locally (US or EU).


Intel cannot and will not catch up. The arrival of the M1 chip is analogous to the arrival of the monolith in 2001: A Space Odyssey. Its efficiency is frankly alien technology.


I love my M1, but it is most certainly not alien technology that can't be replicated, nor does it imply that other foundries/chip designers can't catch up. In many respects AMD is already very close (on a previous node, no less), and Intel's 12th Gen, while very power hungry, is very good in terms of total performance (obviously not competitive on perf/watt, but it was never going to be, because of the node difference).


Are you saying that AMD is very close in perf/watt? If so, that's pretty cool.


Sorry I should have been clearer. As far as I am aware, the M1 is still better at perf/watt (again, comparing TSMC 5 to 7, so it’s not unexpected) - for laptops, the M1 also has an extremely low idle power draw that I don’t think anyone else can match right now, which significantly improves battery life. I meant comparable single-threaded performance in a laptop form factor.


You can get better performance than an M1, but you are not getting anything else with that combination of performance and battery life. Except maybe the M2.

If you want an actual ultra portable, you are getting a Mac.


If you want a Mac.


Well, if "ultra portable" matters more than "Mac vs non-Mac", then get a Mac (at the moment). If you want "not Mac" more than you want "ultra portable", then don't get a Mac.


If you don't, you have a wide selection of other computers, but none of them really qualify as ultra portable.


> It’s efficiency is frankly alien technology.

That's probably an exaggeration, and this exaltation of Apple is a Bad Thing. The M1 had good designers - two of whom have already moved on, one to Microsoft [0] and the lead designer to Intel [1] only days later.

[0] https://9to5mac.com/2022/01/12/apple-silicon-microsoft-poach...

[1] https://9to5mac.com/2022/01/06/apple-engineering-director-m1...


Why do you say that? They're already responding to ARM with their big.Bigger architecture in their latest generation.

I'm sure people said the same thing about Intel when AMD introduced 64-bit processors or Sun introduced multicore processors. Intel has adapted and led the field many times after being overtaken. No reason to expect them not to do the same here, or at least to compete.


Not sure ‘already’ is appropriate here. big.LITTLE was announced in 2011. Taking 11 years to copy a competitor’s successful feature must be a record of some sort.


Not sure you understand the point of big.LITTLE. It's about dealing with power and thermal constraints in the quest for more performance [1]. So was the transition to multicore processors, BTW [2]. These are the things hardware companies do when they have no other option. And as [2] points out, we software folks still don't have a good way to deal with it.

[1] https://armkeil.blob.core.windows.net/developer/Files/pdf/wh...

[2] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-...

Alder Lake is about dealing with power and thermal constraints. Intel has finally pushed performance so far that they need to do this. The chips are benchmarking so well because of this move, not in spite of it.
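To make that concrete, here's a toy sketch (purely illustrative Python, with made-up thresholds; nothing like Intel's actual Thread Director or a real OS scheduler) of the trade-off a heterogeneous design asks the scheduler to make: spend the power budget on a big core only when the work and the thermal headroom justify it.

    # Toy model of heterogeneous ("big.LITTLE"-style) core selection.
    # Hypothetical thresholds; a real scheduler derives these inputs from
    # hardware feedback, sensors, and the package power limit.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        demand: float  # 0.0 (background) .. 1.0 (fully CPU-bound)

    def pick_core(task: Task, thermal_headroom: float) -> str:
        # thermal_headroom: 0.0 (already throttling) .. 1.0 (plenty of budget)
        if task.demand > 0.6 and thermal_headroom > 0.3:
            return "P-core"  # latency-sensitive foreground work
        return "E-core"      # background work, or no thermal budget left

    for t in [Task("video export", 0.9), Task("mail sync", 0.1)]:
        print(t.name, "->", pick_core(t, thermal_headroom=0.8))

The point is that the small cores aren't there to make the chip slower; they're there so the power and thermal budget can be spent where it buys the most performance.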


I do know what big.LITTLE is about, thanks. No idea why you’d think otherwise from my post.

Of course Alder Lake is benchmarking well because of it.

My point was ‘already’ makes it sound like Intel is rapidly adopting this technology - not 11 years after Arm.


M1 only runs on Apple hardware. Hardly matters to Intel customers.


That’s true to an extent, but no one lives in isolation. Losing Apple was a pretty massive blow.

Still, the real blow will be if someone successfully enters the server business. The Graviton* chips are interesting, but that’s not a broad threat yet.


I think it'll be another huge blow if Windows users with laptops move almost entirely to ARM over the next 3-5 years. While Intel may have an absolute performance advantage in the desktop and server arena, most Windows users these days are using laptops, where ARM's energy efficiency matters more.


First Microsoft needs to sort out their Windows on ARM story.


If my understanding is correct, the M1 was a one-time boost due to the massive decoder and on-package RAM; it's not going to keep getting faster at that rate.


That’s basically it. Apple went big because the target process allowed it. To someone from the ASIC industry, it is a set of interestingly scaled but pedestrian design choices, coupled with some exaggeration (realizable memory bandwidth per core on the Max is a fraction of what you’d expect from Apple’s aggregate memory bandwidth claim for the processor as a whole) and a very serious investment in performance/watt (a legacy of the phone use case).

The Max has barely any improvement core-wise over the first M1. It’s going to be interesting to see what the real next generation looks like.


> The Max has barely any improvement core-wise over the first M1.

Isn't that what you would expect in two chips that use the same core design?

There is more memory bandwidth to the Max, and the system level cache is larger, so there are differences outside of the core, but the core itself didn't change.


The M1 also has old cores from the A14. The A15 isn't hugely faster than the A14 but it clocks faster and has other efficiency tweaks. Clock for clock, the M1 is slower than the iPhone 13 Pro.


>one-time boost

That's not what you would expect if you look at the graph of six years of year-over-year SPEC performance gains on iPhone cores. Their history shows a pretty reliable 20% gain per year.

>Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
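To put a rough number on that "reliable ~20% per year", here's a quick back-of-the-envelope check (just a sketch that takes the quoted five-year multipliers at face value):

    # Implied compound annual gains from the figures quoted above
    # (2.98x for Apple, 1.28x for Intel, over roughly 5 years).
    apple_total, intel_total, years = 2.98, 1.28, 5

    apple_cagr = apple_total ** (1 / years) - 1  # ~24% per year
    intel_cagr = intel_total ** (1 / years) - 1  # ~5% per year

    print(f"Apple ~{apple_cagr:.1%}/yr vs Intel ~{intel_cagr:.1%}/yr")

That works out to roughly 24% a year for Apple versus about 5% for Intel, which is consistent with the steady gains in the SPEC history rather than a one-off jump.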



