Apple M1 – a seriously fast x86 competitor (arstechnica.com)
139 points by tikej on Nov 23, 2020 | 113 comments



I've been waiting for years for an ARM-based laptop, though I never would have thought that the x86 emulation would be this good.

Apple seriously deserves huge credit for this, after both Microsoft and Google spent years screwing around on ARM doing nothing useful.

I do hope both Microsoft and Google come out with competitors, though, because it'll help the market massively.


“Binary translation,” to be pedantic, but I agree.

I was an Adobe user through the PPC->Intel transition, and that worked OK, though my subjective memory is that non-x86 apps felt a bit pokey. (This was a while ago, and I could very well be wrong, but we had PPC and Intel iMacs at my college newspaper, and the G5 felt faster in my memory.) But that was also a transition to a dual-core Core 2 Duo machine, which was unambiguously faster than a single-core G5, so there was computational room to spare to deal with the translation.

It was not a done deal that x86->ARM would be as smooth a transition, as we figured ARM would be a bit faster, but not a generational leap. But here we are: Apple really managed to make binary translation work well this time, and Intel applications are plenty snappy. Which is impressive. (I also wonder how much of that is having a really fast SSD, so that translating the app isn't as I/O-bound.)

And I don’t think legacy x86 apps will be as much of a thing this time around. Back in the PPC days, lots of applications were built with the Metrowerks CodeWarrior toolchain, which didn’t come across the divide, so big applications had to scramble to get everything into Xcode. This time, if an application made it past the evolutionary bottleneck of needing to be 64-bit only for Catalina, it’s probably close to ready to go for ARM.


Note: it's not emulation but ahead-of-time cross-compilation. This has some limitations, most notably that x86 VMs are not possible.

From my POV, the future of business laptops could be powerful ARM Macs at around $1000-1300 (i.e. quite low priced) coupled with cloud-based Windows 10 instances to get all software running. This would then create some pressure on software developers to port their software at least to Windows-on-ARM, which could feasibly run in a local VM.


It’s an emulator too though, right? Otherwise things like Chrome (which has a Javascript JIT) would never work in Rosetta.


Rosetta compiles JITted code ahead of time, once the OS detects the code exists.

From what I understand, the OS enforces W^X (memory pages that are writable cannot be executable). That means the OS can hook the “make this page read-only and executable” system calls to detect newly JITted code.
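
For a concrete picture, here's a minimal sketch (not Apple's code; it assumes macOS on Apple Silicon with the allow-jit entitlement) of how a well-behaved JIT emits code under W^X using MAP_JIT and the per-thread write-protect toggle. The moment the buffer flips to executable is exactly the point where a translator could step in.

  // Hypothetical JIT code emission under W^X on macOS/Apple Silicon (simplified).
  #include <sys/mman.h>
  #include <pthread.h>
  #include <libkern/OSCacheControl.h>
  #include <cstring>
  #include <cstdint>

  void* emit_code(const uint8_t* bytes, size_t len) {
      // RWX may only be requested together with MAP_JIT; actual access is W xor X.
      void* buf = mmap(nullptr, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANON | MAP_JIT, -1, 0);
      if (buf == MAP_FAILED) return nullptr;

      pthread_jit_write_protect_np(0);   // writable, not executable (this thread)
      std::memcpy(buf, bytes, len);      // write the freshly generated code
      pthread_jit_write_protect_np(1);   // executable, not writable again
      sys_icache_invalidate(buf, len);   // flush the instruction cache before running
      return buf;                        // <- the "now it's executable" moment
  }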


Pretty sure if it’s done ahead of time we can’t really call it “JIT” can we? :)


Rosetta does work with JIT'ed code. I think the way it works is that any code in memory gets the Rosetta magic applied as soon as it is marked executable.


So it's sort of adding latency, as a trade-off for speed (once it starts). That makes so much sense and seems so obvious in a 'huh, probably never would have thought of that' sort of way!


I think what gets cross-compiled is the JavaScript JIT compiler itself (V8, in Chrome's case).


That wouldn't do much, because the JIT would still be producing Intel code, no?


See other answers; it does a sort of JIT cross-compile once a page is marked executable.


Hmm, I guess you have a point.


I wonder if this ahead-of-time cross-compilation is a way of avoiding licensing issues? It seems like one cannot just make an x86 CPU without Intel's agreement, but I couldn't find a source explaining this well. I also can't see why someone couldn't do a clean-room implementation of the instruction set. Some people wrote to me that it may be to do with patents on some instructions, but I wonder how valid such patents are if they describe an obvious way to do a certain thing (I have not seen the patents, so this is my speculation only).


I think it's just that it works better for most applications, where you run them multiple times. JIT is going to be slower under emulation anyway. Also note that Rosetta uses the native libraries where available, so as soon as you call into, say, graphics libraries, that code is running natively, not emulated.

(I believe most of the relevant patents have just expired; they don't emulate the newer vector instructions.)


Rosetta 2 is not emulating an Intel CPU. It recompiles the binary to ARM when you run the app for the first time, and it also does on-the-fly translation of JITed code.

Actual emulation would be much slower.


1000-1300 sounds like a lot of money.


The key word there is "business" - most laptops geared for corporate use (in the PC world at least) are the thin-and-light metal-chassis ones. They are easily $1200 for "pretty good" hardware with a sleek look and feel, but the same (core component) hardware in a budget laptop might be $500-800. Business laptops with best-in-class hardware are often over $2000.


Compared to what?

$2-3k for a dev laptop since forever, innit?


Maybe a few years ago, but you can get a fairly beefy 8c16t thinkpad for less than $1300.

[1] https://www.lenovo.com/us/en/laptops/thinkpad/thinkpad-t-ser...


I remember $6K for dev laptops in the mid 90s.


How fast is the ARM used in the Windows computers compared to the M1? I would think that if it had the performance of the M1 that we would have seen Windows RT more widely accepted by now.


Does Windows have a Rosetta equivalent? If not that might be the answer to why it’s not widespread


It did not in the original Windows RT release. Recent versions of Windows 10 on ARM have x86 (32-bit) emulation, which is a different approach from Rosetta's but equivalent in allowing older software to run. Beta and future versions of Windows 10 on ARM will also have support for x64 emulation.


Yes, you can run x86 apps but not x86_64 apps on Windows on ARM.


Oh right, so that means any JIT that expects to generate x86-64 code and run it, can't. Maybe some exceptionally flexible JIT implementations run (if they detect which arch to use at run time - sounds very unlikely).


I wonder if this is actually new technology at all, or if it is literally Rosetta 2.

QuickTransit, the technology behind Rosetta (1), was also licensed to many vendors for <x arch> -> <y arch> translation. The POWER version was called "PowerVM Lx86". It seems to have achieved similar performance to Rosetta 2, but was discontinued in the early 2010s.

The wikipedia page for QuickTransit has, since at least 2015, had a totally unsubstantiated claim that "most" of the original QuickTransit team now works for Apple (and ARM)[0]. Makes me wonder.

[0] https://en.wikipedia.org/w/index.php?title=QuickTransit&oldi...


This time around it’s not a pure software solution. ARM has a significantly weaker memory model than x86, which makes translation generally difficult and/or slow. It appears the M1 natively supports the x86 memory model which makes the rest of the task substantially easier.
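
To see why the memory model matters, here's a small litmus-test sketch (relaxed atomics standing in for plain x86 loads/stores translated 1:1): the message-passing pattern below always works under x86-TSO, but on a weakly ordered ARM core the accesses may be reordered so the consumer sees the flag but stale data. A software-only translator has to emit barriers on essentially every memory access to rule that out; TSO in hardware sidesteps the problem.

  // Message-passing litmus test (sketch); relaxed atomics model naively
  // translated plain x86 accesses with no extra barriers inserted.
  #include <atomic>
  #include <thread>

  std::atomic<int> data{0};
  std::atomic<int> flag{0};

  void producer() {
      data.store(42, std::memory_order_relaxed);  // 1) publish the payload
      flag.store(1,  std::memory_order_relaxed);  // 2) then raise the flag
  }

  void consumer() {
      while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
      int d = data.load(std::memory_order_relaxed);
      // x86-TSO hardware keeps the stores and loads above in order, so d is 42;
      // a weakly ordered ARM core may reorder them, so d could still be 0 here
      // unless the translator emits acquire/release barriers.
      (void)d;
  }

  int main() {
      std::thread t1(producer), t2(consumer);
      t1.join();
      t2.join();
  }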


This actually isn't new -- POWER processors have this exact same hardware support for emulating x86. It was present up until POWER9; the (unreleased) POWER10 drops it.

POWER chips up to and including 9 support "SAO", Strong Access Ordering mode. It is mentioned in exactly one sentence in the CPU manual: "The POWER9 core supports the SAO mode defined in Power ISA."

This feature appears to solely have existed for QuickTransit's translation software, as POWER has a weaker memory model than x86.

Linux briefly mentions this in the mprotect(2) man page:

  PROT_SAO (since Linux 2.6.26)
    The memory should have strong access ordering.  This feature
    is specific to the PowerPC architecture (version 2.06 of the
    architecture specification adds the SAO CPU feature, and it is
    available on POWER 7 or PowerPC A2, for example).


What specifically made you wish for ARM? Is there something specific to ARM which is not possible on x86?

I'm personally looking for factors like low power, high performance, snappiness, and 4K display support.

I have no real wish for ARM, because it doesn't matter what architecture I use as long as it's compatible with what I need.

If Apple had brought out an x86 M1, it would also have helped the market.


It broadens people's minds, especially software designers' and low-level programmers'.

There's a lot of software (esp. low-level libraries, scientific applications, etc.) which is optimized (sometimes hand-optimized) for a specific architecture. Not all software is released with a simple "compile -> test -> release" cycle.

Some resources on a CPU are extremely limited (e.g. dividers) and using them is expensive. Alternative architectures changing these balances, or being more efficient in some workloads, is game-changing. This is what GPUs did for the AI, ML, simulation and maths domains.

e.g. Eigen's speed increases 300% when compiled with optimizations enabled; its solvers especially show this performance shift.
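
To make that concrete, here's a plain Eigen snippet (illustrative only): which vectorized kernels it dispatches to (SSE/AVX on x86, NEON on ARM) is decided at compile time by flags like g++ -O3 -march=native, which is roughly where swings of that size come from. The exact figure depends on the machine and the workload.

  // Ordinary Eigen usage; the heavy lifting is dispatched to SIMD kernels
  // selected when this translation unit is compiled (e.g. -O3 -march=native vs. -O0).
  #include <Eigen/Dense>
  #include <iostream>

  int main() {
      const int n = 512;
      Eigen::MatrixXd a = Eigen::MatrixXd::Random(n, n);
      Eigen::MatrixXd b = Eigen::MatrixXd::Random(n, n);

      Eigen::MatrixXd c = a * b;            // dense matrix product, vectorized per-arch
      std::cout << "checksum: " << c.sum() << "\n";
      return 0;
  }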

So, if ARM can change these balances and be more efficient under some workloads, it can carve out a space for itself beside x86 and GPUs. This is why Apple's efforts and the M1 are so important.

When there's only one architecture (x86, CUDA, etc.), innovation stagnates, people's imagination shrinks.


The M1 mac isn't the first ARM based PC, and it's not the first popular one. Most Linux users have had plenty of ARM based PCs for a while now; often (at least for me) they've outnumbered the x86 based ones. Most distros and OSS have dealt with the issues on arm already.


> The M1 mac isn't the first ARM based PC, and it's not the first popular one...

However, it's the first popular and heavily backed ARM CPU that can compete with high-end x86.

> Most distros and OSS have dealt with the issues on arm already.

I've also used a lot of ARM-based SBCs for a long time, but running a scientific application fast is not the same thing as booting Linux or making it reasonably performant/stable.

My day job and personal research involve supercomputers a lot, so I live in parallel applications / architectures. There are certain scientific applications which run considerably faster on previous-generation hardware than on current-gen hardware, because they're heavily tuned for that generation's memory subsystem latencies and IPC characteristics (fetch and retire capacities and pipeline latencies, to be exact). This sometimes means the code contains proper hints or hand-tuned asm for speed-critical sections. Eigen is a very good example of this kind of optimization: it optimizes itself during compilation by selecting architecture-optimized paths via hints in the code and help from the compiler.

So, having an ARM CPU which can surpass x86 will make these communities take note of it. I'm not sure how Eigen performs on the M1, for example (we'll see soon enough), or other software packages which are optimized extensively for x86.

These software packages will start to get optimizations for the M1 / Apple Silicon, since the Mac is pretty popular amongst scientists due to its POSIX nature and hardware quality.

At the end of the day, yes, it's a glorified Raspberry Pi, but with the looks and performance of a supercar, so it'll attract a different kind of user base and development effort.


There's 'popular', and then there's mainstream. It looks like AS is set to become mainstream. A mainstream ARM desktop platform is going to be revolutionary in that it will force mainstream commercial developers to include ARM compatibility.


I always wanted ARM too, because x86 is of no interest to hobbyists using e.g. microcontrollers. I like learning low-level stuff but never felt like it was worth the investment to properly learn the oddball x86 assembly code. I have an Amiga background where I did some assembly coding on the Motorola 68000. I absolutely hated the x86 hardware platform when I switched over. It felt like everything was a kludge. Too few registers. Weird segment registers, etc. ARM looks much nicer, and you can use the same instruction set on tiny microcontrollers.


x86-64 is much nicer to work with than 32-bit x86, but I agree.

I hope that AMD comes out with an ARM offering. I would love to see ARM PCs.


So the assembly code is different?

Is it easier, more logical?


It is useful to differentiate between easy and hard as well as simple and complex.

x86 has instructions which can make many things “easy,” but they are not simple.

ARM is far more logically consistent. x86 is really messy and complex, but it often has lots of convenient instructions.

So I would say ARM is easier to learn, and ARM code is easier to follow, as everything is more logical, regularized and consistent.


Microsoft's x86 emulation on ARM is nothing like this, which is what is so surprising.


Indeed. I'd happily buy an ARM-based laptop, but I just can't let go of my legacy software.


I'm personally both impressed and surprised by how well this M1 chip performs. But one thing most reviews omit to mention is that it has a fab process advantage compared to the competitors, especially against Intel. And going from TSMC 7nm (which the latest line of AMD CPUs is using) to 5nm can bring a few tens of percentage points of power consumption and/or performance improvement.

This of course doesn't take anything away from the design, as Apple has clearly had a hefty lead over other SoC designers in the smartphone space for years.


I suspect they do not want to mention it because a fab process advantage isn't easy to understand, as the processes aren't all the same. For example, TSMC's 7nm process is more comparable to Intel's 10/14nm processes [1]. It's not clear what TSMC's 5nm process corresponds to in Intel's node sizes.

As you said, the design itself is also key. Look at how well AMD was able to optimize Zen 3 vs. Zen 2: a ~20% IPC gain on the same 7nm TSMC process without increasing power usage too much.

It's going to be interesting to see what Apple can do for the next generation, which I suspect will be on 5nm again.

And Ars did mention it in this review:

> Lastly, let's remember that the M1 is built on TSMC's 5nm process—a smaller process than either AMD or Intel is currently using in production. Intel is probably out of the race for now, with its upcoming Rocket Lake desktop CPU expected to run on 14nm—but AMD's 5nm Zen 4 architecture should ship in 2021, and it's at least possible that Team Red will take back some performance crowns when it does.

[1]: https://www.techpowerup.com/272489/intel-14-nm-node-compared...


*At all. AMD didn't increase power draw from the 3XXX to the 5XXX series.


> But one thing most reviews omit to mention is that it has a fab process advantage compared to the competitors, especially against Intel.

This is mentioned in the article.


If you listen to interviews with Apple’s chip team, they talk about optimizing performance of the M1 based on how macOS works. An example is how they handle allocation and deallocation of objects: Objective-C and Swift use this a lot, both in loops and for their reference-counting memory management. The M1 is 5x faster than Intel for those operations.

That kind of optimization between hardware and software can have a big impact and is part of Apple's secret sauce.


Exactly! This is why I am holding out for the next generation of CPUs and GPUs, which should be 5nm.


You might be waiting a long time. Hope you have a decent setup already.


One year? AMD moves to 5nm in 2021 according to leaked memos.


At best it will be late 2021, and widely available in 2022. AMD normally uses high-performance nodes with mainstream-node pricing. And that is assuming there is that much capacity left; as far as I can tell, ASML hasn't been keeping up with their shipment numbers. Possibly one reason why rumours are pointing to Apple using 4nm instead of 3nm in 2022.


What does moving to 5nm mean here? Starting to build stuff with 5nm, or releasing 5nm CPUs AND GPUs? The point is we don't know... and waiting for the next gen is always an option -- what about the next next gen?


How many years will it take before an Apple competitor can make an ARM ISA SoC that is anywhere near "x86-64 high end" performance? Is anyone trying currently?


Why does it have to be ARM? It seems way more likely that AMD will continue improving their Zen x86-64 cores to stay within the envelope. Their improvements have been very notable as well and the M1 doesn't humiliate Zen3s in the same way it does anything that Intel built.


And Zen 3 doesn't even use TSMC's 5nm node, whereas the M1 does. Apple has paid a ridiculous amount of money to get all of the 5nm capacity for 2020, but it has definitely paid off when you look at all the hype.


Yeah, this is what is jumping out at me. Not that the M1 isn't extremely impressive, but a near-future 5nm AMD will be a better "apples to apples" comparison. Intel is being left in the dust, though.


Because the amount of legacy baggage on x86-64 (namely continuing to allow 32-bit x86 and even 16-bit instructions to run) is ridiculous.


Google is trying to make the open-source hardware space more mature. Hopefully that will produce some serious competition.

https://opensource.googleblog.com/2019/09/unleashing-open-so...

https://news.ycombinator.com/item?id=21633406


AWS's seems fairly impressive. I wouldn't expect much from Qualcomm, who seem kinda moribund.


Graviton 2 single-core performance, AFAICT, is about the same as the best Qualcomm has to offer currently (though obviously AWS is putting a lot more cores in a chip), and the M1 is about 1.8x-2x as fast as both of those (based on SPEC2017).


> Moribund: (of a thing) in terminal decline; lacking vitality or vigour.


There were a few leaks suggesting Google might be working on their own ARM processor. Qualcomm is the only other company capable of competing with Apple's SoCs. They tried with the 8cx (7 watts) but it was not very successful. With custom X1 cores they may be able to achieve good performance.


I don't think anyone will need to worry about Google. As soon as they realize it's actually hard, and as soon as they hit the slightest roadblock, they'll give up, like they literally do on every fucking product that isn't a huge instant hit that everyone fawns over.


Google is not really good at hardware; they tried high-end phones but were not very successful. They realised that and moved to mid-tier phones. More than performance, they are focusing on creating a Google ecosystem where all their hardware and services integrate seamlessly.


I think their problem was they just tried to be too clever about the whole thing. If Google just produced a modern looking phone with top-tier specs and a nice clean Android install with no bloatware and some extra AI smarts, they'd be having trouble keeping up with the demand. Instead, they came to market with ugly, outdated designs to support whizz-bang features that nobody cared about. I feel like they almost hit the mark with the Pixel 5 line, but for some reason they lost the confidence to stick an 8XX-series processor in it.


> If Google just produced a modern looking phone with top-tier specs and a nice clean Android install with no bloatware and some extra AI smarts, they'd be having trouble keeping up with the demand.

Google had most of that (excluding AI) when they owned Motorola's phone unit. The phones had clean Android, respectable specs, inoffensive designs, and very competitive pricing. Unfortunately, Google had to unbundle Motorola, in part due to antitrust concerns, as well as Samsung's public and private grumbling about the arrangement (Tizen, and a handful of Windows Mobile phones, were promoted as a shot across Google's bow in the same timeframe).


Yeah, as it turns out, making exceptional shit from the ground up is really really tough.

Hence the reason Google Fiber got laughed out of existence. Google thought everyone would just bend over backwards and laud their arrival. I guess they were shocked that it would require money and actual effort, like most difficult undertakings.


Contradictory evidence to your point: they regularly design all kinds of data center HW (they're famous for starting the whole paradigm of building data centers with consumer HW) and build best-in-industry ML accelerators (TPUs).

Don’t mistake building and selling consumer HW for building components. Very different problems. The former is equal parts art and engineering. Building your own CPU will be vastly simpler. Whether or not Google has the same level of coordination between HW and SW to have Android benefit so greatly, I don’t know. It’s also unclear how big a step building the CPU is for Google. There are lots of talented people on that project. That being said, it’s already behind schedule if I recall correctly, so it legitimately could be causing them problems. Or they don’t have enough direction and good leadership to actually execute competently.


I don't know - imagine a chromebook, but with a custom SOC with compiler optimizations designed to improve battery life on auto-play video ad popups. Seems right up their alley /s


I see Google moving from an advertising model to a subscription-based one. They did the same with YouTube.


Qualcomm gave up designing their own mobile cores after the Snapdragon 820 (all 'Kryo' cores since then are lightly customized A7x's), and then the team got axed when they gave up on servers.

That said, when Intel's mobile team in Israel took their gloves off...


Companies giving up at the wrong time is just so common. AWS has big plans for ARM in the cloud; in the long term I see ARM entirely replacing x86. For the consumer market, power consumption may not be a big deal, but for cloud companies running millions of instances it very much is.


A lot of it is that it's hard to design a modern application CPU. Unlike the others, Apple was able to put it all together and had the advantage of tuning for their own software (e.g. the NSObject optimization).

Graviton2 uses an ARM-designed core and actually cut the cache sizes a bit from ARM's recommendation, and it's certainly competitive if not an absolute 'wow!' like M1 is. (Apple could have shipped at that level 2-3 years ago.)


For many workloads the AWS Graviton 2 is already competitive. ARM currently points to a roadmap of 30% performance improvement YoY, inclusive of process node improvements. I would not be surprised if Amazon reveals Graviton 3 at re:Invent 2020, and Graviton 4 next year on 5nm.

Much of this R&D will filter through to other ARM IP, so it is perfectly reasonable to expect it will be available on the desktop as well. The problem is no one has the incentive to push ARM on Windows against Intel on its home turf.


I don't see why it would be hard. Iterating on an architecture is harder than starting one from scratch. If the software market moves to ARM and there are bigger opportunities on the laptop/desktop side, it will attract other players in the high-performance market. Until now there was very little incentive due to the lack of software on mainstream operating systems (Microsoft learnt this the hard way). The Linux market on laptop and desktop has been way too small to attract big investments in the area. After all, x86 is not dead; AMD is competing pretty well with their Zen architecture. It will be interesting to see what the next iteration brings in terms of power consumption and performance.

I also wonder if being an SoC with integrated memory helps the M1 in terms of performance (I guess it must improve data access latency). It might push competitors to build high-performance SoCs.


  > "Iterating on a architecture is harder than starting one from scratch."
Any reason to believe this is generally true? This seems like the same fallacy that leads to people believing that scrapping all the code and starting from scratch will make a project better, and typically seems to end in more-bugs and fewer-features with delays in release.

It can be true when the old code/design is just incredibly bad and needs to be tossed, but I don't think it's true in general.


When the iterations on a given project or architecture start hitting diminishing returns, trying new things from scratch is often a good strategy. I assume the commenter meant the quote for this context, but failed to make it explicit.


Scrapping the code and starting from scratch is often better than iteration. There are some famous examples out there which try to give the impression that this is a terrible idea but often one just gets half the story.

The devil is in the details. It is a question of execution and need. Who is doing it etc.

It is easier for a team that made the original system to make a better rewrite because they understand past mistakes better.

An entirely new team may simply replicate the mistakes of the past.


It's true if you read it as "breaking backwards compatibility is easier".

Intel x86 chips would have had things like out of order execution many years earlier if they didn't need to be compatible with old software.

Some people have said that a lot of the 737 Max problems are due to Boeing execs insisting on keeping the same airframe.

I admit that isn't quite what the parent said, but realistically any architecture "from scratch" is going to reuse a lot of stuff from existing designs that works well.


Give it a few months, since the Cortex X1 should be competitive in laptops.


ARM claims the X1 has about 30% better integer perf than the A77; given how the A77 performs in the Snapdragon 865 (the M1 is roughly 1.9x faster, per the SPEC2017 numbers mentioned elsewhere in the thread), that would mean the M1 is only about 45% faster single-core than the X1 (1.9 / 1.3 ≈ 1.45).


We ran some benchmarks for it on Postgres, and the results seem pretty much in line: it's some impressive performance - https://info.crunchydata.com/blog/postgresql-benchmarks-appl...


So all of the compared Intel CPUs are mobile chips, yet the AMD CPUs are desktop? That's not very fair.


I ordered a MacBook Pro. I haven’t bought a Mac since 2012. This computer has the same feeling as the original Core Duo MacBook I got, right during a transition, and that thing was a workhorse. I plan to use this machine just as much, and I’m just as excited as I was getting that MacBook as my first Mac.


Just need VMware Fusion to catch up, a 16" MacBook Pro with more than 16GB of RAM, and I'm in!


With this I'm more excited about ARM on servers.

Microsoft and Google already have ARM machines but haven't been very successful with them. I think Apple's success will convince them to be more serious and put more resources into ARM. ARM on servers would mean even cheaper cloud prices (or more performance for the same price).


IIRC AWS is already offering ARM-based MySQL and PostgreSQL databases, and they appear to perform pretty well.


I would love to be a fly on the wall at Intel.


How does this work in practice? Do you wipe the OS and install Linux or Windows?

I'm not really kidding; any step into Apple's ecosystem is dangerous given their poor track record on customer service, privacy, hardware costs, hardware quality, security, and... I'm going to stop and get back to work.

There are bigger problems to be solved. I don't think one expensive computer is going to change anything.


When Intel manages to get its 5nm fab process ready, the single-core performance will be significantly higher than ARM's due to Intel's superior architecture. They're going to figure out a way; don't count them out yet.

I'd go long on Intel, it is oversold.


I don't know what happened to Intel. Whatever it was, it will be studied for years at Harvard Business School and the like. They had been consistently a few nodes ahead of the entire industry for decades.

But, whatever it was, it was bad. They've been stuck basically at the same node for 5 years or more.

And, according to the article, their next CPU will still be at 14nm, which is completely nuts. There's no way you can compete with such brute force disadvantage.

It's remarkable to watch, but I wouldn't count on Intel getting their act together anytime soon.


> I don't know what happened to Intel.

I worked for Intel for a few years, 2005-2007. It's hard to put into words what happened to Intel. The simplest way for me to explain it is that the employees didn't take risks. A different way to explain it is that most people at the company were too career focused rather than focused on the actual business.

But this is how disruption happens! Major players like Intel can always see where the industry is going, at a high level, but the rules of each generation in an industry are always murky.

IMO, the "ARM" generation of PCs is a shift to vertical integration. Intel set itself up for horizontal integration, and the openness of ARM means that the lines between "Computer manufacturer and CPU manufacturer" are going to increasingly blur.

Intel's business basically treats PC manufacturers like franchises, and Intel's culture focuses more on careers and less on risk and the business. This means that they're going to increasingly have trouble adapting to the new rules that come in the "ARM" generation.


>A different way to explain it is that most people at the company were too career focused rather than focused on the actual business.

Every time there are market disruptions, people come up with retroactive, "just-so" stories to explain the problems. In Silicon Valley it's typically blamed on the MBAs.

The idea that a 50 year old company, a pioneer of the computing industry, with over 100,000 global employees has run into trouble because employees are "too career focused" is such a story. Is there any data that bears this out? How would you measure it? Are the employees at AMD naturally risk takers?

Intel made some missteps in an industry with a long planning lead-time. They may or may not recover. But the reasons are not so simple and concise.


"The Innovator's Dilemma," https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma, explains the situation very clearly. It's a pattern that repeats itself over and over. Intel hasn't been able to prepare itself for disruption from ARM. The graphs in the article clearly show how ARM is disrupting x86.

The book that I link to provides many examples, but it focuses on how hard drive companies struggled every time the physical size of the disk shrank. Even when they knew "it's time to make a smaller disk," they'd do something silly and be non-competitive.

In the case of Intel, when I interned in 2002, they knew mobile was coming and tried to set themselves up for it... but the rules of selling chips to phone makers are different from those of selling chips for PCs. (Just like what the book explained in its examples.) In Intel's case, the margins were too slim in mobile... which is a typical symptom of how a company gets disrupted.

> Are the employees at AMD naturally risk takers?

Don't confuse AMD (company) with ARM (architecture). ARM chips power all mobile phones and smart TVs. They're cheaper, and now are going to be better than x86 for PCs. (High performance, low power, low cost.) This is a textbook example of disruption. What you'll see are smart TV and phone manufacturers selling ARM PCs that are cheaper than x86 PCs, but perform better. Don't believe me? Just wait until someone realizes that all they need to do is swap Android TV out for Windows (ARM) in the assembly line for their smaller smart TVs, and throw a keyboard and mouse in the box instead of a remote.

(FWIW: AMD was never a threat to Intel. They couldn't manufacture in big enough volume.)


Thanks for that. If you can share, what do you think could be done at the higher level to prevent this?


So, I'm basing this on "The Innovator's Dilemma:" https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma

One thing they could do is invest in ARM manufacturers, and / or startups that are in the ARM business. As these companies grow and Intel shrinks, slowly let the investments eat themselves from the inside. For example, every few years one of Intel's fabs could be traded to one of their ARM investments. Later, as executives retire, leadership from the ARM manufacturers or startups become Intel executives. (Edit: This is what Disney did with Pixar.)

Another thing they can do is very openly admit that they are being disrupted. The CEO can basically dictate that they have to get into the ARM market and sell ARM chips according to similar terms as existing ARM manufacturers. According to "The Innovator's Dilemma," this approach is hard to do. (Edit: It's hard to do because institutional momentum and personal habits and assumptions are very hard to break.)

And, another thing is that Intel could start a business unit that behaves like a startup. It would get the majority of the CEO's attention; with running the existing business falling to other executives and managers. This new business unit would follow the ARM rules; and it would grow at the expense of the rest of the business. Intel might even need to sell their own computers.

Honestly, I suggest reading the book. It explains it better than I can. (I also suggest watching how major car manufacturers switch to Electric. GM and VW seem to be doing this.)


Thanks, I know the book. I understand being disrupted by something new but being beaten at your own game seems something else entirely.

It wouldn't surprise me if the world moved from x86 to ARM despite Intel's better fabs. Companies being so good at, and so focused on, their own business that they miss a fork in the road is a common tale.

But instead, they're completely stuck and unable to perform at what used to be their core strength.


I think they missed the boat with EUV and tried using existing tech to reach 10/7nm

https://www.tomshardware.com/news/tsmc-euv-tools-order

TSMC is making massive orders and seems to already be using that.

Intel hasn't even started - "Intel is also expected to start deploying EUVL systems sometimes in 2022 when it starts making chips using its 7 nm node."


Sure, but why? How have they become this blind and stagnant?

Their architecture has always been on the mediocre side but manufacturing has been uncontested for almost 50 years. What makes them suddenly hit a wall while everyone else plowed through?


If the "wall" was the current wavelengths (ArF) just don't cut it, then the competition has gone around the wall with EUV.

I have no insider knowledge but EUV is super expensive. It could be that the bean counters tried to delay one node too long.


Everyone knows what happened to Intel. They bet everything on their 10nm process - which has greater transistor density than TSMC's 7nm process - and they lost.

Intel flew too close to the sun and their wings melted.

The vitriol everyone has for them is based on the fact they could have been pumping out cheap 4 / 6 / 8 core chips for people, but instead held the entire world of computing at basically quad-core CPU level for the better part of a decade.

And I share that vitriol. I'm happy they're suffering and are in pain. I hope the same thing happens to NVIDIA. I hope the same thing happens to AMD. I relish it when a company and its people are made to suffer when they hold back our technological progress in the name of fucking quarterly profits.


> I hope the same thing happens to AMD.

Why AMD?

> I relish it when a company and its people are made to suffer when they hold back our technological progress in the name of fucking quarterly profits.

Apple is doing the same thing. This only helps Apple; it's a proprietary design that's not even standard ARM (they have extensions), and it won't ever be available outside of Apple products.


> Why AMD?

I think OP is saying that once AMD reaches the dominant market position that they hope some upstart comes in and takes them out of their comfort zone.


I don't think it's as clear-cut as you make it sound. They've been through dozens of node transitions, some very challenging, to the point of pundits claiming it couldn't be done because of physics (300nm comes to mind; it turns out mask interference came to the rescue, or something like that).

They also reinvented themselves as a CPU company, went for lower prices with the Celeron, bit the bullet on Itanium and embraced AMD's x86-64, backpedaled from the Pentium 4 thermal disaster and went from the Pentium 3 to the Core architecture, took the server and HPC market by storm…

Never have I seen them this lost and in denial.


Oh, you mean overall why do they fucking suck so hardcore?

That's also an easy two-part answer: look at their CEO and former CEO. Krzanich (or however you spell his name) never knew a time (as CEO) when Intel wasn't top dog (and I'm not talking about 6 months of AMD being on top with the Athlon 64), so he's never been hungry. In fact, his "dismissal" from Intel for hashtagMeeEeEToOoO is nonsense. They had been dating for years, before the anti-frat policy was put into place, and their relationship was cleared by HR; Brian got axed because someone had to take the fall for 10nm sucking so much donkey dick.

The new guy isn't even an engineer, he's an MBA. Anything Intel does under his leadership will be solely the result of everyone but him. Contrast this with AMD, where Dr. Lisa Su is an amazing engineer and also a brilliant businesslady.

Part two is culture. Intel is full of arrogant jackasses who are deluded about how awesome they are. It turns out AMD and Apple have legitimate badasses designing their chips, but they also have managers that aren't causing unnecessary conflict between different teams. This same shit happened to Microsoft years ago too, by the way... turns out causing a lot of conflict on your teams in the name of 'dA cReAm riSing To ThE Top!' is a fucking idiotic management style, which should have been obvious to these sub-geniuses if they'd spent more time in their biology classes - humans are a collaborative species.


Same as Boeing. The engineers in charge who knew what they were doing got kicked out and replaced by bean counters and MBAs. It is how most big engineering companies get destroyed. No big mystery, really.

The Wall Street thinking is poison. Steve Jobs was terrified of this happening to Apple which is why he made sure product people had a lot of power.


This is hot nonsense. People in the academic computing world and in chip/system design were predicting this day would come as far back as the 1990s. If you looked at anything down at the ASM level and lower, it was pretty apparent what was going on.

There were plenty of RISC systems that were way out ahead of Intel that lost not because of inferiority but because they were being used for inferior business models.

DEC Alpha, Sun UltraSPARC, etc... they lost because they weren't pumping out zillions of consumer-level chips that ran Windows. Then there are companies like IBM that just failed to capitalize on what they had, or again kept their stuff super expensive and out of the big market for whatever reason.

I think lots of us knew this day would come for x86.

A lot of this, IMO, is more about the ever-decreasing importance of Windows. Windows used to be so much more important than it is today. Linux plus the cloud taking over so much of server land and dev/science land has changed things dramatically. Then you have smartphones coming from the other direction. Intel ruled back in the day because more computers ran Windows than anything else, and Windows was heavily tied to Intel. There's more stuff running on non-Windows OSes today than there ever has been (Mac, Linux, iOS, Android). All those other types of computers not running Windows gave competitors the money they needed to compete with Intel.


Intel is not even ready for 7nm; their 7nm is delayed until 2022. So the fastest they can do 5nm is 2023 (which seems unlikely when they couldn't even manage 7nm).

ARM is not the only threat to Intel. AMD caught up with Intel with Zen 2 and surpassed it with Zen 3. Zen 4 on 5nm is planned for the end of 2021. If they can manage the same IPC gains as Zen 1 to Zen 2 or Zen 2 to Zen 3, Intel is in serious trouble.


Skylake is an old architecture, and with a lot of tweaks Intel is still competitive on single core. If they could fix their process, or maybe just go with TSMC, they should be able to become competitive. It's not like Intel hasn't been in this position before, with the Athlon 64. The one thing they used to have was the fab advantage.


It's not even just about pure performance; it's about performance per watt. Intel may catch up on performance, but matching performance per watt seems very difficult.


I'd presume that a node shrink would lead to a significant decrease in power consumption.


Yes, for sure it will, but getting to ARM levels is almost impossible.


They'll catch up. Besides, they can skip 7nm as soon as their 5nm is ready. Intel has access to most of the same machines TSMC has.


> Intel has access to most of the same machines TSMC has.

OK, but where’s the beef? Intel has been stuck for a long time now. Do we know the management and corporate culture can catch up?


TSMC has been shipping 7nm since mid-2018. When Intel ships 5nm, Apple/ARM is not going to stop development. The M1 is their first laptop-grade chip. Even if they only get a 10% IPC gain annually, that's going to make the difference too big to catch up.


Does Intel have a dramatically superior architecture? Tough to tell right now, given the process deficit.

And anyway, is this transition entirely about performance? Apple is obviously selling it like that, but it seems like they really want the ability to fine-tune the architecture and SoC to their needs. And they’re not the only company with the desire and capital to make that happen; Microsoft, Amazon, and Google all must desire, on some level, to finesse systems to meet their specific needs, and Intel isn’t in that business.



