What Arm's CEO makes of the Intel debacle (theverge.com)
49 points by m463 5 days ago | 55 comments






Intel has been hitting all (or enough of) the right notes for about 30 years. In our industry (perhaps everywhere) there is a constant tension between optimising in the current direction, doing the same thing but better (CPUs in general, x86 specifically), and trying another approach. This has to be managed while the world changes around you, perhaps requiring you to pivot in a completely different direction.

CPUs are still relevant. And x86 can still provide a lot of value and iterate forward. AMD has spent 20 years getting to the front. It is the short-term vision that is killing Intel.

AI accelerators are going to be useful and valuable until the point they are commoditised, just like CPUs. Until then someone (Nvidia) will make a lot of profit. They have used this money to buy their way into the datacenter with companies like Mellanox. Nvidia will be on top for a while, and then the cycle continues and we will see a new company on top.


In terms of performance, AMD spent around 10 years getting to the front, 2-3 years in front, then another 10 getting back to the front. However, until the introduction of Ryzen (and really EPYC) AMD hadn't been able to make substantial inroads into the market.

Laptops using AMD CPUs are more widely available than they have ever been, and they're still lucky to make up 20% of any given manufacturer's SKUs.

Intel has an opportunity to reclaim some of those areas with their more power efficient chips, but both they and AMD are facing more competition from non-x86 manufacturers than ever, and that competition is likely only to grow.

Apple's CPUs seemed to be a product of engineering without concern for backwards compatibility, e.g.: analyze current software, make the CPU do those things well. I wonder what Intel could produce if they came at a new line of processors with the same mindset, though that would require engineers to be in charge of the company again.


The last time they did that we ended up with Itanium.

Itanium was designed to be different for difference's sake, to build a moat of patents to prevent competition/commoditization, with a pinch of the second-system effect thrown in.

What is needed is a sober look and re-engineering.


Which we would be using today if AMD didn't torpedo Intel with AMD64.

That they've fallen behind in both fabs and processor tech is an indicator that something is deeply wrong. Boeing's EVP of operations being on the board can't have helped. This kind of deeply wrong smells like IBM in 1990, but without a second business line (mainframes) that generated a ton of money.

This article says a lot more about Arm than it does about Intel. A couple of sentences stood out:

> Arm is also rumored to be eyeing an expansion into building its own chips and not just licensing its designs

> if you’re a vertically integrated company and the power of your strategy is in the fact that you have a product and you have fabs, inherently, you have a potential huge advantage in terms of cost versus the competition

I don't get this strategy at all. Why is vertical integration of the designs and the fabs any advantage? Sure, there are probably small savings here and there from changing the design to suit the process. But like Intel you'd get fat and lazy and end up with weird tooling that only your one company uses. And you can't concentrate on being the best at one thing with huge volumes like TSMC.

Arm is way overvalued as it is, and now the CEO is going to try something stupid.


Having visibility across the value chain should be very helpful - you can avoid committing to undeliverable products and at the same time anticipate demand from your customers. Overall you should be able to run the same thing on much lower margins and be in a position to outcompete anyone who doesn't have the same advantage.

Unfortunately I think the temptation is to give in to the "faster horse that eats less" requests and to accept "we just can't do that, boss" on the other end. China seems to be moving to a model that couples integration with bouts of intimidatory violence, and so far that seems to be working well, but it does have costs.


Yeah that CEO doesn't understand the benefits of vertical integration.

The point of specialization and outsourcing is to amortize fixed costs over a larger customer base. If fixed costs were zero, it would never make sense to outsource, since the effective marginal cost would be identical in the non-specialized company versus the specialized company. Every company would produce everything it needs in house.

Fabs have enormous fixed costs. Any form of vertical integration is going to make your products more expensive, unless you specialize in some kind of underserved niche with no competition. TSMC will always have the lowest marginal cost.
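
To make that concrete, here's a back-of-the-envelope sketch in Python; the fab cost, per-wafer cost, and volumes are invented for illustration, not real numbers:

    # Illustrative only: invented numbers showing how fixed-cost amortization
    # favors whoever spreads the fab over the most volume.
    def unit_cost(fixed_cost, marginal_cost, volume):
        return marginal_cost + fixed_cost / volume

    FAB_FIXED = 20e9       # hypothetical cost of a leading-edge fab
    WAFER_MARGINAL = 5e3   # hypothetical marginal cost per wafer

    # An integrated designer running only its own volume vs. a foundry pooling many customers:
    print(unit_cost(FAB_FIXED, WAFER_MARGINAL, volume=1e6))   # ~25,000 per wafer
    print(unit_cost(FAB_FIXED, WAFER_MARGINAL, volume=1e7))   # ~7,000 per wafer

Only when your own volume is big enough to saturate a fab (or the fixed costs are somehow shared) does integration stop being a per-unit cost penalty.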

What Arm should be doing is partnering with TSMC and other fabs to build optimized cell libraries for Arm designs, investing in software tools for SoC integrators, and becoming a one-stop shop for proprietary IP cores and custom ASIPs, so people don't even think of talking with Cadence or Synopsys.


He's too nice to call out the real problem, which is hubris.

Intel got arrogant, remained arrogant and, despite getting absolutely pummelled by competitors on share price, believes it is special.

Intel needs to realise it's not going to catch up to TSMC, and so should focus on cannibalising all other competitors and moving into spaces where TSMC doesn't operate. It's going to be a lower-margin game from now on, but Intel can survive. Oh, and be the nicest folks in town, not shit on the people we need, like Gelsinger did.

Intel is in effect a Will Ferrell movie character. The character starts out arrogant, becomes arrogant and stupid in defiance, and then potentially, finally, sees the error of its ways and grows a bit.


For me, Intel needs to buy Oxide or Sidero.

And commit to building a simple, powerful, enthusiast/startup-friendly platform for running mini-clouds at home/on-premise. And start bringing tech like Optane, Tofino, etc. to developers so they can get better price/performance than the soon-to-be ARM-dominated cloud.


Didn't 'Gelsinger era' Intel abandon Optane and Tofino?

And listening to the most recent Oxide podcast [1] I can't think the Oxide team would want to stay long at Intel unless the new CEO took radical action.

[1] https://oxide.computer/podcasts/oxide-and-friends/2218242


Optane died because none of the cloud providers were really interested in it.

Which meant no developers could really develop for it so you're in a vicious circle.

It's why I think they need to have some way for people to easily use their tech at home/on-premise.


There is an interesting discussion on Optane on the Oxide podcast that I linked to. Long story short Oxide was interested but Intel's culture of secrecy got in the way. Not surprised the cloud providers weren't interested either.


Sure. But I want an Oxide for enthusiasts/startups/small companies.

I’d love an oxide homelab.

This is scope creep. Intel needed to release stable products. They aren't/can't.

ARM CEO will tell you otherwise, but RISC-V is the ISA that will be replacing x86.

Article is paywalled.

which means close the site and ignore the article. Or wait for some kind soul to post the archived version on HN lol.

I might get kicked out of the cool kids club for suggesting this, but, err, maybe we could use money in exchange for goods and services?

I know! it's a crazy idea. it'll never work. but still.


Yep. It will never work when everyone wants you to commit to a year of a hard-to-cancel subscription to read one article that is possibly lousy.

Especially since - see another comment thread here on HN - their "AI" thinks you're a regular reader when you click on them once per month.


The "service" of writing down things people have said out loud is pretty low margin. So your choices are to put a paywall up and hope people pay a flat rate for whatever you happen to offer each month or to increase your distribution as much as you can while embedding relevant static advertisements. Since delivery is exceedingly cheap it seems obvious which strategy to use.

> The "service" of writing down things people have said out loud is

Unclear why "service" is in quotes. Not many people get to be in the room where it happens. The people that do get to be in the room, absolutely do provide a real service to the rest of us who don't.


> Unclear why "service" is in quotes. Not many people get to be in the room where it happens.

You answer your own question. There's no reason at this point in history the "room size" needs to be artificially limited. Was the Arm CEO paid for their time? If not, why not?

> absolutely do provide a real service to the rest of us who don't.

It's a service for Arm, not you. If you wanted an unbiased analysis of Intel's position, you might seek out someone other than a direct competitor. Which makes paywalling this all the more baffling.


https://www.theverge.com/2024/12/3/24306571/verge-subscripti...

> Our original reporting, reviews, and features will be behind a dynamic metered paywall — many of you will never hit the paywall, but if you read us a lot, we’ll ask you to pay.


It is funny: HN wants reporting and publishing to be subscription based because ads are evil, while at the same time complaining when an article requires a subscription.

This is the second Verge article I've opened this month and I already hit the paywall.

I didn't count, but if I ever open a Verge article it's from being linked on here. And the title has to be non-clickbaity.

So... pretty sure I don't read the Verge a lot either. And I get the paywall.


The Verge has seemingly been doing this a lot recently and it's been very annoying. It's not obvious whether any given article will be paywalled or not. It would be nice if they at least marked them.

As per their latest tweets, it seems all of their articles are now subscription based.

I understand the change; eventually there won't be any publications left if people keep refusing to pay for them and use ad blockers as well.


Whether or not it's paywalled depends on how much you read. You can get around most of the paywalls by going incognito or clearing your cookies.

I have not watched Intel closely for a long time. Seeing that its market cap is down 30% over the last decade is crazy, considering how much IT in general grew over the same time period. What happened?

To see how the Intel CEO is thinking, I scrolled through a few interviews with him.

In this interview, Intel's CEO is asked to describe the difference between a CPU and a GPU:

https://www.youtube.com/watch?v=d07wy5AK72E

His answer is that the CPU is capable of doing general computing, while a GPU is made for very specific tasks.

I'm not sure if that is a good way to put it.

The way I would put it is that a GPU is a bit like an array of many CPUs. Managing this array of processors, all doing computation at the same time, brings constraints and management overhead. So a GPU is usually harder to program but delivers better performance.


In a nutshell, Intel's board was focused on extracting value from the company instead of innovating. While this was going on, compute was undergoing a transformation as everything transitioned to cloud solutions. By the time they realized, it was too late and the rot had set in. Now the company is full of mediocre middle managers and principals who spend all day playing politics instead of working on real problems.

> Intel's board was focused on extracting value from the company instead of innovating

But then they failed at that as well. Their share price has barely increased since the early 2000s. Currently they are valued at less than their 1996 valuation, even before accounting for inflation. The highest they went was a $263B valuation in 2021, which is nowhere close to the tech giants.


Intel is a dividend stock. You can't just look at share price.

> board was focused on extracting value from the company
From the company? Extract to where?

Their own pocket I'd wager.

A simple way to check whether a board/CEO wants to extract a lot of future value _now_ is to look at their accounting, and especially how they count physical assets.

If they count physical assets at their current selling price, you can probably get in long-term.

If they count them at their buying prices you might have to worry.

If they do 'future accounting', tying their estimated future added value into the assets, you can be sure the current owners are extracting the most they can _now_ and will try to drop the bag on future investors (or ask for a government bailout).


What is the selling price of physical assets? I would think Intel is not a reseller, so they don't buy and sell stuff but rather buy raw materials and then transform them into a CPU?

And how would counting physical assets at their buying price move money from the company into the pocket of the CEO?


Let's say Company A buys a machine to engrave CPUs for 10 million (no idea of the real cost), half of it with debt.

In normal/traditional accounting, you put the selling price of the machine (what it would fetch if you had to liquidate the company) on the asset side: let's say 8 million the first year, and 5 million on the debt side (they bought it with 5 million of investment money, leveraged with debt). The company has 8 million in assets and 5 million in debt: good ratio, healthy company.

Company B uses future accounting: it buys the same 10 million dollar machine. On the 'asset' side, you put 16 million: the machine is worth 8, and is expected to earn the company a net profit of 8 over its lifetime. On the 'debt' side, you put 5. At 16/5 the ratio is truly excellent. So good, in fact, that you can afford to take on another 5 million of debt and buy back shares/issue dividends. Now your asset/debt ratio is 16/10 = 8/5, the same as Company A, except your current shareholders have already recouped the initial investment, derisking themselves completely if the company ends up failing.
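
The same example in a few lines of Python, using the numbers above:

    machine_resale  = 8_000_000   # what the machine would fetch in a liquidation
    expected_profit = 8_000_000   # net profit the machine is expected to earn over its life
    debt            = 5_000_000

    # Company A: asset carried at resale value
    ratio_a = machine_resale / debt                                           # 1.6

    # Company B: "future accounting" folds expected profit into the asset
    ratio_b = (machine_resale + expected_profit) / debt                       # 3.2
    # ...which looks healthy enough to borrow another 5M for buybacks/dividends,
    # landing back at the same 1.6 ratio as Company A:
    ratio_b_after = (machine_resale + expected_profit) / (debt + 5_000_000)   # 1.6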

I think there was a Republican primary candidate who used that with his drug company; the company failed to produce anything of value, but only late investors lost anything.


Dividends and share buybacks, to shareholders.

Dividends primarily.

Buybacks

"Shareholders" meaning "executives".

exactly the same as every other F500 company...

Nvidia says otherwise.

The article sheds some light on what’s been going on. You’ll also be happy to know that the video you linked to is not a video of the current Intel CEO.

But he's the CEO under whom things went downhill, right?

I wouldn't say so no.

Personally I feel that the 5 years of reintroducing 14nm instead of getting 10nm out the door was what ate up any lead that Intel had over its competitors.

And don't forget that rumor has it Apple reported more bugs with Intel Skylake than the QA team at Intel had managed to report. Never a good thing when one of your biggest customers finds more bugs in your product than your own engineers have managed to find.

I wouldn't really blame all this on a single person though.


I think you are right about CPU vs. GPU. It’s complex independent cores vs. simple array cores.

The largest part of that “overhead” is the need for much higher memory bandwidth and caching due to having many more cores to feed.

And the added complexity of programming is managing groups of cores (and multiple groups of cores) so they are all as active as possible without blocking each other algorithmically or due to memory misalignments, page read/write inconsistency, or cross interference.

The much higher core count is achieved with smaller cores: streamlined Arithmetic Control Units, less pipelining (?), and a simpler/shared Instruction Control Unit.

Somebody correct me if I am wrong.


The CPU for complex stuff and the GPU for simple number crunching is a really popular narrative. It's true in one sense and nonsense in another, determined solely by the software it's running.

If your program has one or two threads and spends all its time doing branchy control flow, a CPU will run it adequately and a GPU very poorly. If the program has millions of mostly independent tasks, a GPU will run it adequately and a CPU very poorly. That's the two limiting cases though and quite a lot of software sits somewhere in the middle.
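
A minimal sketch of those two limiting cases (Python, with numpy standing in for "millions of independent tasks"; the function names and sizes are made up for illustration):

    import numpy as np

    def branchy_chain(x, steps=1_000_000):
        # Each iteration depends on the previous result and takes a data-dependent
        # branch, so there is nothing to hand out to thousands of simple cores.
        for _ in range(steps):
            x = x * 3 + 1 if x % 2 else x // 2   # Collatz-style step
        return x

    def independent_tasks(n=10_000_000):
        # The same operation over every element, with no dependence between elements:
        # exactly the shape of work a wide machine (GPU, SIMD) is built for.
        a = np.arange(n, dtype=np.float64)
        return np.sqrt(a) * 2.0 + 1.0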

The most concise distinction to draw between the hardware architectures is what they do about memory latency. We want lots of memory, that means it ends up far from the cores, so you have to do something while you wait for accesses to it. Fast CPUs use branch predictors and deep pipelines to keep the cores busy, fast GPUs keep a queue of coroutines ready to go and pick a different one to step along when waiting for memory. That's roughly why CPUs have a few threads - all the predictor and wind back logic is expensive. It's also why GPUs have many - no predictor or wind back logic, but you need to have a lot of coroutines ready to go to keep the latency hidden.
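
A toy model of the GPU side of that trade-off, sketched with Python generators standing in for GPU "threads" parked at memory stalls (purely illustrative, no real memory latency involved):

    from collections import deque

    def gpu_thread(name, accesses=3):
        for i in range(accesses):
            yield f"{name}: issued memory access {i}"   # yield = stall on memory

    # Keep a queue of ready threads; whenever one stalls, just run a different one.
    ready = deque(gpu_thread(f"t{i}") for i in range(4))
    while ready:
        t = ready.popleft()
        try:
            print(next(t))      # run until the next stall
            ready.append(t)     # park it while its "memory access" completes
        except StopIteration:
            pass                # thread finished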

Beyond that, there's nothing either CPU or GPU can do that the other cannot. They're both finite approximations to Turing machines. There are some apparent distinctions from the hosted OS abstraction where the CPU threads get "syscall" and the GPU threads need to DIY their equivalent but the application doesn't care. Threads on either can call fprintf just fine. It makes a bit of a mess in libc but that's alright (and done in LLVM already).


Thanks, that is a good summary.

> If the program has millions of mostly independent tasks, a GPU will run it adequately and a CPU very poorly.

Now I would say:

CPU - optimized to execute threads with independent instruction streams and data use.

GPU - Optimized to execute threads with common instruction streams and data layouts.

CPUs

As you noted: Optimizing conditional branches is one reason CPU cores are more complex, larger.

CPUs also handle the special tasks of being overall “host”. I.e. I/O, etc.

—-

GPUs

One instruction stream over many cores greatly reduces per core footprints.

Both sides of conditional code are often taken, by different cores. So branch prediction is also dispensed with.

(All cores in a group step through all the instructions of both if-true and else-false conditional clauses, but each core only executes one branch, and is inactive for the other. Independent per-core execution, without independent code branching. Common branching happens too.)
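
A toy Python sketch of that behaviour, assuming a made-up warp_if_else helper: one shared instruction stream, with a per-lane mask deciding which side's result each lane keeps:

    def warp_if_else(values):
        cond = [v % 2 == 0 for v in values]       # every lane evaluates the condition

        # Pass 1: all lanes step through the "if" side; only lanes with cond=True keep it.
        then_result = [v // 2 for v in values]
        # Pass 2: all lanes step through the "else" side; the complementary lanes keep it.
        else_result = [v * 3 + 1 for v in values]

        # Each lane commits exactly one result, but the group walked through both branches.
        return [t if c else e for c, t, e in zip(cond, then_result, else_result)]

    print(warp_if_else([1, 2, 3, 4, 5, 6, 7, 8]))   # [4, 1, 10, 2, 16, 3, 22, 4]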

Both CPU and GPU can swap between threads or thread groups, to keep cores busy.

Both optimize layers of faster individual to larger shared memory (i.e. registers, cache levels & RAM).


> I'm not sure if that is a good way to put it.

His explanation is actually closer to the truth than your array of many CPUs, because a GPU isn't that. It is / was moving in that direction, but it is certainly not many CPUs, nor Intel's Larrabee.




