I worked at Intel for several years. The problems are self-made, through years of control in the hands of number crunchers, complacency, and arrogance. I sat through leadership meetings where the only strategy discussed was: we're Intel, they would be fools not to listen to us. There is talent, but it is working with its hands shackled to its feet and the keys thrown away. There is zero morale. My memories from the years I was there are marked by bigwigs threatening people to leave with the offered severance lest they be forced to leave empty-handed, while those same bigwigs cashed out tens of millions. Egos were stoked at the expense of the company's future, and short-term metrics were tied to substantial bonuses that paid a few top execs tens of millions for optimizing for exactly those metrics, leaving the ruins for others to clean up.
There is something self-defeating about large tech companies. I wonder if this is fixable or if it is something that we simply have to accept. I don't know of a single large tech company that managed to keep their engineering spirit alive over the longer term, eventually all of them (even HP!) lose their way and end up being run in short-term, short-sighted mode which more or less kills them, even though they can stick around for many years afterwards on momentum alone.
I've been around a number of Silicon Valley organizations now, from large (30k+) to small (employee #58). I've seen some good and a lot of bad over the years, but it all seems to stem from one place: the sales organization. Now I'm not saying all sales organizations are bad, but in fact they do seem to follow many of the stereotypes they've been cast into over the years.
Some of Intel's recent moves seem to be driven by sales and marketing in a very tone-deaf manner. It's almost always been the case, in my experience, that sales leaders don't see competition. They continually talk about how what they represent is the best, and you can't have a lucid conversation with most of them because they don't fundamentally understand what may make the competition better, or even the fundamentals of their own product. It's black and white, and many seem to just go all in on FUD to a ridiculous level when forced.
When executives are far more focused on what Wall Street thinks than on what their own customers want, you know the cancer runs deep. I continuously hear things like "our customers want a subscription model for our software because it's more flexible and is often cheaper!" Neither is, generally, true when it's being broadcast like that. It's a dark pattern (most notably in enterprise software/hardware) for a company to pull in long-term revenue on a product that is, potentially, not evolving fast enough to warrant customers wanting to pay for an upgrade or new version. And a lot of SaaS exists just to "SaaSify" something, even though the customer could very well run it themselves, wherever they'd like, and be far better off.
It's just frustrating to have been in the industry for over 20 years now and to watch ever less inspiring, less charismatic, and more ignorant people continually failing up and driving companies down. I know C-levels who have lied, cheated, and been fired for it, yet they continue to land better positions, somehow. Some days it's beyond maddening to watch. But more often than not these folks are in some way, shape or form tied to the overarching sales machine.
Sales and consulting are two sides of the same coin.
Both bring in revenue, but neither creates product.
If you have leadership that allocates resources to boost revenue, without an understanding of why customers are paying, more and more resources and power accrue to sales and consulting departments. Eventually, this kills the company.
Above anything else, the lack of a distinct, politically powerful sales and/or consulting org is what makes startups startups. Most of the people fulfilling those roles are dual-hatted elsewhere, or at least sit close enough to people who are that they have a more holistic view.
> There is something self-defeating about large tech companies.
If you remove "tech" from this sentence I think it's still true, and it points you towards the solution: bias the economy towards small companies (for example, via progressive corporation tax). There's a bit of nuance here: I think a lot of the problem is with public companies. So we could probably do a lot by reforming the public ownership system.
It's not even companies. It's people! As they progress they become more risk averse and are terrified by the idea of losing their wealth.
That's why we have entire sectors of the economy focused on wealth preservation without volatility.
These people want a stable and steady climb upwards.
I think it's because there is an alarming lack of arrogance and self-confidence in the world. It's visible in every possible datapoint. People say those things are bad, but in reality they keep the system honest.
The best way to redistribute wealth is wealthy individuals overplaying their hand. It never happens, and there is something deeply wrong in Bill Gates being at the top of the Forbes rich list for 30 years.
His brain should have adapted to the new normal and compelled him to risk his whole fortune in the hope of making even more money, maybe becoming the first trillionaire. That's how wealth is redistributed: rich people becoming greedy and overplaying their hand.
It doesn't happen, rich people are happy being at the top and they are not ambitious enough to reach for the impossible milestones.
The only nut out there who does it is Musk but his wealth is not real yet, it's paper wealth.
> His brain should have adapted to the new normal and compelled him to risk his whole fortune in the hope of making even more money
Why do you think this should happen? That doesn't sound very rational to me. If you have more than you could possibly ever need then why would you choose to risk that? This is one reason why we should never reward people with these ridiculous levels of wealth in the first place.
> The best way to redistribute wealth is wealthy individuals overplaying their hand.
This sounds like a very wasteful and unreliable way to redistribute wealth to me. Unreliable because as you say it often doesn't happen. Wasteful because if it does happen it involves wasting enormous amounts of resources on a failed project.
IMO the best way to redistribute wealth is to not allow people to become as ridiculously wealthy as they are in the first place.
> If you have more than you could possibly ever need then why would you choose to risk that?
Because you don't simply do it for the stuff. You do it for the brain juices which are released when you win.
Bill Gates's biggest wins happened in the late 80s and 90s.
His brain should compel him to do better than that to release those elusive juices once again.
The only way to do so is having even bigger wins than the ones in the 80s and 90s.
> Failed project
The entrepreneur's failed project is the worker dream job where you get paid for doing nothing.
Kinda like the influencers hired by Bloomberg in his 2020 campaign. Nobody was checking if they were really doing the job and they just pocketed the money.
> Because you don't simply do it for the stuff. You do it for the brain juices which are released when you win.
Well sure, but those motivations exist regardless of any financial rewards. We are talking here about the additional effect of financial incentives. IMO the possibility of making say $1m or $10m is pretty motivating if you don't have much. But once you reach say $1B, financial incentives are unlikely to be as relevant to you. You're made for life, why should you do anything you don't want to do.
> the worker dream job where you get paid for doing nothing.
I think most workers would prefer to be paid to do something useful/meaningful, unless they can literally go and do something completely different with no restrictions. But that is rarely how billionaires' failed projects go. They usually involve a lot of people working very hard to no end.
> Well sure, but those motivations exist regardless of any financial rewards
The amount of the financial reward measures how much you are winning. The juices produced when you win $20 and those produced when your leveraged long in biotech stocks makes you $750k are not the same.
> They usually involve a lot of people working very hard to no end.
This is just a pessimistic outlook. Most stuff fails, but the few things that stick produce a big improvement.
And it's not just billionaires who face this problem. Government has an even worse success rate, but the home runs are paradigm shifts. The failures burn a lot: think about the CERN failure, that's $20B right there.
ITER fusion seems to be the same thing, and that's $50B.
The ISS never recovered the initial investment, and that's $100B.
All of those projects have something in common. They are big and centralised. I agree with you that it's not just billionaires that have this problem, but billionaires are one of the entities that do.
> The amount of the financial reward measures how much you are winning. The juices produced when you win 20$ and those produced when your leveraged long in biotech stocks make you 750k are not the same.
I suspect the world might be a better place if we had fewer people who are motivated in this financial manner, and more people who are motivated by the impact of their work (of whom there are plenty, and they're just as good as the financially motivated ones).
> and more people who are motivated by the impact of their work (of whom there are plenty, and they're just as good as the financially motivated ones).
Nobody does the work for the sake of doing the work.
People work for a combination of financial reward and social status, and they use that as a metric to measure how much they are winning compared with the other 8 billion people on the planet.
There is only a very very tiny percentage of people who are happy with just citations and the tepid approval of other academics
So if you roll back financial reward all the way down, then you have to pump social status all the way up.
It's going to be even worse. The cult of Steve Jobs and Elon Musk and Donald Trump would be the norm and would trickle all the way down.
And in fact, Bill Gates, Warren Buffett, the Waltons and all the people who have been populating the Forbes rich list for the last 30 years...they don't look happy.
They seem rather dull and desensitised to anything. Very robotic.
Very similar to the British royals and European old money.
They live in fear of losing their money and relevance instead of having the confidence to double down on their bets and hit yet another home run.
If you look at billionaires then it's pretty easy to pick the ones you'd want to be: Mark Cuban, Donald Trump, Rihanna, Cristiano Ronaldo....etc
Those types of billionaires, who are big spenders and also continuously betting on themselves, are the kind also good for society, because they make the economy go around and their projects are cash cows for other entrepreneurs looking for easy oversells.
Can't imagine living in fear like Gates, the Waltons and Warren Buffett, they aren't human.
"we" are society deciding on an economic system and taxation regime.
"we" currently reward people with this amount of wealth generally because of a majority belief in one or more of the following (none of which are true in my opinion):
(1) They deserve this money, because either:
(1a) it represents a proportional reward for work put in
(1b) any money gained under a market system are deserved/earnt, and taxing those earnings is stealing
(2) That rewarding people in this manner (without limits) is the only way to motivate people to work hard, and that this ultimately produces better outcomes than distributing our economic output more evenly
Are you absolutely sure that you and Bill Gates live under the same economic system? Because I suspect that you describe some other economic system than capitalism.
There are definitely benefits to having large companies as well, though. Economies of scale are a real thing for one, and especially in the semiconductor industry the price of modern production facilities (>20 bn per fab) is well outside the reach of any company you could still call "small".
There are, but even in these cases there are economic downsides: notably with semiconductors, the risk if any of these big companies fails. I think that if the benefit of scale is so great, then you should be able to prove it by withstanding the higher tax burden.
I think it's that successful momentum can paper over internal business failures for a while, which means you lose a sort of natural selection where the worse employees -- and especially the worse managers -- would screw up in an obvious way and get booted out. And without that pressure around, people don't have to perform.
Basically, a company that's in an extremely strong position from prior successes may lose internal feedback loops that kept the people and culture on track.
Perhaps we need to be more aggressive with laws encouraging competition (or more accurately discouraging leaders of a market from getting too large a lead). It really does seem to be better for most involved (except for those that want to capitalize on enough success for short term greed that destroys momentum and talent pools).
What that actually means in a situation like this, where technology and production pipelines are years long and a misstep can set you back a decade or more (like it did for AMD, and like it's doing now for Intel), I don't know. I do know it sure seems like markets with at least three major players are more healthy than those with two, as we keep seeing again and again (iOS and Android being another recent example, if exhibiting a different problem).
The skills that make you successful at producing a product in the first place (the various types of engineering, product design, marketing, process analysis, manufacturing, the real work essentially) are not the same skills that make you successful in a corporate organization as an individual (politicking). As time moves on and a company becomes a money-printing machine, because the 'real work' has already largely been performed, the organization becomes host to parasites, or I guess you can just call them people who are better and better at extracting personal value even at the expense of group well-being. I think it's sort of inevitable, due to finite human lifespans and non-transferable skillsets.
>There is something self-defeating about large tech companies.
It's all large companies. They think they are too big and entrenched to fail.
I've worked for 2 large non-tech companies where managers and leaders loved to say "We are X. We have the resources to do anything. If we apply ourselves, we are unstoppable." One of those companies tried to enter a new market and failed miserably because they kept trying to tell customers what they wanted instead of listening to their customers. The other company is a classic example of what people talk about when they make fun of "too big to fail" companies.
> "We are X. We have the resources to do anything."
Management will say this and in the next breath say that we either don't have headcount to alleviate an understaffed team or don't have budget to retain some amazing engineer whom they want to see how long they can underpay before they leave. I've lost track of the number of times I've told management how important it is to retain certain individuals, and how they absolutely should make sure those people are happy with everything within management's control, and then they ignored those warnings and the talent left. These losses happen slowly and then suddenly, when enough of the remaining talent looks around and realizes that most of the good engineers have abandoned ship.
The problem comes from the sociological fact that groups differ from individuals in serious cognitive, sociological, moral and motivational ways.
And as a general rule, groups of people DO NOT SCALE WITH SIZE.
This is the fundamental reality that has been shown to limit the size of companies: the ability to reliably transmit critical operating information from one side to the other as needed fails with increasing size.
And computers do NOT improve things because it's actually a specific type of information required: knowledge applied with skill in a timely fashion. That is what fails with size.
Related to this are the inevitable "Chinese Whispers" losses that arise as the organization grows vertically and as decision making is centralized: the knowledge about the markets served inevitably goes 180 degrees out of phase, where every direction and decision becomes exactly the wrong or worst possible one. The system starts to tear itself apart, destroying its competitive abilities and opening up large swaths of market for smaller competitors to take and occupy.
The real world has more information in it than what any planning or system can describe accurately enough to use for predictions. So without accurate feedback from the outside reality, the corporation will fail.
Big tech companies can only be "fixed" by breaking themselves apart so they can regain the proper feedback loop accuracy, or by doing what HP USED TO DO: pushing decision-making down to the lowest possible level in a federated form. This isn't perfect either, but it's better than a top-down command-and-control system when it comes to performance and competitive fit.
Control-freak executives always screw both of these up because they can't handle letting go.
Some of the worst aspects of IBM's internal dysfunction survived Gerstner, even though he claimed to have fixed them. It's hard to blame him personally; he's just one person. The important point is that he perfectly identified the problems, which I find amazing considering most corporate CEOs don't have a clue about how their companies actually operate.
My guess would be that it's the fate of all publicly listed companies. They tend to focus on short term shareholder value. A lot of family owned companies focus more on the long term.
Fair enough, but they had gone quite far down that exact same road before Jobs came back, and their current trajectory is a replay. It's a consumer gadgets company now, and one that hasn't really innovated in the last couple of years.
I'm sorry, the transition to their own silicon, and the tour de force that is the M1 doesn't count as innovation? What would count? We're literally discussing the failures of an iconic chip design firm, so it's particularly ironic that you would overlook this.
You mean like IBM is innovating when they use their own silicon? This is just vertical integration at its best, I'm not sure that M1 is a 'tour de force', to me it's just another CPU that was tailored for a specific niche.
The first ARM, that was a tour de force. The 6809 was too, and of course the 4004. But most other CPUs to me are run of the mill and the M1 is in that sense to me nothing special though of course it will give Apple a bit of an edge, but when all is said and done it's just another ARM based SoC.
At best it is an optimization, the most interesting part of the M1 is the dedicated neural net, and I'm not sure if anybody has already done something with that that the main CPU could not have done.
How is the M1 not an innovation? I'm not claiming the M1 is the first CPU to do it, I don't know, but they executed it flawlessly. Great performance, great battery life and great silence. If you don't call it innovation, at least call it a breakthrough.
Nothing M1 did would have had an influence on what Intel is releasing now. The process of designing a chip takes years, a product that is releasing next week will have been functionally design complete a year ago.
IBM does a lot of research into basic chip technology and designs their own chips pushing forward with new architectures and features. Yes, of course they count. It boggles my mind that you think they wouldn't.
The M1 laptops are currently, in many respects, miles ahead of the competition in the general consumer market, so calling the chip just tailored for a specific niche isn't really true.
> and the tour de force that is the M1 doesn't count as innovation?
Not really. You can't buy an M1. It's a part of a bigger product. Is the M1 impressive alone? Or is the MacBook and everything else including the M1 impressive?
And it's not the first chip Apple has done. It's just the first one for a small selection of computers. The M1 a tour de force? No.
The trajectory doesn’t seem comparable. Apple is the biggest company in the world, makes the best products in nearly every category they’ve entered (with real improvements every generation), and is still growing at a fast pace. If this is bad, I want to hear what good looks like.
> makes the best products in nearly every category they’ve entered (with real improvements every generation)
I like Apple at its best, but I wouldn’t say they’re the best in every (or even nearly every) category they’ve entered.
Yes they have iPhones (top range, even if they have competition from top range Android alternatives), iPads (peerless), Apple Watches (dominant), iPods dominated the MP3 market back when that was a thing anyone cared about…
But they also make really weird mice (Magic Mouse 2: charger on the bottom; many models: one button; Mighty Mouse: scroll nipple; iMac: unergonomic puck), the Apple TV and HomePod are nothing special (the latter is a pity, given I don’t trust their Voice Assistant competitors), and the device power cables have been infamously fragile for a long time.
Apple is becoming like Bentley (or similar higher-end car manufacturer), where the improvement is not in the feature but in the overall package. In other words, the package you get is the feature.
Some improvements, like fit & finish, tolerances, endurance and so on, are not visible on paper. Others, like battery life and weight, are visible on paper. When these combine with the xOS (where x = {mac, i, iPad, tv}) ecosystem, the whole thing becomes visible, or palpable, one may say.
It's not a ubiquitous thing to have in the technology space, and it's only possible with the tight integration Apple provides, and strives to keep.
No edge in technology lasts forever, and once a new design is out it's much easier for others to copy. The fact is M1 was a nasty shock to their competition and was the biggest single advance in desktop/laptop CPU performance and power efficiency in a long time. If that doesn't count as innovative, I suspect there's some shifting around of goal posts happening.
Not old enough yet, if you count from when they were “reborn” in the 2000s with OS X, the iPod, and the iPhone. And you should count from there, because before that they were nearly bankrupt.
Spent years at Intel. Definitely saw what is described here. Must also add that it is a big company with no clear culture, so many people will have been in teams for years and not have encountered what skynetv2 is describing.
My two cents: Intel is full of people who are career oriented and not product oriented. Their main goal is to get a promotion, and often find means to do so without contributing anything meaningful to Intel's products. It's also full of senior leaders who believe strongly in credentialism[1], complexity[2], and style over substance (i.e. how the message is delivered vs the content).[3]
From a SW standpoint, I have not yet been in a team where all team members can handle branches.[4] This is considered quite acceptable.
In one team I was in, I was leading the efforts for a product that required features A and B by the customer. I was a junior member of the team with no domain knowledge, but I was somewhat of an expert in that customer's domain. Everyone was on board with the technical work. In every meeting we had for about a year, there would always be some person in the team who'd suggest things that would nullify feature B. I would have to remind them that we agreed to do features A and B. The response would always be "Oh, we're also doing feature B?"
The person who said this varied from meeting to meeting. But I was very frustrated that they couldn't remember this basic fact, and they often ended up writing code that had to be undone. And then I'd have to deal with their frustration, as if I had never mentioned feature B to them. I can understand if this happens once or twice, but I had to remind them in every meeting.
But this was normal behavior. I was the odd person who thought this was unacceptable.
Oh, and coming to meetings unprepared is the norm. No one will read your emails briefing them about the meeting ahead of time.
[1] "Let's hire the PhD with no experience and not the internal MS employee who's already doing the job they are hiring the PhD for"
[2] "I don't care if your code sped up our workflow by 5x. It's just what, 200 lines of code? Anyone can do that."
[3] Presentations break most rules of effective communications/presentations. A senior person once told me "You explained things too well, and your slides are fairly sparse. Fill it up with jargon and lots of plots, and don't explain it as well as you did. If senior management understands your work too easily, they will believe the work you did was trivial. If they have trouble understanding it, they'll be in awe."
[4] One former manager: "Every person will get his/her own private branch. Do all your experimental work there. There will be no more branches." A senior member in another team said "Why complicate things by adding new branches for our various experiments? Let's just keep it in the main branch and enable the different algorithms via command line flags."
>I have to remind them in every meeting.[...]But this was normal behavior. I was the odd person who thought this was unacceptable.
For perspective, this is typical LargeCorp behavior, especially at a company like Intel, which makes $77 billion/year in revenue (~$700k/employee) at a 25% net income margin! The unwritten rule is that managers aren't incentivized to police this behavior. It's liberating to recognize this and modify one's approach.
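As a quick sanity check on the figures in that comment (taking the ~$77B revenue, ~$700k/employee, and 25% margin numbers at face value), the implied headcount and net income are easy to back out:

```python
# Figures as cited in the comment above; rough, not from a filing.
revenue = 77e9                # ~$77B annual revenue
revenue_per_employee = 700e3  # ~$700k revenue per employee
net_margin = 0.25             # ~25% net income margin

implied_headcount = revenue / revenue_per_employee
implied_net_income = revenue * net_margin

print(f"Implied headcount: {implied_headcount:,.0f}")          # ~110,000
print(f"Implied net income: ${implied_net_income / 1e9:.2f}B")  # ~$19B
```

The implied ~110k employees and ~$19B net income are in the right ballpark for Intel around that period, so the comment's numbers hang together.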
> Oh, and coming to meetings unprepared is the norm. No one will read your emails briefing them about the meeting ahead of time.

> A senior person once told me "You explained things too well[...] If they have trouble understanding it, they'll be in awe."
Wise words. In cultures that do this, you have to adapt and work more on narrative/story-telling. In many ways, this is how things actually work no matter how efficient you think you're making the team. Instead, write the same emails, but only to gather your thoughts. Then, lead the discussion. In large teams, it feels like this approach reminds me to avoid doing someone else's work.
> The unwritten rule is that managers aren't incentivized to police this behavior.
Unless it is the manager who has to constantly do the reminding. Then there is swift policing :-)
Yes, this is actually normal "human" behavior. But this level of extreme was ridiculous, even within Intel. I quickly left the team once the project was over. Life is too short.
> Wise words. In cultures that do this, you have to adapt and work more on narrative/story-telling.
Narrative/story telling is good, but it is orthogonal to the issue here. The usual flow is to use narrative/story telling to explain the why (motivation, etc.). However, some senior management will expect you to also talk about the details. And this is where the advice came in: "Put in the details, and make sure they don't understand them." My sin was that I was presenting the details in a manner where they could understand them (without losing the nuances and details; I was merely presenting the same material "well").
A more severe example will enlighten: I once solved a challenging problem with a really simple solution. My manager had multiple sessions with me to coach me on how to present that simple solution in a much more complex way. He emphasized that senior management should not realize that the solution was simple - no matter how impactful it was.
Yes - this is also a general "human" problem, and is common in lots of places. However, when you're striving to be the best company in X, it is wise not to settle for "average".
Which has nothing to do with the methods of kingdom building addressed above. There is a massive difference between deceitfully contriving information asymmetry between departments and dispassionately directing a company's strategic objectives.
Someone who wanted to hang around fitness people instead of datacenter people. Depressingly often, strategy is based on the personal preferences of upper management.
I didn't understand why Google acquired Boutiques.com in 2010, but now it makes so much more sense. A lot of hot people joined the company with designer clothes, even though they didn't have to go through the usual interview process.
> A lot of hot people joined the company with designer clothes, even though they didn't have to go through the usual interview process.
That sounds ludicrous. If true, it has to count up there as among the most expensive way to get at "OPP" (in the Naughty By Nature sense). I don't see how the BoD and all sorts of DD processes could have signed off on it. Then again, I simply don't know how politics and power works at those levels, so I'd certainly welcome enlightenment.
I believe it, because I've seen it. At some level the marketing managers just have carte blanche to hire hot people to be the face of the company. Google/Alphabet is not special in this regard.
Personally, I think Gelsinger ran VMware into the ground. He completely missed the boat on cloud. Now he's taking over a company steeped in bureaucracy that's several generations behind on chip fabrication. They are still trying to ramp up 10nm while TSMC is unveiling 4nm any day now. I want to see Intel succeed, because we need a large US-based fabricator if China gets more aggressive on Taiwan, but it's going to take a lot of housecleaning.
>that's several generations behind on chip fabrication. They are still trying to ramp up 10 nm while TSMC is unveiling 4 nm any day now.
It's worth noting that Intel 10nm can't be directly compared with TSMC 4nm. AFAIK Intel 10nm approximately equals TSMC 7nm. TSMC 4nm sounds like an iterative refinement of 5nm, so Intel is only really one to two generations behind, not "several".
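One rough way to see why the node names mislead is to compare widely reported logic-transistor density estimates. The numbers below are ballpark public figures (they vary by source and by cell library, and are not official vendor specs), but the relative ordering makes the point:

```python
# Approximate peak logic density in million transistors per mm^2.
# These are rough public estimates, not official vendor figures.
density_mtr_mm2 = {
    "Intel 14nm": 37.5,
    "TSMC N7":    91.2,
    "Intel 10nm": 100.8,
    "TSMC N5":    171.3,
}

# Normalize against Intel 10nm to show the names don't track density.
baseline = density_mtr_mm2["Intel 10nm"]
for node, d in sorted(density_mtr_mm2.items(), key=lambda kv: kv[1]):
    print(f"{node:>10}: {d:6.1f} MTr/mm^2  ({d / baseline:.2f}x Intel 10nm)")
```

By these estimates, Intel's "10nm" is actually slightly denser than TSMC's "7nm", which is why counting generations by marketing name alone says little.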
Worth noting that TSMC is relentlessly incremental. While Intel wanted to make a bigger, slower jump from 14 to 10, TSMC did an N7, an N7+ (not to be confused with N7P), an N6, N5, and N5P, and later this year, N4 is on the roadmap.
So counting "generations" is a little abstract anyways. You should probably look at the PPA numbers and the kind of scaling boosters they've managed to ship, if you want to compare fairly.
TSMC 7nm = Intel 10nm is a crazy old trope that dates back to before everyone realised that Intel couldn't deliver 10nm. I don't know why people keep repeating it.
In your second link there are power-at-iso-speed comparisons within TSMC's nodes, and TSMC has 3 revisions of their N7 with 3 different power usages. Presumably, Intel can do similar things within their generations too.
The industry should really use a different metric, like transistor count per area, because in 5 years TSMC will need to either go to negative nm or step from 0.99 down to 0.001 nm to keep the marketing slogans going.
That would only make sense if all transistors were the same size.
They're not. Not even close. Especially, in combinational logic they're all over the place.
When Verilog is being turned into gates, there are many arguments the engineer can give the tool, and they will give very different results. It's kinda like picking which optimizations get turned on during compile, like picking speed or binary size, although not exactly like that.
The industry tried using bit/area, because an important advantage of small transistors is more memory/area, but that has basically all the same complications as well so it doesn't really work like one wants.
So just counting transistors can be totally unfair when two designs are made for totally different applications.
In the end, picking a node is a complicated, multi-dimensional decision. Often, and more often than the public thinks, the smallest one is the wrong choice. Note: Google's TPU is at least a generation behind the bleeding edge, if not two, and not because Google couldn't afford it if they wanted to.
So, basically, expect the marketing names for process nodes to continue for the foreseeable future.
If you don't want to go through the whole thing, note slide 17 especially. Those are all the same boolean function, but they have very different areas, speeds, and power. Which one the tool picks will be (mostly) controlled by the designer.
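To make the parent point concrete: a bare transistors-per-area number tells you nothing about which cells a design used, so two dies on the same node can report very different "densities". A toy sketch (all numbers invented for illustration, not real foundry or product data):

```python
# Toy illustration of why "transistor count / area" is ambiguous as a node
# metric: it depends heavily on which cells a design uses. All numbers below
# are invented for illustration, not real foundry or product data.

def density_mtr_per_mm2(transistors: int, area_mm2: float) -> float:
    """Millions of transistors per square millimetre."""
    return transistors / 1e6 / area_mm2

# Two hypothetical dies on the *same* process node:
mobile_soc = density_mtr_per_mm2(transistors=10_000_000_000, area_mm2=100.0)  # mostly high-density cells
hpc_chip = density_mtr_per_mm2(transistors=5_000_000_000, area_mm2=100.0)     # larger high-performance cells
print(f"mobile SoC: {mobile_soc:.0f} MTr/mm^2")
print(f"HPC chip:   {hpc_chip:.0f} MTr/mm^2")
```

Same node, half the "density", purely because of cell choice, so the raw number can't rank processes on its own.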
The figure in your linked page looks like it is based on data from a survey in which participants were asked which number they thought of when they heard the word 'several'. Even if everyone taking the survey was aware that the word 'several' is inclusive of two, I'm not sure we would expect a different result. So, the results can't be used as evidence that anybody taking the survey thought 'several' excludes two.
I've suffered their virtual cloud solution professionally, so I concur. And not only did they fail: while they focused on providing Software Defined Data Center solutions to enterprises, the enterprises were already trying to figure out how to actually kill the infrastructure cost. It wasn't so much a software feature problem as a hardware cohesion problem, which VMware weren't even very good at providing support for, to make their software be somewhat on par with, say, AWS, which itself isn't particularly shiny in terms of performance.
We were getting sub-ms latency between applications on the same DC site, across all layers up to the application. With VMware we were well over 2ms at the virtualised network level, with only marginal improvements on that front observed after 2 years of back-and-forth support projects.
>we need large US-based fabricator if China gets more aggressive on Taiwan
So take that concept further of what happens if/when China makes that move. What happens if TSMC's fabs are sabotaged/destroyed during this scenario? This would make today's chip shortage seem like just a blip. Would that be the thing that makes Intel relevant again? If this current shortage actually motivates US based chip manufacturing equal to TSMC, the loss of TSMC would obviously not be as severe. How much would China benefit from being the sole customer of TSMC at that point? Time to crowd source a new scifi/techfi plot
Now that you mention it, it's a little odd that VMware isn't used by any of the major cloud providers while they also haven't created their own Cloud Provider. It seems like they are very few hires away from a core competency in competing with at least GCP or Azure.
Makes me wonder why. Is it because they still see a big future in on-prem and can't really embrace cloud computing until that future looks less certain?
Because cloud is commoditized VMs and VMware doesn’t have an edge in managing cattle at scale.
Their bread and butter is in managing pets and cloud providers expressly don’t want to encourage people to bring their pets into the cloud because people get mad when a few get killed.
OpenStack with the KVM backend is better than VMware at multi-tenancy so there really isn’t much value for VMware to add even over open source state of the art IaaS.
As someone who works for a non-major datacenter: we use VMware and have evaluated OpenStack. Lots of moving parts in OpenStack, lots of administrative burden. So we don't use OpenStack.
OpenStack, VMware and all the rest should go through a process that audits and certifies them, like what is done for transport systems on and off the ground, while not getting in the way of step-level improvements that introduce innovative new things.
Once you start charging enterprise prices you can't stop, but public clouds can't pay enterprise prices. It would have appeared to be financial suicide for VMware to move downmarket into public cloud.
Enterprise costs are negotiable. I can easily imagine Google paying VMware several billion per year for a product they want. I can’t imagine Google using ESX — they want a product they fully control.
I don't believe any bigger cloud providers want to adopt closed unmodifiable products as a core. Windows/Hyper-V is similar from outside but it's in house for Azure(MS).
The cloud is a capex game, not a tech game. Also, the layer VMware competes in, infrastructure as a service, is fully commoditized. The game is moving to higher layers (e.g. ML, databases).
I think that for on-prem they faced a threat from containers, which they are embracing by betting the company on Kubernetes.
What is blocking this industry in the US? If it's staff, surely some kind of special visa program combined with tax breaks could be a pretty generous offer to TSMC to setup shop onshore.
The Taiwanese industry people want to go to China because that is where the market will go.
So much so that Taiwan had to block Chinese recruitment centers and, if I'm not mistaken, restrict travel for some of those key people.
That is very much like running it into the ground. They could have had a strong platform to pivot existing customers to cloud solutions, or made on-premises better. Instead a book retailer pivoted to that.
Every time Intel has a "revolution" they wind up pointed the same way, with less reason for hope. They've lived off blocking competition and stomping innovation for so long they have no idea how to do anything else. Kill the rotted zombie corpse and let people with ideas do shit.
Unfortunately, the most convincing of those people are Apple, who are manifestly trying to kill the personal computer as a (more or less) open software system even if they left themselves some space to backtrack. Though Intel are far from blameless in that department as well, they look far less likely to actually succeed. I find it hard to cheer for any of the players here.
I'm quite bullish on Intel. Their upcoming products are awesome (at least according to rumors and what they've announced), and heterogeneous computing is a bigger deal than most realize. AMD's Ryzen 3 and Apple's M1 get a lot of love, and many speculate on what's coming next. I think there's room for several players in this space.
> "We evaluated Intel's latest generation of 'Ice Lake' Xeon processors," Howells wrote. "Although Intel's chips were able to compete with AMD in terms of raw performance, the power consumption was several hundred watts higher per server – that's enormous."
The *Lake era has turned out to be Intel's decline into Bulldozer-style design issues. The chips might perform fine, but they're too hot, too outdated and too expensive.
Errr... yes and no. The Equinixes and Digital Realtys of the world still operate DCs that have a lot of low-hanging fruit in terms of efficiency. Running your DC at 21C in a tropical or subtropical climate is not efficient, but it's part of the SLA. Owner-operators like FB and Goog run their DCs warmer. The equipment can handle higher operating temps (to a point), and the savings in IT load make up for the uptick in failures.
I'm sure regionally there are differences, but by far the most important considerations are power and cooling, both of which are a function of efficiency.
Iceland is quite attractive and has a few data centers (Google for sure and, IIRC, some crypto companies).
For Cloudflare, the workload is bandwidth intensive, so it's a bad battlefield for Intel because AMD offers 128 PCIe lanes on a single socket. It seems that even Sapphire Rapids doesn't look good there. But that weakness doesn't apply to all use cases.
What they want is high throughput per watt, just as they say; hundreds of watts more per unit adds up to a lot at scale. Infrastructure providers are eating up business purchases, and they care that what used to be a blip in accounting, the electricity and real-estate bill, is now a significant part of their costs, which must be optimised to compete on margin. Energy efficiency is more and more important. Intel won Apple's love with the Core architecture, killing IBM/Motorola in the mobile space. I think the same fate awaits Intel in the server space if they don't deliver an energy-efficient architecture. Given the headway Apple silicon has made recently, it seems improbable that Intel will catch up.
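Back-of-the-envelope, "hundreds of watts more per server" turns into real money fast at fleet scale. A sketch with assumed round numbers (fleet size, electricity price and PUE here are made up, not Cloudflare's actual figures):

```python
# Back-of-the-envelope cost of a few hundred extra watts per server at fleet
# scale. Fleet size, electricity price and PUE below are assumed round
# numbers for illustration, not any real operator's figures.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_extra_cost(extra_watts: float, servers: int,
                      usd_per_kwh: float, pue: float = 1.5) -> float:
    """Yearly electricity cost of the extra draw, incl. cooling overhead (PUE)."""
    extra_kw = extra_watts / 1000 * servers * pue
    return extra_kw * HOURS_PER_YEAR * usd_per_kwh

cost = annual_extra_cost(extra_watts=300, servers=10_000, usd_per_kwh=0.10)
print(f"~${cost:,.0f} extra per year")  # ~$3,942,000
```

Roughly $4M/year on these assumptions, every year, for the same work done, which is why perf/watt now drives server purchasing.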
On paper, Sapphire Rapids is a large step forward.
Compute tiles to offer larger core counts (while maintaining a monolithic topology), 512 reorder buffer entries (up from 354), 6 decode (up from 4), 2 more execution units, chips with HBM memory...
AMX might be interesting, too, although I currently stick to `Float32`+ precision, so I don't have use at the moment for <=16-bit FLOPS.
Overall, Sapphire Rapids looks very interesting to me.
Of course, being built on an upgrade of the same node as Ice Lake means heat/energy use aren't getting better, so perf/watt gains will probably mostly have to happen on the perf side of things.
Tiger Lake clocks much higher than Ice Lake, but I'm not sure how their energy use compares at the same clock speeds. Sapphire Rapids is another node upgrade on top of Tiger Lake.
Ice Lake Server is still made on the 2nd variant of the 10-nm Intel process (the 1st being that used in the pathetic Cannon Lake).
Because of that, it is much less efficient than Tiger Lake, which is made on the 3rd variant of 10-nm Intel process. Tiger Lake is still less efficient than the CPUs made with the TSMC 7-nm process, like Zen 3.
Sapphire Rapids will be made with the 4th variant of the 10-nm Intel process, the same as Alder Lake, now renamed as "Intel 7".
So Sapphire Rapids is 2 generations above the current Ice Lake Server and it should be much more efficient. We still have to see how Alder Lake will behave, but there is a chance that "Intel 7" might be more efficient than the now old 7-nm TSMC process.
However, by the time Sapphire Rapids becomes available next year, AMD will probably be about to launch Zen 4 on the 5-nm TSMC process, so AMD might again be ahead of Intel.
These days if I hear rumors that Intel might be about to do something bold, I don't hold my breath. I'm not saying they definitely won't find a way to recover from this current stagnation, but I've been disappointed enough times now that I'll believe it only if and when I see it on shelves and in real world benchmarks.
Isn't Intel's loss of the lead largely because their fabs don't produce as many square metres of chips as TSMC and Samsung's fabs? Once those guys had more scale it was only a matter of time until their fabs became better than Intel's. Once they took the lead, the problem only compounds, since people are switching to AMD or ARM based chips and that moves even more of Intel's scale to TSMC and Samsung. It seems like a vicious circle for Intel.
Admittedly Intel having its own fabs looks good while everyone else is struggling for capacity, but I assume that will be fixed eventually. When it is, I expect Intel's decline to hasten.
Intel are already experimenting with using other people's fabs. I think this is their best path.
Excellent range of commentary on this topic - no matter where one stands regarding Intel, it is an important player in the great game of semiconductor strategy, given the supply chain logistics mess due to the pandemic and the ongoing US/China rivalry. A lot at stake here.
They have grand plans. They've tried these exact same grand plans a decade ago and failed badly.
The culture hasn't changed. In fact the minority that warned of exactly what has happened to Intel - the Cassandras who got it right - have all already been driven out of the company. It's now just one single group-think of fail.
The proof is in the results which we will NOT KNOW for at least 10, maybe 20 years. The signs so far are not encouraging.
Microsoft is a very bad comparison given in the article.
Microsoft haven't found a radically better way to sell MS Office or Windows. They've just leveraged what Mr. Ballmer built, an extensive sales network and monster salesmen who make car salesmen look nice, to upsell big, fat clients into buying grossly overpriced web hosting (Azure).
The best direct comparison would be if Intel starts force selling their corporate buyers an extremely overpriced Windows clone.
We had a visit by one MS salesman trying to force-feed us 365 and cloud AD instead of our self-hosted AD. We almost had him escorted out.
Honestly, it's 100% AMD that's going to turn this around for Intel. AMD has stopped selling mid-market and down-market. Their new affordable 5300G chip is unpurchasable. Their GPUs are unobtainable in the extreme, or radically overpriced.
AMD still has a good presence in affordable, good-performing laptops, but that's it. Everywhere else, they're basically only courting the high end. They got better and immediately stopped trying to serve the mainstream markets.
For example, go try to buy a business-class small PC. There was a brief second where one could buy a 1L-sized Lenovo with a 4850GE chip, and it was awesome, way better than Intel. But AMD very, very quickly withdrew from price-competitive markets.
Because fab space has been at a high premium for some time, many manufacturers are focusing on producing higher-priced (and higher-margin, higher-profit) items; maybe that is why you see less availability of down-market AMD products. Also, AMD was pretty much sold out across the board at the beginning of the year; there was basically no availability of the 5000 series at most retailers, and limits on CPUs per customer when they did come in stock.
Once the semiconductor situation settles down and it all goes back to normal, you'll probably see more availability of AMD on the lower tiers - the situation is already getting better with most of the 5600 (and higher) series of cpus back in stock now.
Sort of, but this is also sort of the point of owning your own fab!
There are others. What happens if TSMC has an unchallenged kingmaking node? Do they let AMD/Intel or AMD/NVidia keep their fat margins? Or do they say "the node makes the king -- start bidding"?
I'm still on the waitlist for the 5950 since Christmas. I bought an AMD laptop meanwhile, but never bothered removing my name from the list. I should start a pool on how long it takes before they contact me.
FWIW, 5950x started being regularly in stock at the local BestBuy (USA, California, Bay Area) so depending on where you are, you might want to look around.
As for me, ironically, once I could finally get it, I decided that I might as well await the next spin. Chipmageddon appears to have taught me patience.
It's amazing how Intel, with exclusive access to their own massive fabs, has utterly squandered this chip shortage, even in the midst of it all hitching its wagon to TSMC, the most supply-constrained fab of all. The stock is just down, down, down, and in 2024 they'll announce another far-behind energy guzzler. Some turnaround.
I think Intel is shipping every chip they can make at 14 nm. They are selling more chips than ever before. If they could get newer designs and processes out, that would be nice too, but I don't think they are sitting on unused production capacity.
I don't have a source, other than their financials.
Well and this:
https://www.tomshardware.com/news/intel-cpu-shortage-tsmc-ou...
The problem is Intel makes chips for Intel and nobody else. Sure, they are starting to open up again, but there are only so many use cases for x86 CPUs.
Literally every single chip that is in shortage is something Intel does not and will not make. Well, they do have Altera, but still.
Intel's been deep into their own shortage for years, because 10nm was almost unusable for most of its life. 2020 just brought everyone to their level, it didn't give them an advantage.